<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.nomorebluescreen.de/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Sunflower</id>
	<title>Linuxwiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.nomorebluescreen.de/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Sunflower"/>
	<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=Special:Contributions/Sunflower"/>
	<updated>2026-05-05T13:08:55Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.8</generator>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=675</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=675"/>
		<updated>2026-04-27T14:30:05Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* pvc */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands that work since (at least) version 4.11&lt;br /&gt;
&lt;br /&gt;
updated for version: 4.19&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 $ oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Resources =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources --namespaced=false                 (cluster-scoped resources only)&lt;br /&gt;
 $ oc api-resources --api-group=config.openshift.io    (OpenShift-specific group)&lt;br /&gt;
 $ oc api-resources --api-group=&#039;&#039;                     (core API group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys, etc. before sending them anywhere)&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and PVCs are not included in &amp;quot;oc get all&amp;quot; and must be listed separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
= Nodes =&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (optionally of a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs limits:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic view, use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Draining nodes ==&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before a reboot):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not strictly necessary: draining, see below, cordons the node as well)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Machines =&lt;br /&gt;
&lt;br /&gt;
Show machines (including their age):&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Machinesets ==&lt;br /&gt;
&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
== Delete and re-create machines/nodes ==&lt;br /&gt;
 $ oc get machines -A | grep worker-&amp;lt;XY&amp;gt; | wc -l&lt;br /&gt;
-&amp;gt; MACHINECOUNT&lt;br /&gt;
 $ oc annotate machine/&amp;lt;machine-name&amp;gt; -n openshift-machine-api machine.openshift.io/delete-machine=&amp;quot;true&amp;quot;&lt;br /&gt;
 $ oc scale --replicas=&amp;lt;$MACHINECOUNT+1&amp;gt; machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
 $ oc scale --replicas=$MACHINECOUNT machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
= Projects/Namespaces =&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com &lt;br /&gt;
* registry.redhat.io  (with login only)&lt;br /&gt;
* quay.io&lt;br /&gt;
* docker.io&lt;br /&gt;
&lt;br /&gt;
= Images =&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look into images:&lt;br /&gt;
 oc image info registry.redhat.io:8443/ubi8/httpd-24:1-209 (-o json | jq -r .digest)&lt;br /&gt;
&lt;br /&gt;
Update the image of a running deployment:&lt;br /&gt;
 oc set image deployment/mydb mariadb-80=docker.io/ubuntu18/mysql-80:1-228&lt;br /&gt;
&lt;br /&gt;
Inspect images directly on a node:&lt;br /&gt;
 crictl images&lt;br /&gt;
 crictl ps --name httpd-24 -o yaml&lt;br /&gt;
 crictl images --digests &amp;lt;shasum&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you have an account for a registry:&lt;br /&gt;
&lt;br /&gt;
  skopeo login &amp;lt;registry&amp;gt;:8443 -u &amp;lt;username&amp;gt;&lt;br /&gt;
  skopeo inspect docker://registry.redhat.io:8443/ubi8/httpd-24:1-209&lt;br /&gt;
  skopeo inspect --config docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Add the &amp;quot;latest&amp;quot; tag to a specific image:&lt;br /&gt;
 skopeo copy docker://registry.redhat.io:8443/ubi8/httpd-24:1-215  docker://registry.redhat.io:8443/ubi8/httpd-24:latest&lt;br /&gt;
&lt;br /&gt;
== Create pod from image ==&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
= Apps =&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
= Deployments =&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
== Environment variables ==&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1  MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
 oc set env deployment/mariadb --from=secret/my-secret (--prefix=MYSQL_)&lt;br /&gt;
&lt;br /&gt;
== Restart deployment after change ==&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
in older versions the deployment resource had no rollout option -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path /var/www/html&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path /var/lib/mysql (--claim-class &amp;lt;storage class&amp;gt; --claim-mode rwo|rwx --claim-size 1G)&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make deployment available from inside/outside ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
== Add probes ==&lt;br /&gt;
Configure readiness probe for deployment:&lt;br /&gt;
 $ oc set probe deployment/&amp;lt;testdeploy&amp;gt; --readiness --failure-threshold 7 --get-url http://:3000/api/health&lt;br /&gt;
&lt;br /&gt;
== Autoscale Pods ==&lt;br /&gt;
 $ oc autoscale deployment/test --min 2 --max 10 --cpu-percent 80&lt;br /&gt;
&lt;br /&gt;
== Reduce/Upgrade cpu/mem requests ==&lt;br /&gt;
Reduce memory requests:&lt;br /&gt;
 $ oc set resources deployment/huge-mem --requests memory=250Mi&lt;br /&gt;
&lt;br /&gt;
== Security ==&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers listen on port 80, which leads to permission problems in OpenShift because the default security context constraints do not allow apps to bind to privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
* or add permissions to the corresponding service account&lt;br /&gt;
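If you go the unprivileged-port route, the change inside the image is usually a one-liner in the web server config. A sketch for Apache httpd (the file path and the port are assumptions, not taken from this wiki):

```
# /etc/httpd/conf/httpd.conf (path varies by image)
# Bind to an unprivileged port instead of 80:
Listen 8080
```

The service and route/ingress then have to target port 8080 accordingly.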
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(to remove this setting again, edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Pods =&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers:&lt;br /&gt;
 $ oc adm top pods -n openshift-etcd --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get a pod&#039;s readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
= Jobs and Cronjobs =&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &#039;if [ $(date +%H) -gt 15 ]; then echo &amp;quot;Hands up, weekend!&amp;quot;; fi&#039;&lt;br /&gt;
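The hour check embedded in that cronjob can be tried out locally; this is only a sketch of the logic, with the hour passed in as a parameter instead of coming from date +%H:

```shell
# Stand-alone version of the cronjob's hour check.
hour_check() {
  # $1: hour of day (00-23); in the cronjob this comes from `date +%H`
  if [ "$1" -gt 15 ]; then
    echo 'Hands up, weekend!'
  fi
}

hour_check 16   # prints the message
hour_check 09   # prints nothing
```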
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Secrets =&lt;br /&gt;
== Create Secret ==&lt;br /&gt;
=== from String ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;test&amp;gt; --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
=== from file ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;sshkeys&amp;gt; --from-file id_rsa=/path-to/id_rsa --from-file id_rsa.pub=/path-to/id_rsa.pub&lt;br /&gt;
&lt;br /&gt;
=== as TLS secret ===&lt;br /&gt;
 $ oc create secret tls &amp;lt;secret-tls&amp;gt; --cert /tmp/mydomain.crt --key /tmp/mydomain.key&lt;br /&gt;
&lt;br /&gt;
=== Update Secret ===&lt;br /&gt;
 $ oc set data secret/&amp;lt;mysecret&amp;gt; --from-file /tmp/root-password&lt;br /&gt;
&lt;br /&gt;
=== Extract secret ===&lt;br /&gt;
 $ oc extract secret/&amp;lt;mysecret&amp;gt; --to /tmp/mysecret (--confirm)&lt;br /&gt;
&lt;br /&gt;
= Configmaps =&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets (OpenShift SDN only)&lt;br /&gt;
 $ oc get egressips&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Changes with &#039;&#039;&#039;patch&#039;&#039;&#039; command =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
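The base64 step in the patch above can be verified locally before touching the cluster. A small sketch, assuming GNU coreutils (the file path is just an example):

```shell
# Build the JSON patch payload for the secret from a local file.
# Create a stand-in config file first (in reality: your alertmanager.yaml).
printf 'global:\n  resolve_timeout: 5m\n' > /tmp/alertmanager-demo.yaml

# base64 -w0 keeps the output on one line, as required inside JSON.
ENCODED=$(base64 -w0 /tmp/alertmanager-demo.yaml)
PATCH='{"data": {"config.yaml": "'$ENCODED'"}}'
echo "$PATCH"

# Round-trip check: decoding must give back the original file content.
echo "$ENCODED" | base64 -d
```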
&lt;br /&gt;
== Examples ==&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug pod (e.g. if crashloopbacked):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of systemunit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/Readiness Probes of all pods in certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | egrep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://docs.redhat.com/en/documentation/openshift_container_platform/4.21/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(check for secrets and delete them if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
The namespace gets stuck in status &amp;quot;Terminating&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Watch out for leftover secrets that cannot be deleted.&lt;br /&gt;
Clear the finalizers:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the host.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines under the container&#039;s &amp;quot;spec&amp;quot; entry:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;container-name&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must grant the anyuid SCC to the service account under which the deployment runs. If nothing is configured, this is normally the &amp;quot;default&amp;quot; service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the installer; the user only has to provide some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User-provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already exist&lt;br /&gt;
* manual installation; download the OVA file (in case of vSphere)&lt;br /&gt;
* masters are created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, uses preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
Changing from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can get more shortcuts by typing:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip ||installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=Emailserver_mit_Postfix_und_Dovecot&amp;diff=674</id>
		<title>Emailserver mit Postfix und Dovecot</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=Emailserver_mit_Postfix_und_Dovecot&amp;diff=674"/>
		<updated>2026-03-18T22:50:09Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* IMAP konfigurieren */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Postfix =&lt;br /&gt;
&lt;br /&gt;
Postfix is an MTA (Mail Transfer Agent) and a good alternative to other common MTAs (Sendmail, Exim) because its configuration is easy to read.&lt;br /&gt;
In our example, the MTA is combined with an IMAP daemon (Dovecot) so that users have a mailbox directly on the system. Mail is retrieved via IMAPS.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
First, the package &amp;quot;postfix&amp;quot; has to be installed.&lt;br /&gt;
A few questions have to be answered during the installation:&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Type of server:&#039;&#039;&#039; Internet Site&lt;br /&gt;
* &#039;&#039;&#039;Root and postmaster mail recipient:&#039;&#039;&#039; enter a mailbox, e.g. postmaster@example.de&lt;br /&gt;
* &#039;&#039;&#039;Other destinations to accept mail for (blank for none):&#039;&#039;&#039; e.g. mail.example.de, localhost, $mydomain (the defaults are fine to start with)&lt;br /&gt;
* &#039;&#039;&#039;Force synchronous updates on mail queue:&#039;&#039;&#039; no&lt;br /&gt;
* &#039;&#039;&#039;Local subnets:&#039;&#039;&#039; 127.0.0.1/8, 192.168.63.0/24 (add your own network here)&lt;br /&gt;
* &#039;&#039;&#039;Mailbox size limit:&#039;&#039;&#039; 0 (unlimited)&lt;br /&gt;
* &#039;&#039;&#039;Local address extension character:&#039;&#039;&#039; + (the default is usually sufficient)&lt;br /&gt;
* &#039;&#039;&#039;Internet protocols to use:&#039;&#039;&#039; all (unless you explicitly want only IPv4 or IPv6)&lt;br /&gt;
&lt;br /&gt;
These settings can be changed at any time with:&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
 # dpkg-reconfigure postfix&amp;lt;/console&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All relevant files are located in the directory &#039;&#039;/etc/postfix&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Konfiguration ==&lt;br /&gt;
&lt;br /&gt;
Before we get to the Postfix config, we check the content of the file /etc/mailname:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 $ cat /etc/mailname&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It must contain &#039;&#039;&#039;only the domain name&#039;&#039;&#039;, not the hostname (e.g. example.com). Otherwise mail delivery can be affected, especially when main.cf (see below) references this file.&lt;br /&gt;
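A quick sanity check for this, sketched in shell: derive the domain part from an FQDN the way /etc/mailname expects it (the hostname here is a stand-in):

```shell
# /etc/mailname must hold the bare domain, not the FQDN.
FQDN=mx01.example.de        # stand-in for $(hostname -f)
DOMAIN=${FQDN#*.}           # strip everything up to the first dot
echo "$DOMAIN"              # -> example.de
```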
&lt;br /&gt;
The most important file to adapt first is the &#039;&#039;&#039;main.cf&#039;&#039;&#039;. Here is an example for the server &amp;quot;mx&amp;quot; in the domain example.de. The following parameters should be configured (shown here as examples):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 myhostname&lt;br /&gt;
 mydomain&lt;br /&gt;
 myorigin&lt;br /&gt;
 mydestination&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually there are already some usable defaults. The parameter &#039;&#039;mynetworks&#039;&#039; allows the listed networks to submit mail without further restrictions.&lt;br /&gt;
&lt;br /&gt;
Example config:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
myhostname = mx01.example.de &lt;br /&gt;
mydomain = example.de &lt;br /&gt;
myorigin = $mydomain &lt;br /&gt;
mydestination = $myhostname, localhost, localhost.$mydomain&lt;br /&gt;
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.99.0/24 [2001:780:11b::/48] 214.94.24.154 [2004:780:8:0:5ff0:c5ff:fe09:98f9]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter &#039;&#039;&#039;mynetworks&#039;&#039;&#039; in particular should be chosen with care, because it lists all hosts and networks from which any mail is accepted (including mail from &#039;&#039;and&#039;&#039; to foreign domains). A misconfiguration here quickly turns the server into an [https://practical365.com/what-is-an-open-relay open relay].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; IPv6 addresses must be written in [square brackets], otherwise no mail will be delivered!&lt;br /&gt;
Error in the logfile:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
postfix/smtpd[21928]: warning: smtpd_client_event_limit_exceptions: 2a01:40f8:c013:5050::0/64: table lookup problem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After every change, the service has to be reloaded with the command&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
 # postfix reload&lt;br /&gt;
&amp;lt;/console&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whether the reload worked properly can be checked e.g. in the log:&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
 # tail /var/log/mail.log&lt;br /&gt;
&amp;lt;/console&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== master.cf ===&lt;br /&gt;
The core of the Postfix daemon. This is where the transports are defined.&lt;br /&gt;
Meaning of the columns:&lt;br /&gt;
* service field: name of the service (smtp, local, procmail, ...) (str)&lt;br /&gt;
* type field: connection type (inet, fifo, unix) (str)&lt;br /&gt;
* private field: service accessible only within the mail system (default: y); n also allows access for external programs (bool)&lt;br /&gt;
* unpriv field: run as an unprivileged user (default: y) or as root (n) (bool)&lt;br /&gt;
* chroot field: run the service in a chroot environment (default: y) (bool)&lt;br /&gt;
* wakeup field: seconds between two invocations (default: 0 = never) (int)&lt;br /&gt;
* maxproc field: maximum number of simultaneous processes (default: 100, from default_process_limit) (int)&lt;br /&gt;
The columns are followed by the command with its flags and parameters (optional).&lt;br /&gt;
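&lt;br /&gt;
To illustrate the columns, here is the standard smtpd entry from a stock master.cf (a &amp;quot;-&amp;quot; means the default applies):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# service  type  private  unpriv  chroot  wakeup  maxproc  command + args&lt;br /&gt;
smtp       inet  n        -       -       -       -        smtpd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;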
&lt;br /&gt;
== TLS ==&lt;br /&gt;
Optionally, mail transfer can be encrypted using certificates. This only works if the mail server on the other side supports it as well. The certificate can also be imported into a mail client (see later).&lt;br /&gt;
Certificates can be generated with [https://letsencrypt.org letsencrypt]. Clients for certificate generation are [[Webserver_mit_Apache#Alternative_letsencrypt | certbot ]] or [[Webserver_mit_Apache#Dehydrated | dehydrated]].&lt;br /&gt;
&lt;br /&gt;
=== Certificate generation in brief ===&lt;br /&gt;
 # echo $HOSTNAME &amp;gt; /etc/dehydrated/domains.txt&lt;br /&gt;
 # dehydrated --register --accept-terms&lt;br /&gt;
 # dehydrated -c&lt;br /&gt;
&lt;br /&gt;
This requires a [[Webserver_mit_Apache | web server]] listening on port 80. If there is none, letsencrypt can alternatively be used via DNS (https://letsencrypt.org/docs/challenge-types).&lt;br /&gt;
&lt;br /&gt;
=== Alternative: your own CA (not recommended) ===&lt;br /&gt;
&lt;br /&gt;
If you insist on running your own CA, the following instructions show how. Caution: choose this approach only if there is a compelling reason. Many browsers and MUAs have trouble with it, produce ugly warnings, or refuse the connection altogether.&lt;br /&gt;
&lt;br /&gt;
==== Creating the CA ====&lt;br /&gt;
If no certificate exists yet, you can create one yourself, or create a CSR (Certificate Signing Request) and send it to an official CA. For a commercially operated mail server, the latter is the realistic route.&lt;br /&gt;
&lt;br /&gt;
Note: this step is not needed if a CA already exists.&lt;br /&gt;
&lt;br /&gt;
Steps:&amp;lt;br&amp;gt;&lt;br /&gt;
Create a key (and set a passphrase for it), create a certificate request for the CA, create the CA&lt;br /&gt;
&lt;br /&gt;
 $ openssl genrsa -out ca.key -des3 4096&lt;br /&gt;
  &lt;br /&gt;
 $ openssl req -new -x509 -days 3650 -key ca.key -out ca.crt&lt;br /&gt;
&lt;br /&gt;
 Enter pass phrase for ca.key: &lt;br /&gt;
 You are about to be asked to enter information that will be incorporated &lt;br /&gt;
 into your certificate request. &lt;br /&gt;
 What you are about to enter is what is called a Distinguished Name or a DN. &lt;br /&gt;
 There are quite a few fields but you can leave some blank &lt;br /&gt;
 For some fields there will be a default value, &lt;br /&gt;
 If you enter &#039;.&#039;, the field will be left blank. &lt;br /&gt;
 ----- &lt;br /&gt;
 Country Name (2 letter code) [AU]:DE &lt;br /&gt;
 State or Province Name (full name) [Some-State]:Bavaria &lt;br /&gt;
 Locality Name (eg, city) []:Nuernberg &lt;br /&gt;
 Organization Name (eg, company) [Internet Widgits Pty Ltd]:example.de &lt;br /&gt;
 Organizational Unit Name (eg, section) []:Hostmaster &lt;br /&gt;
 Common Name (e.g. server FQDN or YOUR name) []:*.example.de &lt;br /&gt;
 Email Address []:postmaster@example.de&lt;br /&gt;
&lt;br /&gt;
==== Creating a certificate with the new CA ====&lt;br /&gt;
Steps:&lt;br /&gt;
* Create a key&lt;br /&gt;
* Create a request&lt;br /&gt;
* Create and sign the certificate&lt;br /&gt;
&lt;br /&gt;
  $ openssl genrsa -out mx.example.de.key 4096&lt;br /&gt;
(do not set a password)&lt;br /&gt;
  &lt;br /&gt;
 $ openssl req -new -key mx.example.de.key -out mx.example.de.csr&lt;br /&gt;
(fill in the form again, as usual)&lt;br /&gt;
  &lt;br /&gt;
 $ openssl x509 -req -days 365 -in mx.example.de.csr -CA  ca.crt -CAkey ca.key -set_serial 01 -out mx.example.de.crt&lt;br /&gt;
&lt;br /&gt;
When renewing the certificate, the steps &amp;quot;create the CA&amp;quot; and &amp;quot;create the key&amp;quot; are no longer needed, and no serial has to be specified either.&lt;br /&gt;
The renewal command is therefore:&lt;br /&gt;
 $ openssl x509 -req -days 730 -in mx.example.de.csr -CA ca.crt -CAkey ca.key -out mx.example.de.crt&lt;br /&gt;
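&lt;br /&gt;
Whether the renewed certificate really carries the new validity period can be checked quickly:&lt;br /&gt;
 $ openssl x509 -noout -dates -in mx.example.de.crt&lt;br /&gt;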
&lt;br /&gt;
=== Adding the certificates to the config file ===&lt;br /&gt;
This step again applies to all certificates, no matter how they were created. The paths must of course be adjusted accordingly.&lt;br /&gt;
&lt;br /&gt;
Copy the certificates to the appropriate location and reference them in the config:&lt;br /&gt;
&lt;br /&gt;
 smtpd_tls_cert_file=/etc/ssl/certs/mx.example.de.crt &lt;br /&gt;
 smtpd_tls_key_file=/etc/ssl/private/mx.example.de.key &lt;br /&gt;
 smtpd_tls_CAfile=/etc/postfix/ca.crt &lt;br /&gt;
 smtpd_use_tls=yes &lt;br /&gt;
 smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_cache&lt;br /&gt;
 smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache&lt;br /&gt;
&lt;br /&gt;
The parameter smtpd_tls_CAfile can be omitted if an official CA is used (e.g. letsencrypt).&amp;lt;br&amp;gt;&lt;br /&gt;
The certificate paths may differ; with letsencrypt, for example, they live under &#039;&#039;/var/lib/dehydrated/certs/&#039;&#039;.&lt;br /&gt;
Verification:&lt;br /&gt;
 $ openssl s_client -connect mx.example.de:25 -starttls smtp&lt;br /&gt;
or https://www.checktls.com/&lt;br /&gt;
&lt;br /&gt;
Addendum:&lt;br /&gt;
Some MTAs want the certificate in pfx format. It can be generated from the cert file as follows:&lt;br /&gt;
 $ openssl pkcs12 -export -out mx.example.de.pfx -inkey mx.example.de.key -in mx.example.de.crt&lt;br /&gt;
Hand the pfx file and its password over to the user.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Spoiler:&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
[[ Emailserver_mit_Postfix_und_Dovecot#Dovecot | Dovecot ]] sometimes &amp;quot;forgets&amp;quot; the new certificate and keeps the old version, which means most mail clients stop fetching new mail. In that case, restart the Dovecot service.&lt;br /&gt;
&lt;br /&gt;
== SASL ==&lt;br /&gt;
&lt;br /&gt;
To avoid an open relay, it is strongly advisable to accept mail with an @example.de sender only from specific networks by default. This is done with the &#039;&#039;mynetworks&#039;&#039; parameter (see above).&lt;br /&gt;
However, users may want to send mail from a client somewhere on the internet, and according to this config they would not be allowed to. Since most PCs use dynamic addresses, adding individual IP addresses to the config makes no sense either.&amp;lt;br&amp;gt;&lt;br /&gt;
The problem can be solved by allowing mail submission once the user has successfully authenticated against the IMAP server.&lt;br /&gt;
This is what SASL is for. The corresponding entries in main.cf are:&lt;br /&gt;
 smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination&lt;br /&gt;
 smtpd_sasl_auth_enable = yes &lt;br /&gt;
 smtpd_sasl_type = dovecot &lt;br /&gt;
 smtpd_sasl_local_domain = $mydomain &lt;br /&gt;
 smtpd_sasl_security_options = noanonymous &lt;br /&gt;
 smtpd_sasl_path = private/auth&lt;br /&gt;
&lt;br /&gt;
In the mail client, this manifests as being asked for the mailbox password the first time a message is sent.&lt;br /&gt;
Before enabling this feature, an IMAP server must exist (see [[#Dovecot|next chapter]]).&lt;br /&gt;
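&lt;br /&gt;
Whether the server actually offers authentication can be tested by hand (hostname is an example): after the STARTTLS handshake, an EHLO should be answered with an AUTH capability line such as &amp;quot;250-AUTH PLAIN LOGIN&amp;quot;.&lt;br /&gt;
 $ openssl s_client -connect mx.example.de:25 -starttls smtp&lt;br /&gt;
 EHLO client.example.de&lt;br /&gt;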
&lt;br /&gt;
== Maps ==&lt;br /&gt;
&lt;br /&gt;
To better distinguish what happens with which sender/recipient addresses, the configuration is split into so-called &amp;quot;maps&amp;quot;. These can exist as plain-text files or as Berkeley DB files. In the latter case they must be converted with the &#039;&#039;&#039;postmap&#039;&#039;&#039; command after every edit.&amp;lt;br&amp;gt;&lt;br /&gt;
Exception: the file /etc/aliases.db (only relevant for local mail delivery) is generated with &#039;&#039;&#039;postalias&#039;&#039;&#039; or &#039;&#039;&#039;newaliases&#039;&#039;&#039;.&lt;br /&gt;
 alias_maps = hash:/etc/aliases &lt;br /&gt;
 alias_database = hash:/etc/aliases &lt;br /&gt;
This is where aliases are defined that map to another mailbox. Example:&lt;br /&gt;
 postmaster:    root&lt;br /&gt;
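&lt;br /&gt;
The complete round trip for the local alias table then looks like this (lookup key and result taken from the example above):&lt;br /&gt;
 # newaliases&lt;br /&gt;
 # postalias -q postmaster hash:/etc/aliases&lt;br /&gt;
 root&lt;br /&gt;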
&lt;br /&gt;
=== Access ===&lt;br /&gt;
 smtpd_sender_restrictions = hash:/etc/postfix/access &lt;br /&gt;
Actions for specific sender addresses can be defined here. Example:&lt;br /&gt;
 example.com     DISCARD&lt;br /&gt;
&lt;br /&gt;
=== Relocated ===&lt;br /&gt;
 relocated_maps = hash:/etc/postfix/relocated &lt;br /&gt;
&lt;br /&gt;
Rejects the mail with a notice. Example:&lt;br /&gt;
 testy.test      &amp;quot;Please send mail to ich@hier.de instead of this address&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Result:&lt;br /&gt;
 &amp;lt;testy.test@example.de&amp;gt;: Recipient address rejected: User has &lt;br /&gt;
 moved to &amp;quot;Please send mail to ich@hier.de instead of this&lt;br /&gt;
 address&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Canonical ===&lt;br /&gt;
==== Sender ====&lt;br /&gt;
 sender_canonical_maps = hash:/etc/postfix/sender_canonical &lt;br /&gt;
&lt;br /&gt;
Certain addresses are rewritten to a common standard format:&lt;br /&gt;
&lt;br /&gt;
 sunflower@example.de   petra.sonne@example.de&lt;br /&gt;
 phun@work.de		   peter.hun@example.de&lt;br /&gt;
&lt;br /&gt;
==== Recipient ====&lt;br /&gt;
 recipient_canonical_maps = hash:/etc/postfix/recipient_canonical &lt;br /&gt;
Works exactly like sender_canonical, but for recipient addresses.&lt;br /&gt;
&lt;br /&gt;
=== Virtual Mailbox ===&lt;br /&gt;
 virtual_mailbox_maps = hash:/etc/postfix/virtual&lt;br /&gt;
Locations of the mailboxes of the imap server (details under [[ Emailserver_mit_Postfix_und_Dovecot#Dovecot | Dovecot ]])&lt;br /&gt;
&lt;br /&gt;
 sunflower@example.de   	example.de/sunflower/ &lt;br /&gt;
 testy@example.de		example.de/testy &lt;br /&gt;
 test@example.de		example.de/test &lt;br /&gt;
 lmaa@ihr-koennt-mich-alle.de	ihr-koennt-mich-alle.de/lmaa&lt;br /&gt;
&lt;br /&gt;
=== Virtual Aliases ===&lt;br /&gt;
 virtual_alias_maps = hash:/etc/postfix/virtual_maps&lt;br /&gt;
&lt;br /&gt;
Addresses that are redirected to other addresses (similar to the aliases); this can also happen across domains.&lt;br /&gt;
This way, several recipient addresses can end up in the same mailbox.&lt;br /&gt;
&lt;br /&gt;
 anrufbeantworter@example.de      sunflower@example.de,H.Hirsch@gmx.de,harry1999@yahoo.de&lt;br /&gt;
 info@example.de        sunflower@example.de&lt;br /&gt;
 postmaster    root &lt;br /&gt;
 webmaster     root &lt;br /&gt;
 administrator root&lt;br /&gt;
 root    sunflower&lt;br /&gt;
 fortune:  fortune&lt;br /&gt;
The last entry is a pipe; more on that later.&lt;br /&gt;
&lt;br /&gt;
Convert the text into a DB file and have Postfix read it in:&lt;br /&gt;
 postmap &amp;lt;aliases|access|canonical|...&amp;gt;&lt;br /&gt;
 postfix reload&lt;br /&gt;
&lt;br /&gt;
=== Simple example: forwarding mail from one domain to another ===&lt;br /&gt;
Suppose we have a mail server 1 in the domain example.com. It should forward all mail arriving for &amp;lt;userXY&amp;gt;@example.com to &amp;lt;userXY&amp;gt;@example.de. Again, an entry in the &#039;&#039;virtual_maps&#039;&#039; file mentioned above is required:&lt;br /&gt;
 @example.com	@example.de&lt;br /&gt;
Now all example.com mail is forwarded to the mail server responsible for example.de. The user part stays unchanged.&lt;br /&gt;
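&lt;br /&gt;
Whether a map entry resolves as intended can be checked without sending any mail (note that postmap -q looks up the literal key):&lt;br /&gt;
 $ postmap -q @example.com hash:/etc/postfix/virtual_maps&lt;br /&gt;
 @example.de&lt;br /&gt;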
&lt;br /&gt;
=== Transports ===&lt;br /&gt;
Transports define how an incoming mail is handled, e.g. stored locally in a file, handed over to an IMAP server, or fed into a script.&lt;br /&gt;
&lt;br /&gt;
An example: &amp;lt;br&amp;gt;&lt;br /&gt;
When mail is sent to a certain address, a script should be executed that replies to the sender with a random quote &#039;&#039;&#039;and&#039;&#039;&#039; at the same time delivers the mail to a mailbox.&lt;br /&gt;
Steps:&lt;br /&gt;
&lt;br /&gt;
1. Define an alias (virtual_maps):&lt;br /&gt;
&lt;br /&gt;
 fortune@example.de           fortune&lt;br /&gt;
&lt;br /&gt;
2. Map the alias to a transport (transports):&lt;br /&gt;
&lt;br /&gt;
 fortune@example.de     randomphrase:&lt;br /&gt;
&lt;br /&gt;
3. Define the transport (master.cf):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 randomphrase      unix  - n n - - pipe &lt;br /&gt;
  flags=h user=vmail:vmail argv=/usr/local/bin/randomphrase.pl ${sender}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(The user vmail must of course exist in the passwd file, e.g. like this:&lt;br /&gt;
 vmail:x:4000:4000::/home/vmail:/usr/sbin/nologin&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
4. Install the script:&lt;br /&gt;
 /usr/local/bin/randomphrase.pl&lt;br /&gt;
and make it executable for everyone&lt;br /&gt;
&lt;br /&gt;
The script [[ randomphrase.pl ]] generates a random quote. The package &#039;&#039;fortune-mod&#039;&#039; must be installed for this.&lt;br /&gt;
To pass the mail on, the script /usr/local/bin/deliver_mail.sh is called. ([[File:Deliver_mail.sh]])&lt;br /&gt;
&amp;lt;br&amp;gt; For this, the user vmail must be authorized in the file &#039;&#039;/etc/sudoers.d/vmail&#039;&#039;:&lt;br /&gt;
 vmail ALL=(root) NOPASSWD: /usr/local/bin/deliver_mail.sh&lt;br /&gt;
&lt;br /&gt;
A mail to the address fortune@example.de now triggers a reply to the sender address containing a random quote.&lt;br /&gt;
&lt;br /&gt;
== Multidomain ==&lt;br /&gt;
&lt;br /&gt;
Postfix can of course accept mail for several domains. The parameter &amp;quot;virtual_mailbox_domains&amp;quot; exists for this:&lt;br /&gt;
&lt;br /&gt;
 virtual_mailbox_domains = example.de  example.com  ihr-koennt-mich-alle.de&lt;br /&gt;
The variable $mydomain should then be removed from mydestination.&lt;br /&gt;
&lt;br /&gt;
== Special DNS Records ==&lt;br /&gt;
=== SPF (Sender Policy Framework) ===&lt;br /&gt;
With a TXT resource record you can define the list of mail servers that are allowed to use the mail domain as sender. If somebody generates a fake mail from another system, it can be rejected.&lt;br /&gt;
&lt;br /&gt;
Example of a DNS TXT record:&lt;br /&gt;
 IN TXT  &amp;quot;v=spf1 mx:example.de a:foo.example.de ip4:8.15.47.11/32 ip6:2008:15:5:47::11/48 ip6:2008:15:5:47::12/48 -all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Howto: &amp;lt;br&amp;gt;&lt;br /&gt;
https://dmarcian.com/create-spf-record/ &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.open-spf.org/SPF_Record_Syntax/&lt;br /&gt;
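&lt;br /&gt;
The published record can be verified from any host (domain is an example):&lt;br /&gt;
 $ dig +short TXT example.de&lt;br /&gt;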
&lt;br /&gt;
Integrating SPF into Postfix:&lt;br /&gt;
&lt;br /&gt;
The domain is now protected against abuse by fake mails. But there is also the other side to consider: Postfix should likewise check the SPF records of other mail domains and reject mail if necessary.&lt;br /&gt;
https://makeityourway.de/enabling-spf-sender-policy-framework-checking-on-postfix/&lt;br /&gt;
&lt;br /&gt;
Summarized briefly, here is what to look out for:&lt;br /&gt;
 # apt install postfix-policyd-spf-python&lt;br /&gt;
The config file &#039;&#039;/etc/postfix-policyd-spf-python/policyd-spf.conf&#039;&#039; already ships with usable defaults; optionally a whitelist can be added, e.g.&lt;br /&gt;
&lt;br /&gt;
 Domain_Whitelist = example.com &lt;br /&gt;
Add to master.cf:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 policyd-spf  unix  -     n       n       -       -       spawn &lt;br /&gt;
  user=policyd-spf argv=/usr/bin/policyd-spf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add to main.cf:&lt;br /&gt;
 smtpd_recipient_restrictions =&lt;br /&gt;
 (...)&lt;br /&gt;
 check_policy_service unix:private/policyd-spf&lt;br /&gt;
 (…)&lt;br /&gt;
&#039;&#039;&#039;Caution:&#039;&#039;&#039; if a check_policy_service entry already exists, &#039;&#039;&#039;never&#039;&#039;&#039; append another entry to it; open a new line instead!&lt;br /&gt;
 policyd-spf_time_limit = 3600s&lt;br /&gt;
&lt;br /&gt;
 # postfix reload&lt;br /&gt;
&lt;br /&gt;
Feed in a few test mails and watch mail.log.&lt;br /&gt;
&lt;br /&gt;
=== DMARC (Domain based Message Authentication, Reporting and Conformance) ===&lt;br /&gt;
https://dmarcian.com/dmarc-record/&lt;br /&gt;
&lt;br /&gt;
Example of a DNS TXT record:&lt;br /&gt;
 _dmarc                  IN TXT      &amp;quot;v=DMARC1;p=quarantine;rua=mailto:postmaster@example.de&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In this case suspicious mails are moved to a quarantine folder and a report is sent to the postmaster.&lt;br /&gt;
For the Postfix integration there is the opendmarc package.&lt;br /&gt;
Implementing SPF, DKIM and DMARC in Postfix:&lt;br /&gt;
&lt;br /&gt;
https://www.skelleton.net/2015/03/21/how-to-eliminate-spam-and-protect-your-name-with-dmarc/&lt;br /&gt;
(untested)&lt;br /&gt;
&lt;br /&gt;
== Useful commands ==&lt;br /&gt;
Generate a database file from a text file:&lt;br /&gt;
 postmap &amp;lt;filename&amp;gt;&lt;br /&gt;
Show all config parameters:&lt;br /&gt;
  postconf&lt;br /&gt;
Check the configuration:&lt;br /&gt;
  postfix check&lt;br /&gt;
Inspect the mail queue:&lt;br /&gt;
  mailq&lt;br /&gt;
Deliver all messages in the queue:&lt;br /&gt;
  postqueue -f&lt;br /&gt;
Deliver only a specific message:&lt;br /&gt;
  postqueue -i &amp;lt;ID&amp;gt;&lt;br /&gt;
Delete a message:&lt;br /&gt;
  postsuper -d &amp;lt;ID&amp;gt;&lt;br /&gt;
Delete all messages (!):&lt;br /&gt;
  postsuper -d ALL&lt;br /&gt;
Inspect the content of a message:&lt;br /&gt;
  postcat -vq &amp;lt;ID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Logfile ==&lt;br /&gt;
&lt;br /&gt;
Everything is logged to &#039;&#039;/var/log/mail.log&#039;&#039;; errors additionally go to &#039;&#039;/var/log/mail.err&#039;&#039; and warnings to &#039;&#039;/var/log/mail.warn&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;Pro tip: create an alias:&lt;br /&gt;
 alias maillog=&#039;tail -f /var/log/mail.log&#039;&lt;br /&gt;
&lt;br /&gt;
== Greylisting und Antispam ==&lt;br /&gt;
&lt;br /&gt;
To fight the flood of spam there is the handy program &#039;&#039;&#039;&amp;quot;Postgrey&amp;quot;&#039;&#039;&#039;. On Debian it can be installed as a package. It is hooked into main.cf in the smtpd_recipient_restrictions section.&lt;br /&gt;
 smtpd_recipient_restrictions = &lt;br /&gt;
  permit_mynetworks&lt;br /&gt;
  permit_sasl_authenticated&lt;br /&gt;
  permit_tls_clientcerts&lt;br /&gt;
  reject_unauth_destination&lt;br /&gt;
  &#039;&#039;&#039;reject_non_fqdn_sender&#039;&#039;&#039;&lt;br /&gt;
  &#039;&#039;&#039;reject_non_fqdn_recipient&#039;&#039;&#039;&lt;br /&gt;
  &#039;&#039;&#039;reject_rbl_client bl.spamcop.net&#039;&#039;&#039;&lt;br /&gt;
  &#039;&#039;&#039;check_policy_service inet:127.0.0.1:10023&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
(The blacklist &#039;&#039;dnsbl.sorbs.net&#039;&#039; was deliberately left out here, since it blocks pretty much everything, e.g. all yahoo or gmx addresses.)&lt;br /&gt;
For this to work, Postgrey itself must of course be brought up.&lt;br /&gt;
For that, edit the file &#039;&#039;/etc/default/postgrey&#039;&#039;. An example:&lt;br /&gt;
 POSTGREY_OPTS=&amp;quot;--inet=10023 --auto-whitelist-clients=8&amp;quot;&lt;br /&gt;
 POSTGREY_TEXT=&amp;quot;Busy. Come back in 5 minutes.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The service thus listens on port 10023. In the example above, a sender is automatically whitelisted on its 8th successful delivery attempt (optional parameter &#039;&#039;--auto-whitelist-clients&#039;&#039;; possibly increase the number or omit the parameter entirely).&lt;br /&gt;
&lt;br /&gt;
Then change into the directory &#039;&#039;/etc/postgrey&#039;&#039;. It contains two whitelists. Senders go into &#039;&#039;&#039;whitelist_clients&#039;&#039;&#039;, which already lists IPs and domains of various providers. You can add your own entries there (e.g. example.ch).&lt;br /&gt;
&lt;br /&gt;
In the file &#039;&#039;&#039;whitelist_recipients&#039;&#039;&#039; you can list all recipients of your own domain that must always receive mail, e.g. postmaster@, abuse@.&amp;lt;br&amp;gt;&lt;br /&gt;
Note: &#039;&#039;&#039;the files must be referenced explicitly&#039;&#039;&#039;; this does not happen automatically. This is done via POSTGREY_OPTS:&lt;br /&gt;
 POSTGREY_OPTS=&amp;quot;$POSTGREY_OPTS --whitelist-clients=/etc/postgrey/whitelist_clients --whitelist-recipients=/etc/postgrey/whitelist_recipients&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Once the adjustments are done, (re)start the postgrey service.&lt;br /&gt;
  # service postgrey restart&lt;br /&gt;
Check that the service is running, e.g. with:&lt;br /&gt;
 # lsof -i :10023&lt;br /&gt;
Then reload Postfix&lt;br /&gt;
 # postfix reload&lt;br /&gt;
and watch the mailbox(es) with regard to spam volume.&lt;br /&gt;
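&lt;br /&gt;
A quick functional test is possible with a tool such as &#039;&#039;swaks&#039;&#039; (a separate package; addresses are examples). The first delivery attempt from an unknown sender should be deferred with the POSTGREY_TEXT above; a retry a few minutes later should go through:&lt;br /&gt;
 $ swaks --to sunflower@example.de --server mx.example.de&lt;br /&gt;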
&lt;br /&gt;
&#039;&#039;(Source: article &amp;quot;Postzusteller&amp;quot;, Admin-Magazin, issue 03-2013)&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Dovecot =&lt;br /&gt;
&lt;br /&gt;
Open-source IMAP server providing access to the mailboxes via POP3, IMAP, or IMAPS. Only IMAPS is covered below.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
It is recommended to install Dovecot on the same system as Postfix. Other setups are not covered here.&lt;br /&gt;
&lt;br /&gt;
Install the imapd with&lt;br /&gt;
 # apt install dovecot-imapd&lt;br /&gt;
&lt;br /&gt;
This is sufficient for all basic mail delivery functions. For extended features such as filtering, further dovecot packages like &#039;&#039;&#039;dovecot-antispam, dovecot-sieve&#039;&#039;&#039; can be installed.&lt;br /&gt;
&lt;br /&gt;
Create the user (here: &#039;&#039;vmail&#039;&#039;) as owner of the mailboxes:&lt;br /&gt;
&lt;br /&gt;
 useradd -u 4000 -m -d /home/vmail -s /usr/sbin/nologin vmail&lt;br /&gt;
&lt;br /&gt;
== Konfiguration ==&lt;br /&gt;
&lt;br /&gt;
Adjust the config files in &#039;&#039;/etc/dovecot/conf.d&#039;&#039;.&lt;br /&gt;
The file &#039;&#039;/etc/dovecot/dovecot.conf&#039;&#039; includes all files under conf.d/*.conf by default.&lt;br /&gt;
&lt;br /&gt;
=== User management ===&lt;br /&gt;
&lt;br /&gt;
Here is an example where users are kept in a separate file.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;10-auth.conf:&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
disable_plaintext_auth = no&lt;br /&gt;
auth_username_format = %n&lt;br /&gt;
auth_master_user_separator = *&lt;br /&gt;
auth_mechanisms = plain login &lt;br /&gt;
!include auth-master.conf.ext&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If not authenticating via PAM, comment out:&lt;br /&gt;
 #!include auth-system.conf.ext&lt;br /&gt;
&lt;br /&gt;
Plaintext auth can be allowed here: the user passwords are stored as hashed strings, the communication between Postfix and Dovecot stays on the same host anyway, and access from an MUA will go through TLS/SSL (see below).&lt;br /&gt;
&lt;br /&gt;
Create a master user (optional):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
doveadm pw -p supergeheim -s SHA256-CRYPT -u administrator@example.de&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Paste the output, together with the username, into the master-users file.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat  ../master-users     &lt;br /&gt;
administrator@example.de:{SHA256-CRYPT}$5$9zrt7/e2CDkPmSuA$SNEkm/L4XZcYFAbYkJp5ESl9u35fVBSd4ukO0dm5yp3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create regular users:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
doveadm pw -p strenggeheim -s SHA256-CRYPT -u sunflower@example.de&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
→ /etc/dovecot/users:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sunflower:{SHA256-CRYPT}$5$D3PhhtqUhRXT7cmZ$E5244BpvNafb.9FtbhF9AUfbvw8XpnOJhPyM/q/rRN2:::Sun Flo,,,:/var/mail/example.de/sunflower:/bin/false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Do not use abbreviations such as &#039;&#039;%d&#039;&#039; or &#039;&#039;%n&#039;&#039; here, because sieve (see below) does not interpret them at all and dovecot only partially.&lt;br /&gt;
&lt;br /&gt;
To make the account actually receive mail, add it to the virtual table in the Postfix directory:&lt;br /&gt;
  echo &amp;quot;sunflower@example.de  example.de/spambucket&amp;quot; &amp;gt;&amp;gt; /etc/postfix/virtual&lt;br /&gt;
Activate it with&lt;br /&gt;
 postmap virtual&lt;br /&gt;
 postfix reload&lt;br /&gt;
&lt;br /&gt;
=== File permissions ===&lt;br /&gt;
&lt;br /&gt;
The files master-users and users should be readable only by dovecot!&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# chgrp dovecot /etc/dovecot/*users&lt;br /&gt;
# chmod o-r /etc/dovecot/*users&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the mailbox and give the user ownership:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# maildirmake.dovecot /var/mail/&amp;lt;domain&amp;gt;/&amp;lt;username&amp;gt;&lt;br /&gt;
# chown -R vmail:vmail /var/mail/&amp;lt;domain&amp;gt;/&amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create the user in Postfix, in the virtual maps (see above).&lt;br /&gt;
&lt;br /&gt;
Check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# doveadm user &amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
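&lt;br /&gt;
Credentials can also be tested directly against Dovecot&#039;s auth process (the username is an example; the password is prompted for):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# doveadm auth test sunflower@example.de&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;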
&lt;br /&gt;
=== Configuring IMAP ===&lt;br /&gt;
Pro tip: back up conf.d first:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rsync -av /etc/dovecot/conf.d/ /etc/dovecot/conf.d.orig&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then adjust the following configuration files in conf.d accordingly:&lt;br /&gt;
* &#039;&#039;&#039;10-auth.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth_allow_cleartext = yes&lt;br /&gt;
auth_username_format = %{user|username|lower}&lt;br /&gt;
auth_master_user_separator = *&lt;br /&gt;
auth_mechanisms = plain login&lt;br /&gt;
&lt;br /&gt;
!include auth-master.conf.ext&lt;br /&gt;
!include auth-system.conf.ext&lt;br /&gt;
!include auth-passwdfile.conf.ext&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;10-mail.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# the mail_location parameter no longer exists in dovecot 2.4&lt;br /&gt;
mail_driver = maildir&lt;br /&gt;
mail_home = /var/mail/%{user | domain}/%{user}&lt;br /&gt;
mail_path = %{home}&lt;br /&gt;
namespace inbox { &lt;br /&gt;
 inbox = yes &lt;br /&gt;
} &lt;br /&gt;
mail_uid = 4000 &lt;br /&gt;
mail_gid = 4000&lt;br /&gt;
mail_privileged_group = mail&lt;br /&gt;
protocol !indexer-worker {&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;10-master.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
service imap-login { &lt;br /&gt;
  inet_listener imaps { &lt;br /&gt;
   port = 993 &lt;br /&gt;
   ssl = yes &lt;br /&gt;
 } &lt;br /&gt;
}&lt;br /&gt;
service auth { &lt;br /&gt;
 unix_listener auth-userdb { &lt;br /&gt;
   user = vmail  &lt;br /&gt;
   group = vmail &lt;br /&gt;
 } &lt;br /&gt;
 unix_listener /var/spool/postfix/private/auth { &lt;br /&gt;
   mode = 0666 &lt;br /&gt;
   user = postfix &lt;br /&gt;
   group = postfix &lt;br /&gt;
 } &lt;br /&gt;
}&lt;br /&gt;
service stats { &lt;br /&gt;
	unix_listener stats-reader { &lt;br /&gt;
               user = vmail &lt;br /&gt;
               group = vmail &lt;br /&gt;
               mode = 0660 &lt;br /&gt;
     } &lt;br /&gt;
&lt;br /&gt;
	unix_listener stats-writer { &lt;br /&gt;
       user = vmail &lt;br /&gt;
       group = vmail &lt;br /&gt;
       mode = 0660 &lt;br /&gt;
   	} &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;10-ssl.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# (e.g. reuse the Postfix certs)&lt;br /&gt;
ssl = yes&lt;br /&gt;
ssl_server_cert_file = /etc/dovecot/private/dovecot.pem&lt;br /&gt;
ssl_server_key_file = /etc/dovecot/private/dovecot.key&lt;br /&gt;
ssl_min_protocol = TLSv1.2&lt;br /&gt;
&lt;br /&gt;
#ssl_client_ca_dir = /etc/ssl/certs &lt;br /&gt;
#ssl_dh = &amp;lt;/usr/share/dovecot/dh.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Generating certificates: see https://wiki.nomorebluescreen.de/index.php?title=Webserver_mit_Apache#Alternative_letsencrypt&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Spoiler:&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
Every time the certificate is replaced, the dovecot service must be restarted so that the new certificate is actually picked up.&lt;br /&gt;
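&lt;br /&gt;
Which certificate Dovecot actually serves (and whether it picked up the new one) can be checked from outside:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
openssl s_client -connect mx.example.de:993 &amp;lt;/dev/null | openssl x509 -noout -dates&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;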
&lt;br /&gt;
&#039;&#039;&#039;Checking which files were touched:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
diff -quw conf.d.orig conf.d&lt;br /&gt;
Files conf.d.orig/10-ssl.conf and conf.d/10-ssl.conf differ&lt;br /&gt;
Files conf.d.orig/15-lda.conf and conf.d/15-lda.conf differ&lt;br /&gt;
Files conf.d.orig/20-imap.conf and conf.d/20-imap.conf differ&lt;br /&gt;
Files conf.d.orig/20-managesieve.conf and conf.d/20-managesieve.conf differ&lt;br /&gt;
Files conf.d.orig/90-sieve.conf and conf.d/90-sieve.conf differ&lt;br /&gt;
Files conf.d.orig/auth-passwdfile.conf.ext and conf.d/auth-passwdfile.conf.ext differ&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dumping the entire configuration&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 # doveconf -n&lt;br /&gt;
&lt;br /&gt;
https://doc.dovecot.org/2.4.0/installation/upgrade/2.3-to-2.4.html&lt;br /&gt;
&lt;br /&gt;
==== Sieve ====&lt;br /&gt;
Engine for filtering mail&lt;br /&gt;
&lt;br /&gt;
Install dovecot-sieve and dovecot-managesieved&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;15-lda.conf:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lda_mailbox_autocreate = yes&lt;br /&gt;
lda_mailbox_autosubscribe = yes&lt;br /&gt;
protocol lda { &lt;br /&gt;
 mail_plugins = $mail_plugins sieve &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;20-managesieve.conf:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
protocols = $protocols sieve&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;90-sieve.conf:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
plugin { &lt;br /&gt;
 sieve = file:~/sieve;active=~/.dovecot.sieve &lt;br /&gt;
 sieve_default = /var/lib/dovecot/sieve/default.sieve &lt;br /&gt;
 sieve_global_dir = /var/lib/dovecot/sieve &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Check that the sieve service is running and listening on port 4190:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# service dovecot restart&lt;br /&gt;
# ss -plnt | grep 4190&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A directory called &amp;quot;sieve&amp;quot; should now be visible in the user&#039;s mail home.&lt;br /&gt;
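&lt;br /&gt;
As a minimal example (a sketch; the folder name is an assumption), a filter stored as &#039;&#039;~/.dovecot.sieve&#039;&#039; that moves mail flagged by a spam scanner into a Junk folder:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require [&amp;quot;fileinto&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
if header :contains &amp;quot;X-Spam-Flag&amp;quot; &amp;quot;YES&amp;quot; {&lt;br /&gt;
  fileinto &amp;quot;Junk&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;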
&lt;br /&gt;
=== Transport von Postfix zu Dovecot ===&lt;br /&gt;
&lt;br /&gt;
Postfix still has to be told to hand mail over to Dovecot.&amp;lt;br&amp;gt;&lt;br /&gt;
Adjust &#039;&#039;&#039;master.cf&#039;&#039;&#039; in Postfix (the ${...} entries belong there literally, do not substitute them!):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dovecot unix - n n - - pipe&lt;br /&gt;
  flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver&lt;br /&gt;
  -a ${recipient} -f ${sender} -d ${user}@${nexthop} -m ${extension}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and add a mailbox_command entry in main.cf:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mailbox_command = /usr/lib/dovecot/dovecot-lda -f &amp;quot;$SENDER&amp;quot; -a &amp;quot;$RECIPIENT&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(https://doc.dovecot.org/configuration_manual/howto/dovecot_lda_postfix/#howto-dovecot-lda-postfix)&lt;br /&gt;
&lt;br /&gt;
Afterwards restart the postfix and dovecot services.&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
Log entries go (as with postfix) to /var/log/mail.log &amp;lt;br&amp;gt;&lt;br /&gt;
Useful alias:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
alias maillog=&#039;tail -f /var/log/mail.log&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enable debugging:&lt;br /&gt;
 mail_debug = yes&lt;br /&gt;
in the file &lt;br /&gt;
&#039;&#039;10-logging.conf&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Pro tip:&#039;&#039;&#039;&lt;br /&gt;
If the following error shows up in the log:&lt;br /&gt;
&lt;br /&gt;
 Mar 27 08:03:56 aphantopus postfix/pipe[2317]: 521066005D: to=&amp;lt;sunflower@example.de&amp;gt;, relay=dovecot, delay=0.3, delays=0.19/0.04/0/0.07, dsn=2.0.0, status=sent (delivered via dovecot service (lda(sunflower@example.de,)Error: net_connect_unix(/var/run/dovecot/stats-writer) failed: Permission denied))&lt;br /&gt;
&lt;br /&gt;
=&amp;gt; Allow &#039;&#039;&#039;stats&#039;&#039;&#039; for the user vmail in &#039;&#039;10-master.conf&#039;&#039; (see above)&lt;br /&gt;
&lt;br /&gt;
= Roundcube =&lt;br /&gt;
&lt;br /&gt;
A handy web frontend for fetching and sending emails&lt;br /&gt;
&lt;br /&gt;
Install mysql-server first, otherwise the installation aborts with an error&lt;br /&gt;
 # apt install mariadb-server roundcube&lt;br /&gt;
Answer the dbconfig-common question with &amp;quot;yes&amp;quot; and set a mysql password.&lt;br /&gt;
Adjust the config file (&#039;&#039;/etc/roundcube/config.inc.php&#039;&#039;):&lt;br /&gt;
 $config[&#039;smtp_server&#039;] = &#039;localhost&#039;;&lt;br /&gt;
 $config[&#039;smtp_port&#039;] = 25;&lt;br /&gt;
&lt;br /&gt;
== Plugins ==&lt;br /&gt;
&lt;br /&gt;
Install the standard plugins&lt;br /&gt;
 # apt install roundcube-plugins&lt;br /&gt;
&lt;br /&gt;
Install additional plugins:&lt;br /&gt;
 # apt install roundcube-plugins-extra git curl composer&lt;br /&gt;
(composer is needed for installing plugins, git because most of them come from GitHub)&amp;lt;br&amp;gt;&lt;br /&gt;
Enable the ones you want in the file &#039;&#039;/etc/roundcube/config.inc.php&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 $config[&#039;plugins&#039;] = array( &lt;br /&gt;
        &#039;compose-addressbook&#039;, &lt;br /&gt;
        &#039;markasjunk2&#039;, &lt;br /&gt;
        &#039;fail2ban&#039; &lt;br /&gt;
 );&lt;br /&gt;
&lt;br /&gt;
Overview of the official plugins:&lt;br /&gt;
&lt;br /&gt;
https://plugins.roundcube.net/&lt;br /&gt;
&lt;br /&gt;
Plugins that are not available as a package:&amp;lt;br&amp;gt;&lt;br /&gt;
Steps:&amp;lt;br&amp;gt;&lt;br /&gt;
* read the README&lt;br /&gt;
* download the plugin as a zip and unpack it to &#039;&#039;/usr/share/roundcube/plugins&#039;&#039;&lt;br /&gt;
* (rename it if necessary)&lt;br /&gt;
* edit &#039;&#039;/etc/roundcube/config.inc.php&#039;&#039;:&lt;br /&gt;
Find the section&lt;br /&gt;
 $config[&#039;plugins&#039;] = array( &lt;br /&gt;
and add the missing plugin&lt;br /&gt;
&lt;br /&gt;
== Filter Plugin for Sieve ==&lt;br /&gt;
&lt;br /&gt;
Caution: do not use the plugin &amp;quot;filter&amp;quot;, but &#039;&#039;&#039;managesieve&#039;&#039;&#039; (part of the roundcube-plugins package)&lt;br /&gt;
&lt;br /&gt;
A howto is available here:&amp;lt;br&amp;gt;&lt;br /&gt;
https://www.pair.com/support/kb/how-to-add-sieve-filtering-code-in-roundcube/ &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
https://www.pair.com/support/kb/how-to-add-sieve-filtering-in-roundcube/&lt;br /&gt;
&lt;br /&gt;
Note: do not edit the protocols parameter in dovecot.conf, but in &lt;br /&gt;
&#039;&#039;20-managesieve.conf&#039;&#039; (see above):&lt;br /&gt;
&lt;br /&gt;
 protocols = $protocols sieve&lt;br /&gt;
&lt;br /&gt;
Sieve filter rules can now be generated via the web frontend&lt;br /&gt;
&lt;br /&gt;
Caution, bug:&amp;lt;br&amp;gt;&lt;br /&gt;
Sieve creates a sieve directory below the directory defined in mail_location. If you want to store the users&#039; emails under &#039;&#039;/var/mail/&amp;lt;domain&amp;gt;/&amp;lt;username&amp;gt;&#039;&#039;, you would configure:&lt;br /&gt;
&lt;br /&gt;
 mail_location = maildir:/var/mail/%d/%n&lt;br /&gt;
&lt;br /&gt;
Since dovecot does not interpret %d (see above), the user maildir ends up under /var/mail/&amp;lt;username&amp;gt;. Sieve, in turn, does not interpret %n and creates a directory /var/mail/&amp;lt;domain&amp;gt;/%n/sieve containing the roundcube.sieve file. All users therefore access the same file, which is technically possible but fatal security-wise. Unfortunately no good workaround is known.&lt;br /&gt;
&lt;br /&gt;
== Changing passwords ==&lt;br /&gt;
To give users the ability to change their password themselves, enable the plugin in &#039;&#039;config.inc.php&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$config[&#039;plugins&#039;] = array(&lt;br /&gt;
 (...)&lt;br /&gt;
 &#039;password&#039;&lt;br /&gt;
);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Further settings if the users are maintained in a password file as described in the &#039;&#039;&#039;Dovecot&#039;&#039;&#039; chapter:&amp;lt;br&amp;gt;&lt;br /&gt;
(we assume the user passwords are hashed with sha512, see above)&lt;br /&gt;
&lt;br /&gt;
 # https://stackoverflow.com/questions/62655236/how-to-enable-password-plugin-on-roundcube&lt;br /&gt;
 $config[&#039;password_algorithm&#039;] = &#039;ssha512&#039;;&lt;br /&gt;
 $config[&#039;password_algorithm_prefix&#039;] = &#039;{SSHA512}&#039;;&lt;br /&gt;
 $config[&#039;password_driver&#039;] = &#039;dovecot_passwdfile&#039;;&lt;br /&gt;
 $config[&#039;password_dovecot_passwdfile_path&#039;] = &#039;/etc/dovecot/users&#039;;&lt;br /&gt;
&lt;br /&gt;
The dovecot users file must then be readable and writable for www-data:&lt;br /&gt;
 -rw-rw---- 1 dovecot www-data 1240 Dec  2 23:20 /etc/dovecot/users&lt;br /&gt;
&lt;br /&gt;
(Caution, risky in case the webserver is ever compromised! As an alternative, consider moving the dovecot passwords into a [mysql] database)&lt;br /&gt;
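The ownership and mode shown above can be set in one step; a sketch on a scratch file (the /tmp path is a stand-in — in production, run chown dovecot:www-data and chmod 660 on /etc/dovecot/users as root):

```shell
# Reproduce the -rw-rw---- mode shown above on a scratch stand-in
# for /etc/dovecot/users (group write is what lets www-data update passwords).
USERS_FILE=/tmp/demo-dovecot-users
touch "$USERS_FILE"
chmod 660 "$USERS_FILE"          # -rw-rw----
stat -c '%a' "$USERS_FILE"       # prints: 660
```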
&lt;br /&gt;
== Changing identities ==&lt;br /&gt;
&lt;br /&gt;
Normally a user can only send with their own sender address. That is a sensible setting, but whoever wants to disable this feature for testing purposes can set:&lt;br /&gt;
 $config[&#039;identities_level&#039;] = 0;&lt;br /&gt;
The user can then add further sender addresses under &amp;quot;Settings&amp;quot; (https://www.servercake.blog/multiple-identities-roundcube/)&lt;br /&gt;
&lt;br /&gt;
(Unfortunately no way has been found yet to restrict this to one or more specific users)&lt;br /&gt;
&lt;br /&gt;
== Apache Integration ==&lt;br /&gt;
&lt;br /&gt;
Here is an example configuration for a virtual host so that the Roundcube site is reachable at https://mail.example.de.&lt;br /&gt;
More on this in the chapter [[Webserver mit Apache|apache]]&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;VirtualHost *:443&amp;gt; &lt;br /&gt;
    ServerName mail.example.de &lt;br /&gt;
    ServerAdmin postmaster@example.de &lt;br /&gt;
&lt;br /&gt;
    SSLEngine on &lt;br /&gt;
    SSLCertificateFile         /var/lib/dehydrated/certs/mail.example.de/fullchain.pem &lt;br /&gt;
    SSLCertificateKeyFile /var/lib/dehydrated/certs/mail.example.de/privkey.pem &lt;br /&gt;
&lt;br /&gt;
    DocumentRoot /usr/share/roundcube &lt;br /&gt;
&lt;br /&gt;
    # Includes &lt;br /&gt;
    Include /etc/apache2/conf-available/ssl-encryption.conf &lt;br /&gt;
      &lt;br /&gt;
    LogFormat &amp;quot;%h %l %u %t \&amp;quot;%r\&amp;quot; %&amp;gt;s %b \&amp;quot;%{Referer}i\&amp;quot; \&amp;quot;%{User-agent}i\&amp;quot;&amp;quot; vhost_combined &lt;br /&gt;
    CustomLog /var/log/apache2/mail-ssl.log vhost_combined &lt;br /&gt;
    ErrorLog /var/log/apache2/mail-ssl-error.log &lt;br /&gt;
&amp;lt;/VirtualHost&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The roundcube package additionally ships a roundcube.conf, which is installed at /etc/apache2/conf-available/roundcube.conf and activated.&lt;br /&gt;
&lt;br /&gt;
=== PHP ===&lt;br /&gt;
For Roundcube to run at all, the PHP module must be installed and enabled. On Debian this is done via:&lt;br /&gt;
 # apt install libapache2-mod-php&lt;br /&gt;
&lt;br /&gt;
Using &#039;&#039;&#039;php-fpm&#039;&#039;&#039; is somewhat more performant (https://www.zend.com/blog/apache-phpfpm-modphp).&lt;br /&gt;
 # apt install php-fpm&lt;br /&gt;
&lt;br /&gt;
The default PHP upload limit is a meager 2 MB. To raise it to e.g. 50 MB, the following file has to be edited:&lt;br /&gt;
&amp;lt;pre&amp;gt;VERSION=$(php -v| head -n 1 | awk &#039;{ print $2 }&#039; | sed -e &#039;s|.[[:digit:]]*$||&#039;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
* modphp:&lt;br /&gt;
 /etc/php/${VERSION}/apache2/php.ini&lt;br /&gt;
  upload_max_filesize = 50M&lt;br /&gt;
* php-fpm:&lt;br /&gt;
 /etc/php/${VERSION}/fpm/php.ini&lt;br /&gt;
  upload_max_filesize = 50M&lt;br /&gt;
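The VERSION pipeline above can be tried without a PHP installation by feeding it a captured `php -v` header line (the sample version 8.2.7 is arbitrary):

```shell
# Same extraction as the VERSION one-liner above, applied to a sample
# "php -v" first line: field 2 is "8.2.7", the sed strips the patch level.
SAMPLE='PHP 8.2.7 (cli) (built: Jun  9 2023 19:37:27) (NTS)'
VERSION=$(echo "$SAMPLE" | awk '{ print $2 }' | sed -e 's|\.[[:digit:]]*$||')
echo "$VERSION"   # -> 8.2
```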
&lt;br /&gt;
= Integration into an MUA =&lt;br /&gt;
If you do not want to go through the (slow) webmailer, you can of course use an MUA of your choice. Here is an example.&lt;br /&gt;
&lt;br /&gt;
== Thunderbird ==&lt;br /&gt;
&lt;br /&gt;
Settings for the Outgoing Server (SMTP)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Server Name: FQDN of the mail server&lt;br /&gt;
Port: 25&lt;br /&gt;
Connection Security: STARTTLS&lt;br /&gt;
Authentication Method: Normal Password&lt;br /&gt;
Username: name of the mailbox user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first time you will be asked for your mailbox password. Enter and save it.&lt;br /&gt;
&lt;br /&gt;
Server Settings (IMAP)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Server Type: IMAP Mail Server&lt;br /&gt;
Server Name: FQDN of the mail server&lt;br /&gt;
Port: 993&lt;br /&gt;
Username: name of the mailbox user&lt;br /&gt;
Connection Security: SSL/TLS&lt;br /&gt;
Authentication Method: Normal Password&lt;br /&gt;
(Keep the remaining defaults or adjust them as needed)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot thunderbird1.png|900px]]&lt;br /&gt;
[[File:Screenshot thunderbird2.png|900px]]&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=Emailserver_mit_Postfix_und_Dovecot&amp;diff=673</id>
		<title>Emailserver mit Postfix und Dovecot</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=Emailserver_mit_Postfix_und_Dovecot&amp;diff=673"/>
		<updated>2026-03-18T22:46:52Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Sieve */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Postfix =&lt;br /&gt;
&lt;br /&gt;
Postfix is an MTA (Mail Transfer Agent) that is a good alternative to other common MTAs (Sendmail, Exim) because its configuration is easy to read. &lt;br /&gt;
In our example the MTA is to be connected to an IMAPd (Dovecot) so that users have a mailbox directly on the system. Mails are fetched via IMAPS.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
First the package &amp;quot;postfix&amp;quot; has to be installed.&lt;br /&gt;
A few questions have to be answered during installation:&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Type of server:&#039;&#039;&#039; Internet Site&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Root and postmaster mail recipient:&#039;&#039;&#039; enter a mailbox, e.g. postmaster@example.de &amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Other destinations to accept mail for (blank for none):&#039;&#039;&#039; e.g. mail.example.de, localhost, $mydomain (the defaults are fine for a start)&lt;br /&gt;
* &#039;&#039;&#039;Force synchronous updates on mail queue:&#039;&#039;&#039; no&lt;br /&gt;
* &#039;&#039;&#039;Local subnets:&#039;&#039;&#039; 127.0.0.0/8, 192.168.63.0/24 (add your own network here)&lt;br /&gt;
* &#039;&#039;&#039;Mailbox size limit:&#039;&#039;&#039; 0 (unlimited)&lt;br /&gt;
* &#039;&#039;&#039;Local address extension character:&#039;&#039;&#039; + (usually a sufficient default)&lt;br /&gt;
* &#039;&#039;&#039;Internet protocols to use:&#039;&#039;&#039; all (unless you explicitly want to speak only IPv4 or IPv6)&lt;br /&gt;
&lt;br /&gt;
These settings can be changed at any time with&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
 # dpkg-reconfigure postfix&amp;lt;/console&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All relevant files are located in the directory &#039;&#039;/etc/postfix&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Konfiguration ==&lt;br /&gt;
&lt;br /&gt;
Before we get to the Postfix config, we check the content of the file /etc/mailname:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 $ cat /etc/mailname&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It must contain &#039;&#039;&#039;only the domain name&#039;&#039;&#039;, not the hostname (e.g. example.com). Otherwise this can affect mail delivery, especially if the main.cf (see below) references the file.&lt;br /&gt;
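A small consistency check between the FQDN and /etc/mailname can be sketched like this (mx01.example.de is a sample hostname; in production compare against the real file):

```shell
# /etc/mailname should hold the FQDN minus its first label,
# i.e. the bare domain name.
FQDN=mx01.example.de
EXPECTED=$(echo "$FQDN" | cut -d. -f2-)
echo "$EXPECTED"   # -> example.de
# In production: [ "$(cat /etc/mailname)" = "$EXPECTED" ] || echo "check /etc/mailname"
```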
&lt;br /&gt;
The most important file to adjust first is &#039;&#039;&#039;main.cf&#039;&#039;&#039;. Here an example for the server &amp;quot;mx&amp;quot; in the domain example.de. The following parameters should be configured (exemplary):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 myhostname&lt;br /&gt;
 mydomain&lt;br /&gt;
 myorigin&lt;br /&gt;
 mydestination&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually there are already some usable defaults. The parameter &#039;&#039;mynetworks&#039;&#039; allows certain networks to submit emails without further restrictions.&lt;br /&gt;
&lt;br /&gt;
Example config:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
myhostname = mx01.example.de &lt;br /&gt;
mydomain = example.de &lt;br /&gt;
myorigin = $mydomain &lt;br /&gt;
mydestination = $myhostname, localhost, localhost.$mydomain&lt;br /&gt;
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.99.0/24 [2001:780:11b::/48] 214.94.24.154 [2004:780:8:0:5ff0:c5ff:fe09:98f9]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter &#039;&#039;&#039;mynetworks&#039;&#039;&#039; in particular should be chosen with care, since these are all hosts and networks from which any email is accepted (including from &#039;&#039;and&#039;&#039; to foreign domains). Misconfigurations here quickly lead to an [https://practical365.com/what-is-an-open-relay open relay].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; IPv6 addresses must be written in [square brackets], otherwise no emails are delivered!&lt;br /&gt;
Error in the logfile:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
postfix/smtpd[21928]: warning: smtpd_client_event_limit_exceptions: 2a01:40f8:c013:5050::0/64: table lookup problem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After every change the service has to be reloaded with&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
 # postfix reload&lt;br /&gt;
&amp;lt;/console&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whether the reload worked properly can be checked e.g. in the log:&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
 # tail /var/log/mail.log&lt;br /&gt;
&amp;lt;/console&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== master.cf ===&lt;br /&gt;
The core of the Postfix daemon. The transports are defined here.&lt;br /&gt;
Meaning of the columns:&lt;br /&gt;
* service field: name of the service (smtp, local, procmail, ...) (str)&lt;br /&gt;
* type field: connection type (inet, fifo, unix) (str)&lt;br /&gt;
* private field: whether the service is reachable only from within Postfix (default: y) (bool)&lt;br /&gt;
* unpriv field: run as an unprivileged user (default: y) or as root (n) (bool)&lt;br /&gt;
* chroot field: whether the service runs in a chroot environment (default: y) (bool)&lt;br /&gt;
* wakeup field: seconds between two wakeup calls (default: 0) (int)&lt;br /&gt;
* maxproc field: maximum number of simultaneous processes (default: 100) (int)&lt;br /&gt;
After these comes a command with flags and parameters (optional).&lt;br /&gt;
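The column layout can be made concrete by labeling the fields of one stock master.cf line with awk (the smtpd line below is a typical default entry):

```shell
# Label the seven master.cf columns plus the command of a sample line.
LINE='smtp      inet  n       -       y       -       -       smtpd'
echo "$LINE" | awk '{
    printf "service=%s type=%s private=%s unpriv=%s chroot=%s wakeup=%s maxproc=%s cmd=%s\n",
           $1, $2, $3, $4, $5, $6, $7, $8
}'
# -> service=smtp type=inet private=n unpriv=- chroot=y wakeup=- maxproc=- cmd=smtpd
```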
&lt;br /&gt;
== TLS ==&lt;br /&gt;
Optionally, encrypted email transfer can be configured with certificates. This only works if the mail server on the other side supports TLS as well. The certificate can also be used in a mail client (see later).&lt;br /&gt;
Certificates can be generated with [https://letsencrypt.org letsencrypt]. Clients for certificate generation are [[Webserver_mit_Apache#Alternative_letsencrypt | certbot ]] or [[Webserver_mit_Apache#Dehydrated | dehydrated]].&lt;br /&gt;
&lt;br /&gt;
=== Certificate generation in brief ===&lt;br /&gt;
 # echo $HOSTNAME &amp;gt; /etc/dehydrated/domains.txt&lt;br /&gt;
 # dehydrated --register --accept-terms&lt;br /&gt;
 # dehydrated -c&lt;br /&gt;
&lt;br /&gt;
This, however, requires a [[Webserver_mit_Apache | webserver]] listening on port 80. If there is none, letsencrypt via DNS can be used instead (https://letsencrypt.org/docs/challenge-types).&lt;br /&gt;
&lt;br /&gt;
=== Alternative: running your own CA (not recommended) ===&lt;br /&gt;
&lt;br /&gt;
Whoever absolutely wants to run their own CA can do so with the following instructions. Caution: this approach should only be chosen if there is a compelling reason. Many browsers and MUAs have problems with it, produce ugly warnings, or refuse the site altogether.&lt;br /&gt;
&lt;br /&gt;
==== Creating the CA ====&lt;br /&gt;
If no certificate exists yet, you can create one yourself, or create a CSR (Certificate Signing Request) and send it to an official CA. If a commercially used mail server is the goal, the latter is the realistic path.&lt;br /&gt;
&lt;br /&gt;
Caution: this step is not needed if a CA already exists.&lt;br /&gt;
&lt;br /&gt;
Steps:&amp;lt;br&amp;gt;&lt;br /&gt;
Create the key (and assign a password for it), create the certificate request for the CA, create the CA&lt;br /&gt;
&lt;br /&gt;
 $ openssl genrsa -out ca.key -des3 4096&lt;br /&gt;
  &lt;br /&gt;
 $ openssl req -new -x509 -days 3650 -key ca.key -out ca.crt&lt;br /&gt;
&lt;br /&gt;
 Enter pass phrase for ca.key: &lt;br /&gt;
 You are about to be asked to enter information that will be incorporated &lt;br /&gt;
 into your certificate request. &lt;br /&gt;
 What you are about to enter is what is called a Distinguished Name or a DN. &lt;br /&gt;
 There are quite a few fields but you can leave some blank &lt;br /&gt;
 For some fields there will be a default value, &lt;br /&gt;
 If you enter &#039;.&#039;, the field will be left blank. &lt;br /&gt;
 ----- &lt;br /&gt;
 Country Name (2 letter code) [AU]:DE &lt;br /&gt;
 State or Province Name (full name) [Some-State]:Bavaria &lt;br /&gt;
 Locality Name (eg, city) []:Nuernberg &lt;br /&gt;
 Organization Name (eg, company) [Internet Widgits Pty Ltd]:example.de &lt;br /&gt;
 Organizational Unit Name (eg, section) []:Hostmaster &lt;br /&gt;
 Common Name (e.g. server FQDN or YOUR name) []:*.example.de &lt;br /&gt;
 Email Address []:postmaster@example.de&lt;br /&gt;
&lt;br /&gt;
==== Creating a certificate with the new CA ====&lt;br /&gt;
Steps:&lt;br /&gt;
* create the key&lt;br /&gt;
* create the request &lt;br /&gt;
* create and sign the certificate&lt;br /&gt;
&lt;br /&gt;
  $ openssl genrsa -out mx.example.de.key 4096&lt;br /&gt;
(do not set a password)&lt;br /&gt;
  &lt;br /&gt;
 $ openssl req -new -key mx.example.de.key -out mx.example.de.csr&lt;br /&gt;
(fill in the form again as usual)&lt;br /&gt;
  &lt;br /&gt;
 $ openssl x509 -req -days 365 -in mx.example.de.csr -CA  ca.crt -CAkey ca.key -set_serial 01 -out mx.example.de.crt&lt;br /&gt;
&lt;br /&gt;
When renewing the certificate, the steps &amp;quot;create the CA&amp;quot; and &amp;quot;create the key&amp;quot; are no longer needed. A serial does not have to be specified anymore either.&lt;br /&gt;
The renew command therefore looks like this:&lt;br /&gt;
 $ openssl x509 -req -days 730 -in mx.example.de.csr -CA ca.crt -CAkey ca.key -out mx.example.de.crt&lt;br /&gt;
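For scripted setups, the interactive DN form can be skipped entirely with -subj; a throwaway sketch (the /tmp paths and the 2048-bit key size are illustration choices, not values from the text):

```shell
# One-shot, non-interactive: new key plus self-signed certificate,
# with the DN answered on the command line instead of the form.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt \
    -subj '/C=DE/O=example.de/CN=mx.example.de'
# Show the subject of the freshly created certificate:
openssl x509 -noout -subject -in /tmp/demo-ca.crt
```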
&lt;br /&gt;
=== Including the certificate in the config file ===&lt;br /&gt;
This step again applies to all certificates, no matter how they were created. The paths of course have to be adjusted accordingly.&lt;br /&gt;
&lt;br /&gt;
Copy the certificates to the appropriate place and reference them in the config:&lt;br /&gt;
&lt;br /&gt;
 smtpd_tls_cert_file=/etc/ssl/certs/mx.example.de.crt &lt;br /&gt;
 smtpd_tls_key_file=/etc/ssl/private/mx.example.de.key &lt;br /&gt;
 smtpd_tls_CAfile=/etc/postfix/ca.crt &lt;br /&gt;
 smtpd_use_tls=yes &lt;br /&gt;
 smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache &lt;br /&gt;
 smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache&lt;br /&gt;
&lt;br /&gt;
The parameter smtpd_tls_CAfile is omitted when an official CA is used (e.g. letsencrypt).&amp;lt;br&amp;gt;&lt;br /&gt;
The certificate paths may differ; with letsencrypt they are located under &#039;&#039;/var/lib/dehydrated/certs/&#039;&#039; for example.&lt;br /&gt;
Verification:&lt;br /&gt;
 $ openssl s_client -connect mx.example.de:25 -starttls smtp&lt;br /&gt;
oder https://www.checktls.com/&lt;br /&gt;
&lt;br /&gt;
Addendum:&lt;br /&gt;
Some MTAs want a certificate in pfx form. It can be generated from the cert file as follows:&lt;br /&gt;
 $ openssl pkcs12 -export -out mx.example.de.pfx -inkey mx.example.de.key -in mx.example.de.crt&lt;br /&gt;
Hand the pfx file and the password over to the user.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Spoiler:&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
[[ Emailserver_mit_Postfix_und_Dovecot#Dovecot | Dovecot ]] sometimes &amp;quot;forgets&amp;quot; the new certificate and keeps the old version, i.e. most mail clients then stop fetching new emails. In this case restart the Dovecot service.&lt;br /&gt;
&lt;br /&gt;
== SASL ==&lt;br /&gt;
&lt;br /&gt;
To avoid an open relay, it is strongly advised to allow submitting mail with a sender @example.de only from certain networks by default. This is done with the parameter &#039;&#039;mynetworks&#039;&#039; (see above).&lt;br /&gt;
Of course it can happen that users want to send mail from a client somewhere on the Internet. According to the config they would not be allowed to. Since most PCs are on the Internet with dynamic addresses, it also makes no sense to add the respective IP address to the config.&amp;lt;br&amp;gt;&lt;br /&gt;
The problem can be worked around by allowing mail submission once the user has successfully authenticated against the IMAP server.&lt;br /&gt;
This is what SASL is for. The corresponding entries in main.cf are:&lt;br /&gt;
 smtpd_relay_restrictions = permit_mynetworks  &lt;br /&gt;
 permit_sasl_authenticated defer_unauth_destination&lt;br /&gt;
 smtpd_sasl_auth_enable = yes &lt;br /&gt;
 smtpd_sasl_type = dovecot &lt;br /&gt;
 smtpd_sasl_local_domain = $mydomain &lt;br /&gt;
 smtpd_sasl_security_options = noanonymous &lt;br /&gt;
 smtpd_sasl_path = private/auth&lt;br /&gt;
&lt;br /&gt;
In the mail client this shows up as a prompt for the mailbox password the first time a message is sent.&lt;br /&gt;
Before this feature is activated, an IMAP server has to exist (see [[#Dovecot|next chapter]]).&lt;br /&gt;
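The smtpd_sasl_path = private/auth socket has to be provided by Dovecot inside the Postfix queue directory. Per the Dovecot/Postfix SASL howto, the usual listener in Dovecot's 10-master.conf looks roughly like this (mode and owner shown are the commonly documented values; adjust as needed):

```
service auth {
  # Postfix runs chrooted in /var/spool/postfix, so it sees this
  # socket as "private/auth" (matching smtpd_sasl_path above)
  unix_listener /var/spool/postfix/private/auth {
    mode = 0660
    user = postfix
    group = postfix
  }
}
```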
&lt;br /&gt;
== Maps ==&lt;br /&gt;
&lt;br /&gt;
To better distinguish what happens with which sender/recipient addresses, the configuration is split into so-called &amp;quot;maps&amp;quot;. These can exist as plain-text files or as Berkeley DBs. In the latter case they have to be converted with the command &#039;&#039;&#039;postmap&#039;&#039;&#039; after every edit.&amp;lt;br&amp;gt;&lt;br /&gt;
Exception: the file /etc/aliases.db (only relevant for local mail delivery) is generated with the command &#039;&#039;&#039;postalias&#039;&#039;&#039; or &#039;&#039;&#039;newaliases&#039;&#039;&#039;.&lt;br /&gt;
 alias_maps = hash:/etc/aliases &lt;br /&gt;
 alias_database = hash:/etc/aliases &lt;br /&gt;
Aliases that map to another mailbox are set up here. Example:&lt;br /&gt;
 postmaster:    root&lt;br /&gt;
&lt;br /&gt;
=== Access ===&lt;br /&gt;
 smtpd_sender_restrictions = hash:/etc/postfix/access &lt;br /&gt;
Actions for specific sender addresses can be configured here. Example:&lt;br /&gt;
 example.com     DISCARD&lt;br /&gt;
&lt;br /&gt;
=== Relocated ===&lt;br /&gt;
 relocated_maps = hash:/etc/postfix/relocated &lt;br /&gt;
&lt;br /&gt;
Rejects the mail with a notice. Example:&lt;br /&gt;
 testy.test      &amp;quot;Please send mail to ich@hier.de instead of this address&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Result:&lt;br /&gt;
 &amp;lt;testy.test@example.de&amp;gt;: Recipient address rejected: User has &lt;br /&gt;
 moved to &amp;quot;Please send mail to ich@hier.de instead of&lt;br /&gt;
 this address&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Canonical ===&lt;br /&gt;
==== Sender ====&lt;br /&gt;
 sender_canonical_maps = hash:/etc/postfix/sender_canonical &lt;br /&gt;
&lt;br /&gt;
Certain addresses are rewritten to a common standard format:&lt;br /&gt;
&lt;br /&gt;
 sunflower@example.de   petra.sonne@example.de&lt;br /&gt;
 phun@work.de		   peter.hun@example.de&lt;br /&gt;
&lt;br /&gt;
==== Recipient ====&lt;br /&gt;
 recipient_canonical_maps = hash:/etc/postfix/recipient_canonical &lt;br /&gt;
Works exactly like sender_canonical, only for recipient addresses.&lt;br /&gt;
&lt;br /&gt;
=== Virtual Mailbox ===&lt;br /&gt;
 virtual_mailbox_maps = hash:/etc/postfix/virtual&lt;br /&gt;
Locations of the mailboxes of the imap server (details under [[ Emailserver_mit_Postfix_und_Dovecot#Dovecot | Dovecot ]])&lt;br /&gt;
&lt;br /&gt;
 sunflower@example.de   	example.de/sunflower/ &lt;br /&gt;
 testy@example.de		example.de/testy &lt;br /&gt;
 test@example.de		example.de/test &lt;br /&gt;
 lmaa@ihr-koennt-mich-alle.de	ihr-koennt-mich-alle.de/lmaa&lt;br /&gt;
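The right-hand side above follows the pattern domain/localpart/, so for many users the entries can be generated from the addresses themselves, e.g.:

```shell
# Derive the "domain/localpart/" mailbox location used above
# from the mail address itself.
ADDR=sunflower@example.de
MAILBOX=$(echo "$ADDR" | awk -F@ '{ print $2 "/" $1 "/" }')
printf '%s\t%s\n' "$ADDR" "$MAILBOX"
```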
&lt;br /&gt;
=== Virtual Aliases ===&lt;br /&gt;
 virtual_alias_maps = hash:/etc/postfix/virtual_maps&lt;br /&gt;
&lt;br /&gt;
Addresses that are redirected to other addresses (similar to the aliases); this can also happen across domains.&lt;br /&gt;
This way several recipient addresses can end up in the same mailbox.&lt;br /&gt;
&lt;br /&gt;
 anrufbeantworter@example.de      sunflower@example.de,H.Hirsch@gmx.de,harry1999@yahoo.de&lt;br /&gt;
 info@example.de        sunflower@example.de&lt;br /&gt;
 postmaster    root &lt;br /&gt;
 webmaster     root &lt;br /&gt;
 administrator root&lt;br /&gt;
 root    sunflower&lt;br /&gt;
 fortune:  fortune&lt;br /&gt;
The last entry is a pipe. More on that later.&lt;br /&gt;
&lt;br /&gt;
Convert the text into a DB file and reload Postfix:&lt;br /&gt;
 postmap &amp;lt;aliases|access|canonical|...&amp;gt;&lt;br /&gt;
 postfix reload&lt;br /&gt;
&lt;br /&gt;
=== Simple example: forwarding emails from one domain to another ===&lt;br /&gt;
Suppose we have a mail server in the domain example.com. It should forward all emails arriving for &amp;lt;userXY&amp;gt;@example.com to &amp;lt;userXY&amp;gt;@example.de. This again requires an entry in the &#039;&#039;virtual_maps&#039;&#039; file mentioned above:&lt;br /&gt;
 @example.com	@example.de&lt;br /&gt;
All example.com emails are now forwarded to the mail server responsible for example.de. The user part stays unchanged.&lt;br /&gt;
&lt;br /&gt;
=== Transports ===&lt;br /&gt;
Transports define how an incoming mail is handled, e.g. stored locally in a file, handed over to an imap server, or fed into a script.&lt;br /&gt;
&lt;br /&gt;
An example: &amp;lt;br&amp;gt;&lt;br /&gt;
When mail is sent to a certain address, a script should run that sends a random quote back to the sender &#039;&#039;&#039;and&#039;&#039;&#039; at the same time delivers the mail into a mailbox.&lt;br /&gt;
Steps:&lt;br /&gt;
&lt;br /&gt;
1. Define an alias (virtual_maps):&lt;br /&gt;
&lt;br /&gt;
 fortune@example.de           fortune&lt;br /&gt;
&lt;br /&gt;
2. Map the alias onto a transport (transports):&lt;br /&gt;
&lt;br /&gt;
 fortune@example.de     randomphrase:&lt;br /&gt;
&lt;br /&gt;
3. Define the transport (master.cf):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 randomphrase      unix  - n n - - pipe &lt;br /&gt;
  flags=h user=vmail:vmail argv=/usr/local/bin/randomphrase.pl ${sender}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(The user vmail must of course exist in the passwd, e.g. like this:&lt;br /&gt;
 vmail:x:4000:4000::/home/vmail:/usr/sbin/nologin&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
4. Install the script:&lt;br /&gt;
 /usr/local/bin/randomphrase.pl&lt;br /&gt;
and make it executable for everyone&lt;br /&gt;
&lt;br /&gt;
The script [[ randomphrase.pl ]] generates a random quote. For this, the package &#039;&#039;fortune-mod&#039;&#039; must be installed. &lt;br /&gt;
To forward the email, the script /usr/local/bin/deliver_mail.sh is called. ([[File:Deliver_mail.sh]])&lt;br /&gt;
&amp;lt;br&amp;gt; For this, the user vmail has to be authorized in the file &#039;&#039;/etc/sudoers.d/vmail&#039;&#039;:&lt;br /&gt;
 vmail ALL=(root) NOPASSWD: /usr/local/bin/deliver_mail.sh&lt;br /&gt;
&lt;br /&gt;
An email to the address fortune@example.de now generates a reply to the sender address containing a random quote.&lt;br /&gt;
&lt;br /&gt;
== Multidomain ==&lt;br /&gt;
&lt;br /&gt;
Of course Postfix can also accept emails for several domains. The parameter &amp;quot;virtual_mailbox_domains&amp;quot; exists for this:&lt;br /&gt;
&lt;br /&gt;
 virtual_mailbox_domains = example.de  example.com  ihr-koennt-mich-alle.de&lt;br /&gt;
The variable $mydomain should then be removed from mydestination.&lt;br /&gt;
&lt;br /&gt;
== Special DNS Records ==&lt;br /&gt;
=== SPF (Sender Policy Framework) ===&lt;br /&gt;
With a TXT resource record you can define a list of mail servers that are allowed to use the email domain as sender. If someone generates a fake email from another system, it can be rejected.&lt;br /&gt;
&lt;br /&gt;
Example of a DNS TXT record:&lt;br /&gt;
 IN TXT  &amp;quot;v=spf1 mx:example.de a:foo.example.de ip4:8.15.47.11/32 ip6:2008:15:5:47::11/48 ip6:2008:15:5:47::12/48 -all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Howto: &amp;lt;br&amp;gt;&lt;br /&gt;
https://dmarcian.com/create-spf-record/ &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.open-spf.org/SPF_Record_Syntax/&lt;br /&gt;
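Before publishing the record, a rough local plausibility check of the string costs nothing; a sketch over a shortened version of the example record above:

```shell
# Two quick checks on an SPF string: the v=spf1 version tag must lead,
# and an explicit terminal "all" mechanism should close the record.
SPF='v=spf1 mx:example.de a:foo.example.de ip4:8.15.47.11/32 -all'
[ "${SPF#v=spf1 }" != "$SPF" ] && echo 'version tag ok'
[ "${SPF%all}" != "$SPF" ] && echo 'terminal all mechanism ok'
```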
&lt;br /&gt;
Integrating SPF into Postfix:&lt;br /&gt;
&lt;br /&gt;
The domain is now protected against abuse by fake emails. But there is the other side to consider as well: Postfix should also check the SPF records of other email domains and reject the email if necessary.&lt;br /&gt;
https://makeityourway.de/enabling-spf-sender-policy-framework-checking-on-postfix/&lt;br /&gt;
&lt;br /&gt;
Briefly summarized, what to watch out for:&lt;br /&gt;
 # apt install postfix-policyd-spf-python&lt;br /&gt;
The config file &#039;&#039;/etc/postfix-policyd-spf-python/policyd-spf.conf&#039;&#039; already ships usable defaults; optionally a whitelist can be added, e.g.&lt;br /&gt;
&lt;br /&gt;
 Domain_Whitelist = example.com &lt;br /&gt;
Add to master.cf:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 policyd-spf  unix  -     n       n       -       -       spawn &lt;br /&gt;
  user=policyd-spf argv=/usr/bin/policyd-spf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add to main.cf:&lt;br /&gt;
 smtpd_recipient_restrictions =&lt;br /&gt;
 (...)&lt;br /&gt;
 check_policy_service unix:private/policyd-spf&lt;br /&gt;
 (…)&lt;br /&gt;
&#039;&#039;&#039;Caution:&#039;&#039;&#039; If a check_policy_service entry already exists, do &#039;&#039;&#039;not&#039;&#039;&#039; append another entry to it; open a new line instead!&lt;br /&gt;
 policyd-spf_time_limit = 3600s&lt;br /&gt;
&lt;br /&gt;
 # postfix reload&lt;br /&gt;
&lt;br /&gt;
Send in a few test emails and watch mail.log.&lt;br /&gt;
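The policy daemon logs a verdict for every checked message, so recent decisions can be pulled straight from the log (the exact wording of the log lines varies between versions):&lt;br /&gt;

```shell
# Show the five most recent SPF policy decisions
grep policyd-spf /var/log/mail.log | tail -n 5
```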
&lt;br /&gt;
=== DMARC (Domain based Message Authentication, Reporting and Conformance) ===&lt;br /&gt;
https://dmarcian.com/dmarc-record/&lt;br /&gt;
&lt;br /&gt;
Example of a DNS TXT record:&lt;br /&gt;
 _dmarc                  IN TXT      &amp;quot;v=DMARC1;p=quarantine;rua=mailto:postmaster@example.de&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In this case suspicious emails are moved to a quarantine folder and a report is sent to the postmaster.&lt;br /&gt;
For integration into Postfix there is the package opendmarc.&lt;br /&gt;
Implementing SPF, DKIM and DMARC in Postfix:&lt;br /&gt;
&lt;br /&gt;
https://www.skelleton.net/2015/03/21/how-to-eliminate-spam-and-protect-your-name-with-dmarc/&lt;br /&gt;
(untested)&lt;br /&gt;
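Once the DMARC record is published, it can at least be verified from any host (assumes dig is installed):&lt;br /&gt;

```shell
# Verify the published DMARC policy
dig +short TXT _dmarc.example.de
```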
&lt;br /&gt;
== Useful Commands ==&lt;br /&gt;
Create a database file from a text file:&lt;br /&gt;
 postmap &amp;lt;filename&amp;gt;&lt;br /&gt;
Show all config parameters:&lt;br /&gt;
  postconf&lt;br /&gt;
Check the configuration:&lt;br /&gt;
  postfix check&lt;br /&gt;
Inspect the mail queue:&lt;br /&gt;
  mailq&lt;br /&gt;
Deliver all messages in the queue:&lt;br /&gt;
  postqueue -f&lt;br /&gt;
Deliver only one specific message:&lt;br /&gt;
  postqueue -i &amp;lt;ID&amp;gt;&lt;br /&gt;
Delete a message:&lt;br /&gt;
  postsuper -d &amp;lt;ID&amp;gt;&lt;br /&gt;
Delete all messages (!):&lt;br /&gt;
  postsuper -d ALL&lt;br /&gt;
View the content of a message:&lt;br /&gt;
  postcat -vq &amp;lt;ID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Logfile ==&lt;br /&gt;
&lt;br /&gt;
Everything is logged to &#039;&#039;/var/log/mail.log&#039;&#039;; errors additionally go to &#039;&#039;/var/log/mail.err&#039;&#039; and warnings to &#039;&#039;/var/log/mail.warn&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;Pro tip: create an alias:&lt;br /&gt;
 alias maillog=&#039;tail -f /var/log/mail.log&#039;&lt;br /&gt;
&lt;br /&gt;
== Greylisting and Antispam ==&lt;br /&gt;
&lt;br /&gt;
To fight the flood of spam there is the handy program &#039;&#039;&#039;&amp;quot;Postgrey&amp;quot;&#039;&#039;&#039;. On Debian it can be installed as a package. It is hooked into main.cf in the smtpd_recipient_restrictions section.&lt;br /&gt;
 smtpd_recipient_restrictions = &lt;br /&gt;
  permit_mynetworks&lt;br /&gt;
  permit_sasl_authenticated&lt;br /&gt;
  permit_tls_clientcerts&lt;br /&gt;
  reject_unauth_destination&lt;br /&gt;
  &#039;&#039;&#039;reject_non_fqdn_sender&#039;&#039;&#039;&lt;br /&gt;
  &#039;&#039;&#039;reject_non_fqdn_recipient&#039;&#039;&#039;&lt;br /&gt;
  &#039;&#039;&#039;reject_rbl_client bl.spamcop.net&#039;&#039;&#039;&lt;br /&gt;
  &#039;&#039;&#039;check_policy_service inet:127.0.0.1:10023&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
(The blacklist &#039;&#039;dnsbl.sorbs.net&#039;&#039; was left out here because it blocks pretty much everything, e.g. all yahoo or gmx addresses.)&lt;br /&gt;
For this to work, Postgrey itself of course still has to be brought up.&lt;br /&gt;
For that, edit the file &#039;&#039;/etc/default/postgrey&#039;&#039;. An example:&lt;br /&gt;
 POSTGREY_OPTS=&amp;quot;--inet=10023 --auto-whitelist-clients=8&amp;quot;&lt;br /&gt;
 POSTGREY_TEXT=&amp;quot;Busy. Come back in 5 minutes.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So the service listens on port 10023. In the example above a sender is automatically whitelisted on its 8th successful delivery attempt (optional parameter &#039;&#039;--auto-whitelist-clients&#039;&#039;; consider raising the number or dropping the parameter entirely).&lt;br /&gt;
&lt;br /&gt;
Next, change into the directory &#039;&#039;/etc/postgrey&#039;&#039;. There are two whitelists there. Senders go into &#039;&#039;&#039;whitelist_clients&#039;&#039;&#039;, which already contains IPs and domains of various providers. You can add entries of your own there (e.g. example.ch).&lt;br /&gt;
&lt;br /&gt;
In the file &#039;&#039;&#039;whitelist_recipients&#039;&#039;&#039; you can list all recipients of your own domain that must always receive email in any case, e.g. postmaster@, abuse@.&amp;lt;br&amp;gt;&lt;br /&gt;
Note: &#039;&#039;&#039;the files must be sourced explicitly&#039;&#039;&#039;; this does not happen automatically. This is done via POSTGREY_OPTS:&lt;br /&gt;
 POSTGREY_OPTS=&amp;quot;$POSTGREY_OPTS --whitelist-clients=/etc/postgrey/whitelist_clients --whitelist-recipients=/etc/postgrey/whitelist_recipients&amp;quot;&lt;br /&gt;
&lt;br /&gt;
After making the changes, (re)start the postgrey service.&lt;br /&gt;
  # service postgrey restart&lt;br /&gt;
Check that the service is running, e.g. with:&lt;br /&gt;
 # lsof -i :10023&lt;br /&gt;
Then reload Postfix&lt;br /&gt;
 # postfix reload&lt;br /&gt;
and keep an eye on the mailbox(es) with regard to spam volume.&lt;br /&gt;
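Whether greylisting actually kicks in can also be read from the log: the first delivery attempt from an unknown client/sender/recipient triple is deferred with the text configured above, later attempts pass. A quick check (the log wording varies between versions):&lt;br /&gt;

```shell
# Show recent greylisting decisions
grep -i greylist /var/log/mail.log | tail -n 5
```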
&lt;br /&gt;
&#039;&#039;(Source: article &amp;quot;Postzusteller&amp;quot;, Admin-Magazin, issue 03-2013)&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Dovecot =&lt;br /&gt;
&lt;br /&gt;
Open-source IMAP server that delivers email into mailboxes and serves them via POP3, IMAP or IMAPS. Only IMAPS is covered below.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
It is advisable to install Dovecot on the same system as Postfix. Other setups are not covered here.&lt;br /&gt;
&lt;br /&gt;
Install the imapd with&lt;br /&gt;
 # apt install dovecot-imapd&lt;br /&gt;
&lt;br /&gt;
This is sufficient for all basic mail delivery functions. For extended options such as filtering, further dovecot packages like &#039;&#039;&#039;dovecot-antispam, dovecot-sieve&#039;&#039;&#039; can be installed.&lt;br /&gt;
&lt;br /&gt;
Create a user (here &#039;&#039;vmail&#039;&#039;) as owner of the mailboxes:&lt;br /&gt;
&lt;br /&gt;
 useradd -u 4000 -m -d /home/vmail -s /usr/sbin/nologin vmail&lt;br /&gt;
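Verify the account afterwards; the entry should show UID 4000 and the nologin shell:&lt;br /&gt;

```shell
# Show the freshly created mailbox owner
getent passwd vmail
```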
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Adjust the config files in &#039;&#039;/etc/dovecot/conf.d&#039;&#039;.&lt;br /&gt;
The file &#039;&#039;/etc/dovecot/dovecot.conf&#039;&#039; includes all files under conf.d/*.conf by default.&lt;br /&gt;
&lt;br /&gt;
=== User Management ===&lt;br /&gt;
&lt;br /&gt;
Here is an example where the users are kept in a separate file.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;10-auth.conf:&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
disable_plaintext_auth = no&lt;br /&gt;
auth_username_format = %n&lt;br /&gt;
auth_master_user_separator = *&lt;br /&gt;
auth_mechanisms = plain login &lt;br /&gt;
!include auth-master.conf.ext&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If not authenticating via PAM:&lt;br /&gt;
 #!include auth-system.conf.ext&lt;br /&gt;
&lt;br /&gt;
Plaintext auth can be allowed because the user passwords are stored as hashed strings. For the communication between Postfix and Dovecot it does not matter anyway, since both services live on the same server. Access from a MUA will go over TLS/SSL (see below).&lt;br /&gt;
&lt;br /&gt;
Create a master user (optional):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
doveadm pw -p supergeheim -s SHA256-CRYPT -u administrator@example.de&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Paste the output together with the username into the file master-users.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat  ../master-users     &lt;br /&gt;
administrator@example.de:{SHA256-CRYPT}$5$9zrt7/e2CDkPmSuA$SNEkm/L4XZcYFAbYkJp5ESl9u35fVBSd4ukO0dm5yp3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create regular users:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
doveadm pw -p strenggeheim -s SHA256-CRYPT -u sunflower@example.de&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
→ /etc/dovecot/users:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sunflower:{SHA256-CRYPT}$5$D3PhhtqUhRXT7cmZ$E5244BpvNafb.9FtbhF9AUfbvw8XpnOJhPyM/q/rRN2:::Sun Flo,,,:/var/mail/example.de/sunflower:/bin/false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Do not use abbreviations such as &#039;&#039;%d&#039;&#039; or &#039;&#039;%n&#039;&#039; here, because they are not interpreted by sieve (see below) and only partially by dovecot.&lt;br /&gt;
&lt;br /&gt;
So that the account can actually receive email, extend the virtual table in the Postfix directory:&lt;br /&gt;
  echo &amp;quot;sunflower@example.de  example.de/sunflower&amp;quot; &amp;gt;&amp;gt; /etc/postfix/virtual&lt;br /&gt;
Activate with&lt;br /&gt;
 postmap virtual&lt;br /&gt;
 postfix reload&lt;br /&gt;
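Whether the compiled map actually resolves can be checked with a direct lookup, which should print the right-hand side of the entry:&lt;br /&gt;

```shell
# Query the compiled virtual map for the new address
postmap -q sunflower@example.de /etc/postfix/virtual
```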
&lt;br /&gt;
=== File Permissions ===&lt;br /&gt;
&lt;br /&gt;
The files master-users and users should be readable by dovecot only!&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# chgrp dovecot /etc/dovecot/*users&lt;br /&gt;
# chmod o-r /etc/dovecot/*users&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the mailbox and give the user ownership:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# maildirmake.dovecot /var/mail/&amp;lt;domain&amp;gt;/&amp;lt;username&amp;gt;&lt;br /&gt;
# chown -R vmail:vmail /var/mail/&amp;lt;domain&amp;gt;/&amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create the user in Postfix, in the virtual maps, see above.&lt;br /&gt;
&lt;br /&gt;
Verify:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# doveadm user &amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Configuring IMAP ===&lt;br /&gt;
Pro tip: back up conf.d first:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rsync -av /etc/dovecot/conf.d/ /etc/dovecot/conf.d.orig/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Adjust the following configuration files in conf.d accordingly:&lt;br /&gt;
* &#039;&#039;&#039;10-auth.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth_allow_cleartext = yes&lt;br /&gt;
auth_username_format = %{user|username|lower}&lt;br /&gt;
auth_master_user_separator = *&lt;br /&gt;
auth_mechanisms = plain login&lt;br /&gt;
&lt;br /&gt;
!include auth-master.conf.ext&lt;br /&gt;
!include auth-system.conf.ext&lt;br /&gt;
!include auth-passwdfile.conf.ext&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;10-mail.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# the mail_location parameter no longer exists&lt;br /&gt;
mail_driver = maildir&lt;br /&gt;
mail_home = /var/mail/%{user | domain}/%{user}&lt;br /&gt;
mail_path = %{home}&lt;br /&gt;
namespace inbox { &lt;br /&gt;
 inbox = yes &lt;br /&gt;
} &lt;br /&gt;
mail_uid = 4000 &lt;br /&gt;
mail_gid = 4000&lt;br /&gt;
mail_privileged_group = mail&lt;br /&gt;
protocol !indexer-worker {&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;10-master.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
service imap-login { &lt;br /&gt;
  inet_listener imaps { &lt;br /&gt;
   port = 993 &lt;br /&gt;
   ssl = yes &lt;br /&gt;
 } &lt;br /&gt;
}&lt;br /&gt;
service auth { &lt;br /&gt;
 unix_listener auth-userdb { &lt;br /&gt;
   user = vmail  &lt;br /&gt;
   group = vmail &lt;br /&gt;
 } &lt;br /&gt;
 unix_listener /var/spool/postfix/private/auth { &lt;br /&gt;
   mode = 0666 &lt;br /&gt;
   user = postfix &lt;br /&gt;
   group = postfix &lt;br /&gt;
 } &lt;br /&gt;
}&lt;br /&gt;
service stats { &lt;br /&gt;
	unix_listener stats-reader { &lt;br /&gt;
               user = vmail &lt;br /&gt;
               group = vmail &lt;br /&gt;
               mode = 0660 &lt;br /&gt;
     } &lt;br /&gt;
&lt;br /&gt;
	unix_listener stats-writer { &lt;br /&gt;
       user = vmail &lt;br /&gt;
       group = vmail &lt;br /&gt;
       mode = 0660 &lt;br /&gt;
   	} &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;10-ssl.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# (e.g. reuse the Postfix certs)&lt;br /&gt;
ssl = yes&lt;br /&gt;
ssl_server_cert_file = /etc/dovecot/private/dovecot.pem&lt;br /&gt;
ssl_server_key_file = /etc/dovecot/private/dovecot.key&lt;br /&gt;
ssl_min_protocol = TLSv1.2&lt;br /&gt;
&lt;br /&gt;
#ssl_client_ca_dir = /etc/ssl/certs &lt;br /&gt;
#ssl_dh = &amp;lt;/usr/share/dovecot/dh.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Generating certificates: see https://wiki.nomorebluescreen.de/index.php?title=Webserver_mit_Apache#Alternative_letsencrypt&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Spoiler:&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
Every time the certificate is replaced, the dovecot service has to be restarted so that the new certificate is actually read.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Check which files were touched:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
diff -quw conf.d.orig conf.d&lt;br /&gt;
Files conf.d.orig/10-ssl.conf and conf.d/10-ssl.conf differ&lt;br /&gt;
Files conf.d.orig/15-lda.conf and conf.d/15-lda.conf differ&lt;br /&gt;
Files conf.d.orig/20-imap.conf and conf.d/20-imap.conf differ&lt;br /&gt;
Files conf.d.orig/20-managesieve.conf and conf.d/20-managesieve.conf differ&lt;br /&gt;
Files conf.d.orig/90-sieve.conf and conf.d/90-sieve.conf differ&lt;br /&gt;
Files conf.d.orig/auth-passwdfile.conf.ext and conf.d/auth-passwdfile.conf.ext differ&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Print the entire config&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 # doveconf -n&lt;br /&gt;
&lt;br /&gt;
==== Sieve ====&lt;br /&gt;
Engine for filtering emails&lt;br /&gt;
&lt;br /&gt;
Install dovecot-sieve and dovecot-managesieved&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;15-lda.conf:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lda_mailbox_autocreate = yes&lt;br /&gt;
lda_mailbox_autosubscribe = yes&lt;br /&gt;
protocol lda { &lt;br /&gt;
 mail_plugins = $mail_plugins sieve &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;20-managesieve.conf:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
protocols = $protocols sieve&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;90-sieve.conf:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
plugin { &lt;br /&gt;
 sieve = file:~/sieve;active=~/.dovecot.sieve &lt;br /&gt;
 sieve_default = /var/lib/dovecot/sieve/default.sieve &lt;br /&gt;
 sieve_global_dir = /var/lib/dovecot/sieve &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Check whether the sieve service is running and listening on port 4190.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# service dovecot restart&lt;br /&gt;
# ss -plnt | grep 4190&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now a directory named &amp;quot;sieve&amp;quot; should be visible in the user&#039;s mail home.&lt;br /&gt;
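A minimal global default script matching the sieve_default path configured above can be created and compiled like this (the X-Spam-Flag test is an example; adjust it to whatever header your spam scanner sets):&lt;br /&gt;

```shell
# Create and compile a minimal default sieve script
mkdir -p /var/lib/dovecot/sieve
cat > /var/lib/dovecot/sieve/default.sieve <<'EOF'
require ["fileinto"];
# File everything a spam scanner has flagged into the Junk folder
if header :contains "X-Spam-Flag" "YES" {
    fileinto "Junk";
}
EOF
sievec /var/lib/dovecot/sieve/default.sieve
```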
&lt;br /&gt;
=== Transport from Postfix to Dovecot ===&lt;br /&gt;
&lt;br /&gt;
Postfix still has to be told that the emails go to Dovecot.&amp;lt;br&amp;gt;&lt;br /&gt;
Adjust &#039;&#039;&#039;master.cf&#039;&#039;&#039; in Postfix (the entries in the {} are meant literally, do not substitute them!):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dovecot unix - n n - - pipe flags=DRhu user=vmail:vmail&lt;br /&gt;
 argv=/usr/lib/dovecot/deliver -a ${recipient} -f ${sender}&lt;br /&gt;
 -d ${user}@${nexthop} -m ${extension}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and add a mailbox_command entry in main.cf:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mailbox_command = /usr/lib/dovecot/dovecot-lda -f &amp;quot;$SENDER&amp;quot; -a &amp;quot;$RECIPIENT&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(https://doc.dovecot.org/configuration_manual/howto/dovecot_lda_postfix/#howto-dovecot-lda-postfix)&lt;br /&gt;
&lt;br /&gt;
Afterwards, restart the postfix and dovecot services.&lt;br /&gt;
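A quick end-to-end test, using the example account from above:&lt;br /&gt;

```shell
# Inject a local test mail, give the LDA a moment, then check Maildir and log
echo "test body" | sendmail -f root@example.de sunflower@example.de
sleep 2
ls /var/mail/example.de/sunflower/new/
tail -n 3 /var/log/mail.log
```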
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
Log entries go (like postfix) to /var/log/mail.log &amp;lt;br&amp;gt;&lt;br /&gt;
Useful alias:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
alias maillog=&#039;tail -f /var/log/mail.log&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enable debugging:&lt;br /&gt;
 mail_debug = yes&lt;br /&gt;
in the file &#039;&#039;10-logging.conf&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Pro tip:&#039;&#039;&#039;&lt;br /&gt;
If the following error appears in the log:&lt;br /&gt;
&lt;br /&gt;
 Mar 27 08:03:56 aphantopus postfix/pipe[2317]: 521066005D: to=&amp;lt;sunflower@example.de&amp;gt;, relay=dovecot, delay=0.3, delays=0.19/0.04/0/0.07, dsn=2.0.0, status=sent (delivered via dovecot service (lda(sunflower@example.de,)Error: net_connect_unix(/var/run/dovecot/stats-writer) failed: Permission denied))&lt;br /&gt;
&lt;br /&gt;
=&amp;gt; Allow &#039;&#039;&#039;stats&#039;&#039;&#039; for user vmail in &#039;&#039;10-master.conf&#039;&#039; (see above)&lt;br /&gt;
&lt;br /&gt;
= Roundcube =&lt;br /&gt;
&lt;br /&gt;
A handy web frontend for retrieving and sending email&lt;br /&gt;
&lt;br /&gt;
Install the mysql server first, otherwise the installation aborts with an error&lt;br /&gt;
 # apt install mariadb-server roundcube&lt;br /&gt;
Answer the dbconfig-common question with &amp;quot;yes&amp;quot; and set the mysql password.&lt;br /&gt;
Adjust the config file (&#039;&#039;/etc/roundcube/config.inc.php&#039;&#039;):&lt;br /&gt;
 $config[&#039;smtp_server&#039;] = &#039;localhost&#039;;&lt;br /&gt;
 $config[&#039;smtp_port&#039;] = 25;&lt;br /&gt;
&lt;br /&gt;
== Plugins ==&lt;br /&gt;
&lt;br /&gt;
Install the standard plugins&lt;br /&gt;
 # apt install roundcube-plugins&lt;br /&gt;
&lt;br /&gt;
Install further plugins:&lt;br /&gt;
 # apt install roundcube-plugins-extra git curl composer&lt;br /&gt;
(composer is needed for installing plugins; git, because most of them come from github)&amp;lt;br&amp;gt;&lt;br /&gt;
Enable the ones you want in the file &#039;&#039;/etc/roundcube/config.inc.php&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 $config[&#039;plugins&#039;] = array( &lt;br /&gt;
        &#039;compose-addressbook&#039;, &lt;br /&gt;
        &#039;markasjunk2&#039;, &lt;br /&gt;
        &#039;fail2ban&#039; &lt;br /&gt;
 );&lt;br /&gt;
&lt;br /&gt;
Overview of the official plugins:&lt;br /&gt;
&lt;br /&gt;
https://plugins.roundcube.net/&lt;br /&gt;
&lt;br /&gt;
Plugins that are not available as a package:&amp;lt;br&amp;gt;&lt;br /&gt;
Steps:&amp;lt;br&amp;gt;&lt;br /&gt;
* Read the README&lt;br /&gt;
* Download the plugin as a zip and unpack it to &#039;&#039;/usr/share/roundcube/plugins&#039;&#039;&lt;br /&gt;
* (rename it if necessary)&lt;br /&gt;
* Edit &#039;&#039;/etc/roundcube/config.inc.php&#039;&#039;:&lt;br /&gt;
Find the section&lt;br /&gt;
 $config[&#039;plugins&#039;] = array(&lt;br /&gt;
and add the missing plugin&lt;br /&gt;
&lt;br /&gt;
== Filter Plugin for Sieve ==&lt;br /&gt;
&lt;br /&gt;
Caution: do not use the plugin &amp;quot;filter&amp;quot;, use &#039;&#039;&#039;managesieve&#039;&#039;&#039; instead (part of the roundcube-plugins package)&lt;br /&gt;
&lt;br /&gt;
A guide is available here:&amp;lt;br&amp;gt;&lt;br /&gt;
https://www.pair.com/support/kb/how-to-add-sieve-filtering-code-in-roundcube/ &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
https://www.pair.com/support/kb/how-to-add-sieve-filtering-in-roundcube/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: do not edit the protocols parameter in dovecot.conf but in&lt;br /&gt;
&#039;&#039;20-managesieve.conf&#039;&#039; (see above):&lt;br /&gt;
&lt;br /&gt;
 protocols = $protocols sieve&lt;br /&gt;
&lt;br /&gt;
Now sieve filter rules can be created via the web frontend&lt;br /&gt;
&lt;br /&gt;
Caution, bug:&amp;lt;br&amp;gt;&lt;br /&gt;
Sieve creates a sieve directory below the directory defined in mail_location. If you want to store the users&#039; emails under &#039;&#039;/var/mail/&amp;lt;domain&amp;gt;/&amp;lt;username&amp;gt;&#039;&#039;, you would configure:&lt;br /&gt;
&lt;br /&gt;
 mail_location = maildir:/var/mail/%d/%n&lt;br /&gt;
&lt;br /&gt;
But since dovecot does not interpret %d (see above), the user&#039;s maildirectory ends up under /var/mail/&amp;lt;username&amp;gt;. Sieve, on the other hand, does not interpret %n and creates a directory /var/mail/&amp;lt;domain&amp;gt;/%n/sieve, which holds the roundcube.sieve file. Thus all users access the same file, which is technically possible but fatal security-wise. Unfortunately no good workaround is known.&lt;br /&gt;
&lt;br /&gt;
== Changing Passwords ==&lt;br /&gt;
To let users change their password themselves, enable the plugin in &#039;&#039;config.inc.php&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$config[&#039;plugins&#039;] = array(&lt;br /&gt;
 (...)&lt;br /&gt;
 &#039;password&#039;&lt;br /&gt;
);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Further settings if the users are maintained in a password file as described in the &#039;&#039;&#039;Dovecot&#039;&#039;&#039; chapter:&amp;lt;br&amp;gt;&lt;br /&gt;
(we assume the user passwords are hashed with sha512, see above)&lt;br /&gt;
&lt;br /&gt;
 # https://stackoverflow.com/questions/62655236/how-to-enable-password-plugin-on-roundcube&lt;br /&gt;
 $config[&#039;password_algorithm&#039;] = &#039;ssha512&#039;;&lt;br /&gt;
 $config[&#039;password_algorithm_prefix&#039;] = &#039;{SSHA512}&#039;;&lt;br /&gt;
 $config[&#039;password_driver&#039;] = &#039;dovecot_passwdfile&#039;;&lt;br /&gt;
 $config[&#039;password_dovecot_passwdfile_path&#039;] = &#039;/etc/dovecot/users&#039;;&lt;br /&gt;
&lt;br /&gt;
The dovecot users file must then be readable and writable by www-data accordingly:&lt;br /&gt;
 -rw-rw---- 1 dovecot www-data 1240 Dec  2 23:20 /etc/dovecot/users&lt;br /&gt;
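The ownership and mode shown above can be set like this:&lt;br /&gt;

```shell
# dovecot stays the owner, the web server group gets read/write access
chown dovecot:www-data /etc/dovecot/users
chmod 660 /etc/dovecot/users
ls -l /etc/dovecot/users
```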
&lt;br /&gt;
(Caution: risky if the web server ever gets compromised! As an alternative, consider moving the dovecot passwords into a [mysql] DB)&lt;br /&gt;
&lt;br /&gt;
== Changing Identities ==&lt;br /&gt;
&lt;br /&gt;
Normally a user can only send with their own sender address. That is a sensible setting, but whoever wants to switch the feature off for testing purposes can set the following:&lt;br /&gt;
 $config[&#039;identities_level&#039;] = 0;&lt;br /&gt;
Now the user can add further sender addresses via &amp;quot;Settings&amp;quot; (https://www.servercake.blog/multiple-identities-roundcube/)&lt;br /&gt;
&lt;br /&gt;
(Unfortunately no way has been found so far to restrict this to (a) specific user(s))&lt;br /&gt;
&lt;br /&gt;
== Apache Integration ==&lt;br /&gt;
&lt;br /&gt;
Here is an example configuration for a virtual host so that the Roundcube site can be reached at https://mail.example.de.&lt;br /&gt;
More on this in the chapter [[Webserver mit Apache|apache]]&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;VirtualHost *:443&amp;gt; &lt;br /&gt;
    ServerName mail.example.de &lt;br /&gt;
    ServerAdmin postmaster@example.de &lt;br /&gt;
&lt;br /&gt;
    SSLEngine on &lt;br /&gt;
    SSLCertificateFile         /var/lib/dehydrated/certs/mail.example.de/fullchain.pem &lt;br /&gt;
    SSLCertificateKeyFile /var/lib/dehydrated/certs/mail.example.de/privkey.pem &lt;br /&gt;
&lt;br /&gt;
    DocumentRoot /usr/share/roundcube &lt;br /&gt;
&lt;br /&gt;
    # Includes &lt;br /&gt;
    Include /etc/apache2/conf-available/ssl-encryption.conf &lt;br /&gt;
      &lt;br /&gt;
    LogFormat &amp;quot;%h %l %u %t \&amp;quot;%r\&amp;quot; %&amp;gt;s %b \&amp;quot;%{Referer}i\&amp;quot; \&amp;quot;%{User-agent}i\&amp;quot;&amp;quot; vhost_combined &lt;br /&gt;
    CustomLog /var/log/apache2/mail-ssl.log combined &lt;br /&gt;
    ErrorLog /var/log/apache2/mail-ssl-error.log &lt;br /&gt;
&amp;lt;/VirtualHost&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The roundcube package additionally ships a roundcube.conf, which gets installed as /etc/apache2/conf-available/roundcube.conf and enabled.&lt;br /&gt;
&lt;br /&gt;
=== PHP ===&lt;br /&gt;
For Roundcube to run at all, the php module must be installed and enabled. On Debian this is done with:&lt;br /&gt;
 # apt install libapache2-mod-php&lt;br /&gt;
&lt;br /&gt;
Somewhat better performance is achieved with &#039;&#039;&#039;php-fpm&#039;&#039;&#039; (https://www.zend.com/blog/apache-phpfpm-modphp).&lt;br /&gt;
 # apt install php-fpm&lt;br /&gt;
&lt;br /&gt;
PHP&#039;s default upload limit is a meager 2 MB. To raise it to e.g. 50 MB, the following file has to be touched:&lt;br /&gt;
&amp;lt;pre&amp;gt;VERSION=$(php -v| head -n 1 | awk &#039;{ print $2 }&#039; | sed -e &#039;s|.[[:digit:]]*$||&#039;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
* modphp:&lt;br /&gt;
 /etc/php/${VERSION}/apache2/php.ini&lt;br /&gt;
  upload_max_filesize = 50M&lt;br /&gt;
* php-fpm:&lt;br /&gt;
 /etc/php/${VERSION}/fpm/php.ini&lt;br /&gt;
  upload_max_filesize = 50M&lt;br /&gt;
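Putting the pieces together (the sed patterns are a sketch; verify the result in php.ini afterwards, and note that post_max_size must be at least as large as upload_max_filesize, otherwise uploads still fail):&lt;br /&gt;

```shell
# Raise the upload limit for both mod_php and php-fpm, whichever is present
VERSION=$(php -v | head -n 1 | awk '{ print $2 }' | cut -d. -f1,2)
for ini in /etc/php/${VERSION}/apache2/php.ini /etc/php/${VERSION}/fpm/php.ini; do
    if [ -f "$ini" ]; then
        sed -i 's/^upload_max_filesize.*/upload_max_filesize = 50M/' "$ini"
        sed -i 's/^post_max_size.*/post_max_size = 50M/' "$ini"
    fi
done
```

Afterwards reload apache2 or the php-fpm service so the new limits take effect.&lt;br /&gt;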
&lt;br /&gt;
= Integration into a MUA =&lt;br /&gt;
If you do not want to go through the (slow) webmailer, you can of course use a MUA of your choice. An example follows.&lt;br /&gt;
&lt;br /&gt;
== Thunderbird==&lt;br /&gt;
&lt;br /&gt;
Settings for the Outgoing Server (SMTP)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Server Name: FQDN of the mail server&lt;br /&gt;
Port: 25&lt;br /&gt;
Connection Security: STARTTLS&lt;br /&gt;
Authentication Method: Normal Password&lt;br /&gt;
Username: name of the mailbox user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On first use you will be asked for your mailbox password. Enter and save it.&lt;br /&gt;
&lt;br /&gt;
Server Settings (IMAP)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Server Type: IMAP Mail Server&lt;br /&gt;
Server Name: FQDN of the mail server&lt;br /&gt;
Port: 993&lt;br /&gt;
Username: name of the mailbox user&lt;br /&gt;
Connection Security: SSL/TLS&lt;br /&gt;
Authentication Method: Normal Password&lt;br /&gt;
(Leave the remaining defaults as they are, or adjust as needed)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot thunderbird1.png|900px]]&lt;br /&gt;
[[File:Screenshot thunderbird2.png|900px]]&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=Emailserver_mit_Postfix_und_Dovecot&amp;diff=672</id>
		<title>Emailserver mit Postfix und Dovecot</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=Emailserver_mit_Postfix_und_Dovecot&amp;diff=672"/>
		<updated>2026-03-18T22:43:04Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* IMAP konfigurieren */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Postfix =&lt;br /&gt;
&lt;br /&gt;
Postfix is an MTA (Mail Transfer Agent) that makes a good alternative to other common MTAs (Sendmail, Exim) because its configuration is quite readable.&lt;br /&gt;
In our example the MTA is to be combined with an IMAPd (Dovecot), so that users have a mailbox directly on the system. Mail retrieval is done via IMAPS.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
First, the package &amp;quot;postfix&amp;quot; has to be installed.&lt;br /&gt;
A few questions have to be answered along the way:&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Type of server: &#039;&#039;&#039;Internet Site&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Root and postmaster mail recipient:&#039;&#039;&#039; enter a mailbox, e.g. postmaster@example.de &amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Other destinations to accept mail for (blank for none):&#039;&#039;&#039; e.g. mail.example.de, localhost, $mydomain (the defaults are fine for a start)&lt;br /&gt;
* &#039;&#039;&#039;Force synchronous updates on mail queue:&#039;&#039;&#039; no&lt;br /&gt;
* &#039;&#039;&#039;Local subnets:&#039;&#039;&#039; 127.0.0.0/8, 192.168.63.0/24 (add your own network here)&lt;br /&gt;
* &#039;&#039;&#039;Mailbox size limit:&#039;&#039;&#039; 0 (unlimited)&lt;br /&gt;
* &#039;&#039;&#039;Local address extension character:&#039;&#039;&#039; + (generally a sufficient default)&lt;br /&gt;
* &#039;&#039;&#039;Internet protocols to use:&#039;&#039;&#039; all (unless you explicitly want to speak only ipv4 or ipv6)&lt;br /&gt;
&lt;br /&gt;
These settings can be changed at any time with&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
 # dpkg-reconfigure postfix&amp;lt;/console&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All relevant files are located in the directory &#039;&#039;/etc/postfix&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Before we get to the Postfix config, check the content of the file /etc/mailname:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 $ cat /etc/mailname&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It must contain &#039;&#039;&#039;only the domain name&#039;&#039;&#039;, not the hostname (e.g. example.com). Otherwise this can affect outgoing mail, especially if the file is referenced from main.cf (see below).&lt;br /&gt;
&lt;br /&gt;
The most important file to adjust first is &#039;&#039;&#039;main.cf&#039;&#039;&#039;. Here is an example for the server &amp;quot;mx&amp;quot; in the domain example.de. The following parameters should be configured (exemplarily):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 myhostname&lt;br /&gt;
 mydomain&lt;br /&gt;
 myorigin&lt;br /&gt;
 mydestination&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually there are already a few usable defaults. The parameter &#039;&#039;mynetworks&#039;&#039; allows certain networks to submit email without further restrictions.&lt;br /&gt;
&lt;br /&gt;
Example config:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
myhostname = mx01.example.de &lt;br /&gt;
mydomain = example.de &lt;br /&gt;
myorigin = $mydomain &lt;br /&gt;
mydestination = $myhostname, localhost, localhost.$mydomain&lt;br /&gt;
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.99.0/24 [2001:780:11b::/48] 214.94.24.154 [2004:780:8:0:5ff0:c5ff:fe09:98f9]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Above all, the parameter &#039;&#039;&#039;mynetworks&#039;&#039;&#039; should be chosen with care, because these are all the hosts and networks from which any email is accepted (including from &#039;&#039;and&#039;&#039; to foreign domains). Misconfigurations here quickly lead to an [https://practical365.com/what-is-an-open-relay open relay].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; ipv6 addresses must be written in [square brackets], otherwise no email gets delivered!&lt;br /&gt;
Error in the logfile:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
postfix/smtpd[21928]: warning: smtpd_client_event_limit_exceptions: 2a01:40f8:c013:5050::0/64: table lookup problem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After every change the service has to be reloaded with&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
 # postfix reload&lt;br /&gt;
&amp;lt;/console&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whether the reload worked properly can be checked, for example, in the log:&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
 # tail /var/log/mail.log&lt;br /&gt;
&amp;lt;/console&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== master.cf ===&lt;br /&gt;
The core of the Postfix daemon; this is where the transports are defined.&lt;br /&gt;
Meaning of the columns:&lt;br /&gt;
* service field: name of the service (smtp, local, procmail, ...) (str)&lt;br /&gt;
* type field: connection type (inet, fifo, unix) (str)&lt;br /&gt;
* private field: access restricted to the mail system itself (default: y) (bool)&lt;br /&gt;
* unpriv field: run as an unprivileged user (default: y) or as root (n) (bool)&lt;br /&gt;
* chroot: whether the service is started in a chroot environment (default: y) (bool)&lt;br /&gt;
* wakeup field: seconds between two invocations (default: 0 = never) (int)&lt;br /&gt;
* process limit: maximum number of simultaneous processes (default: 100) (int)&lt;br /&gt;
These columns are followed by a command with flags and parameters (optional).&lt;br /&gt;
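The column layout can be illustrated with a typical stock entry (illustrative; distribution defaults for the chroot column vary):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# service type  private unpriv  chroot  wakeup  maxproc command + args&lt;br /&gt;
smtp      inet  n       -       y       -       -       smtpd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Here &#039;&#039;smtp&#039;&#039; listens on the network (inet), so it cannot be private (n); the remaining columns keep their defaults (-).&lt;br /&gt;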
&lt;br /&gt;
== TLS ==&lt;br /&gt;
Optionally, mail transport can be encrypted using certificates. This only works if the mail server on the other side also supports TLS. The certificate can also be used in a mail client (see later).&lt;br /&gt;
Certificate generation can be done with [https://letsencrypt.org letsencrypt]. Clients for certificate generation are [[Webserver_mit_Apache#Alternative_letsencrypt | certbot ]] or [[Webserver_mit_Apache#Dehydrated | dehydrated]].&lt;br /&gt;
&lt;br /&gt;
=== Certificate generation in brief ===&lt;br /&gt;
 # echo $HOSTNAME &amp;gt; /etc/dehydrated/domains.txt&lt;br /&gt;
 # dehydrated --register --accept-terms&lt;br /&gt;
 # dehydrated -c&lt;br /&gt;
&lt;br /&gt;
This requires a [[Webserver_mit_Apache | web server]] listening on port 80. If there is none, letsencrypt can alternatively be used via DNS (https://letsencrypt.org/docs/challenge-types).&lt;br /&gt;
&lt;br /&gt;
=== Alternative: your own CA (not recommended) ===&lt;br /&gt;
&lt;br /&gt;
If you absolutely want to run your own CA, you can do so with the following guide. Caution: this approach should only be chosen if there is a compelling reason. Many browsers and MUAs have trouble with it, produce ugly warnings, or refuse the site altogether.&lt;br /&gt;
&lt;br /&gt;
==== Creating the CA ====&lt;br /&gt;
If no certificate exists yet, you can create one yourself, or create a CSR (Certificate Signing Request) and send it to an official CA. If a commercially used mail server is the goal, the latter is the realistic path.&lt;br /&gt;
&lt;br /&gt;
Note: this step is not needed if a CA already exists.&lt;br /&gt;
&lt;br /&gt;
Steps:&amp;lt;br&amp;gt;&lt;br /&gt;
Create a key (and set a password for it), create a certificate request for the CA, create the CA&lt;br /&gt;
&lt;br /&gt;
 $ openssl genrsa -out ca.key -des3 4096&lt;br /&gt;
  &lt;br /&gt;
 $ openssl req -new -x509 -days 3650 -key ca.key -out ca.crt&lt;br /&gt;
&lt;br /&gt;
 Enter pass phrase for ca.key: &lt;br /&gt;
 You are about to be asked to enter information that will be incorporated &lt;br /&gt;
 into your certificate request. &lt;br /&gt;
 What you are about to enter is what is called a Distinguished Name or a DN. &lt;br /&gt;
 There are quite a few fields but you can leave some blank &lt;br /&gt;
 For some fields there will be a default value, &lt;br /&gt;
 If you enter &#039;.&#039;, the field will be left blank. &lt;br /&gt;
 ----- &lt;br /&gt;
 Country Name (2 letter code) [AU]:DE &lt;br /&gt;
 State or Province Name (full name) [Some-State]:Bavaria &lt;br /&gt;
 Locality Name (eg, city) []:Nuernberg &lt;br /&gt;
 Organization Name (eg, company) [Internet Widgits Pty Ltd]:example.de &lt;br /&gt;
 Organizational Unit Name (eg, section) []:Hostmaster &lt;br /&gt;
 Common Name (e.g. server FQDN or YOUR name) []:*.example.de &lt;br /&gt;
 Email Address []:postmaster@example.de&lt;br /&gt;
&lt;br /&gt;
==== Creating a certificate with the new CA ====&lt;br /&gt;
Steps:&lt;br /&gt;
* create the key&lt;br /&gt;
* create the request&lt;br /&gt;
* create and sign the certificate&lt;br /&gt;
&lt;br /&gt;
  $ openssl genrsa -out mx.example.de.key 4096&lt;br /&gt;
(do not set a password)&lt;br /&gt;
  &lt;br /&gt;
 $ openssl req -new -key mx.example.de.key -out mx.example.de.csr&lt;br /&gt;
(fill in the form again as usual)&lt;br /&gt;
  &lt;br /&gt;
 $ openssl x509 -req -days 365 -in mx.example.de.csr -CA  ca.crt -CAkey ca.key -set_serial 01 -out mx.example.de.crt&lt;br /&gt;
&lt;br /&gt;
When renewing the certificate, the steps "create the CA" and "generate the key" are no longer needed. You also do not have to specify a serial any more.&lt;br /&gt;
The renew command therefore looks like this:&lt;br /&gt;
 $ openssl x509 -req -days 730 -in mx.example.de.csr -CA ca.crt -CAkey ca.key -out mx.example.de.crt&lt;br /&gt;
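The whole sequence can be rehearsed end-to-end, non-interactively, with throwaway files (a sketch: &#039;&#039;-subj&#039;&#039; replaces the interactive form, the file names and subject values are placeholders, and the demo CA key is created without -des3 so it needs no passphrase):&lt;br /&gt;

```shell
# throwaway demo CA (no passphrase, demo only)
openssl genrsa -out demo-ca.key 2048 2>/dev/null
openssl req -new -x509 -days 1 -key demo-ca.key -out demo-ca.crt -subj "/CN=Demo CA"
# leaf key + CSR, then sign it with the demo CA
openssl genrsa -out demo-mx.key 2048 2>/dev/null
openssl req -new -key demo-mx.key -out demo-mx.csr -subj "/CN=mx.example.de"
openssl x509 -req -days 1 -in demo-mx.csr -CA demo-ca.crt -CAkey demo-ca.key \
  -set_serial 01 -out demo-mx.crt 2>/dev/null
# verify the chain; prints "demo-mx.crt: OK"
openssl verify -CAfile demo-ca.crt demo-mx.crt
```

For production use, keep the -des3 passphrase on the CA key and the longer validity periods shown above.&lt;br /&gt;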
&lt;br /&gt;
=== Adding the certificates to the config file ===&lt;br /&gt;
This step again applies to all certificates, no matter how they were generated. The paths must of course be adapted accordingly.&lt;br /&gt;
&lt;br /&gt;
Copy the certificates to the appropriate place and reference them in the config:&lt;br /&gt;
&lt;br /&gt;
 smtpd_tls_cert_file=/etc/ssl/certs/mx.example.de.crt &lt;br /&gt;
 smtpd_tls_key_file=/etc/ssl/private/mx.example.de.key &lt;br /&gt;
 smtpd_tls_CAfile=/etc/postfix/ca.crt &lt;br /&gt;
 smtpd_use_tls=yes &lt;br /&gt;
 smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_cache&lt;br /&gt;
 smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache&lt;br /&gt;
&lt;br /&gt;
The parameter smtpd_tls_CAfile can be dropped if an official CA is used (e.g. letsencrypt).&amp;lt;br&amp;gt;&lt;br /&gt;
The paths to the certificates may differ; with letsencrypt, for example, they live under &#039;&#039;/var/lib/dehydrated/certs/&#039;&#039;.&lt;br /&gt;
Verification:&lt;br /&gt;
 $ openssl s_client -connect mx.example.de:25 -starttls smtp&lt;br /&gt;
oder https://www.checktls.com/&lt;br /&gt;
&lt;br /&gt;
Addendum:&lt;br /&gt;
Some MTAs want a certificate in pfx form. It can be generated from the cert file as follows:&lt;br /&gt;
 $ openssl pkcs12 -export -out mx.example.de.pfx -inkey mx.example.de.key -in mx.example.de.crt&lt;br /&gt;
Hand the pfx file and its password over to the user.&lt;br /&gt;
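The export can be tried out with a self-signed throwaway pair (a sketch; the file names and the password "changeit" are illustrative, not values used above):&lt;br /&gt;

```shell
# throwaway key/cert pair for the demo
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo.key -out demo.crt -subj "/CN=mx.example.de" 2>/dev/null
# bundle key + cert into a pfx (PKCS#12) file
openssl pkcs12 -export -out demo.pfx -inkey demo.key -in demo.crt \
  -passout pass:changeit
# read the bundle back to confirm it is intact
openssl pkcs12 -in demo.pfx -passin pass:changeit -nokeys -clcerts 2>/dev/null | \
  openssl x509 -noout -subject
```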
&lt;br /&gt;
&#039;&#039;&#039;Spoiler:&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
[[ Emailserver_mit_Postfix_und_Dovecot#Dovecot | Dovecot ]] sometimes "forgets" the new certificate and keeps the old version, i.e. most mail clients then no longer fetch new mail. In that case, restart the Dovecot service.&lt;br /&gt;
&lt;br /&gt;
== SASL ==&lt;br /&gt;
&lt;br /&gt;
To avoid an open relay, it is strongly advised to allow submission with sender @example.de only from certain networks by default. This is done with the parameter &#039;&#039;mynetworks&#039;&#039; (see above).&lt;br /&gt;
Of course, users may want to send mail from a mail client somewhere on the internet. According to this config they would not be authorized, and since most PCs are on dynamic addresses, adding their respective IP addresses to the config makes no sense either.&amp;lt;br&amp;gt;&lt;br /&gt;
The problem can be solved by allowing mail submission once the user has successfully authenticated against the IMAP server.&lt;br /&gt;
This is what SASL is for. The corresponding entries in main.cf are:&lt;br /&gt;
 smtpd_relay_restrictions = permit_mynetworks  &lt;br /&gt;
 permit_sasl_authenticated defer_unauth_destination&lt;br /&gt;
 smtpd_sasl_auth_enable = yes &lt;br /&gt;
 smtpd_sasl_type = dovecot &lt;br /&gt;
 smtpd_sasl_local_domain = $mydomain &lt;br /&gt;
 smtpd_sasl_security_options = noanonymous &lt;br /&gt;
 smtpd_sasl_path = private/auth&lt;br /&gt;
&lt;br /&gt;
In the mail client this shows up as a prompt for the mailbox password the first time a message is sent.&lt;br /&gt;
Before enabling this feature, an IMAP server must exist (see [[#Dovecot|next chapter]]).&lt;br /&gt;
&lt;br /&gt;
== Maps ==&lt;br /&gt;
&lt;br /&gt;
To better distinguish what happens with which sender/recipient addresses, the configuration is split into so-called "maps". These can be plain-text files or Berkeley DB files. In the latter case they must be converted with the command &#039;&#039;&#039;postmap&#039;&#039;&#039; after every edit.&amp;lt;br&amp;gt;&lt;br /&gt;
Exception: the file /etc/aliases.db (only relevant for local mail delivery) is generated with the command &#039;&#039;&#039;postalias&#039;&#039;&#039; or &#039;&#039;&#039;newaliases&#039;&#039;&#039;.&lt;br /&gt;
 alias_maps = hash:/etc/aliases &lt;br /&gt;
 alias_database = hash:/etc/aliases &lt;br /&gt;
Aliases that map to another mailbox are set up here. Example:&lt;br /&gt;
 postmaster:    root&lt;br /&gt;
&lt;br /&gt;
=== Access ===&lt;br /&gt;
 smtpd_sender_restrictions = hash:/etc/postfix/access &lt;br /&gt;
Actions for specific sender addresses can be set up here. Example:&lt;br /&gt;
 example.com     DISCARD&lt;br /&gt;
&lt;br /&gt;
=== Relocated ===&lt;br /&gt;
 relocated_maps = hash:/etc/postfix/relocated &lt;br /&gt;
&lt;br /&gt;
Rejects the mail with a hint. Example:&lt;br /&gt;
 testy.test      &amp;quot;Please send mail to ich@hier.de instead of this address&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Result:&lt;br /&gt;
 &amp;lt;testy.test@example.de&amp;gt;: Recipient address rejected: User has &lt;br /&gt;
 moved to &amp;quot;Please send mail to ich@hier.de instead of&lt;br /&gt;
 this address&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Canonical ===&lt;br /&gt;
==== Sender ====&lt;br /&gt;
 sender_canonical_maps = hash:/etc/postfix/sender_canonical &lt;br /&gt;
&lt;br /&gt;
Certain addresses are rewritten to a common standard format:&lt;br /&gt;
&lt;br /&gt;
 sunflower@example.de   petra.sonne@example.de&lt;br /&gt;
 phun@work.de		   peter.hun@example.de&lt;br /&gt;
&lt;br /&gt;
==== Recipient ====&lt;br /&gt;
 recipient_canonical_maps = hash:/etc/postfix/recipient_canonical &lt;br /&gt;
Works just like sender_canonical, only for recipient addresses.&lt;br /&gt;
&lt;br /&gt;
=== Virtual Mailbox ===&lt;br /&gt;
 virtual_mailbox_maps = hash:/etc/postfix/virtual&lt;br /&gt;
Locations of the mailboxes of the IMAP server (more details under [[ Emailserver_mit_Postfix_und_Dovecot#Dovecot | Dovecot ]])&lt;br /&gt;
&lt;br /&gt;
 sunflower@example.de   	example.de/sunflower/ &lt;br /&gt;
 testy@example.de		example.de/testy &lt;br /&gt;
 test@example.de		example.de/test &lt;br /&gt;
 lmaa@ihr-koennt-mich-alle.de	ihr-koennt-mich-alle.de/lmaa&lt;br /&gt;
&lt;br /&gt;
=== Virtual Aliases ===&lt;br /&gt;
 virtual_alias_maps = hash:/etc/postfix/virtual_maps&lt;br /&gt;
&lt;br /&gt;
Addresses that are redirected to other addresses (similar to the aliases); this can also happen across domains.&lt;br /&gt;
This way, several recipient addresses can end up in the same mailbox.&lt;br /&gt;
&lt;br /&gt;
 anrufbeantworter@example.de      sunflower@example.de,H.Hirsch@gmx.de,harry1999@yahoo.de&lt;br /&gt;
 info@example.de        sunflower@example.de&lt;br /&gt;
 postmaster    root &lt;br /&gt;
 webmaster     root &lt;br /&gt;
 administrator root&lt;br /&gt;
 root    sunflower&lt;br /&gt;
 fortune:  fortune&lt;br /&gt;
The last one is a pipe. More on that later.&lt;br /&gt;
&lt;br /&gt;
Convert the text into a DB file and have Postfix re-read it:&lt;br /&gt;
 postmap &amp;lt;aliases|access|canonical|...&amp;gt;&lt;br /&gt;
 postfix reload&lt;br /&gt;
&lt;br /&gt;
=== Simple example: forwarding mail from one domain to another ===&lt;br /&gt;
Suppose we have a mail server in the domain example.com. It should forward all mail arriving for &amp;lt;userXY&amp;gt;@example.com to &amp;lt;userXY&amp;gt;@example.de. Again, an entry in the &#039;&#039;virtual_maps&#039;&#039; file mentioned above is needed:&lt;br /&gt;
 @example.com	@example.de&lt;br /&gt;
Now all example.com mail is forwarded to the mail server responsible for example.de. The user part remains unchanged.&lt;br /&gt;
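The lookup semantics of this catch-all entry can be mimicked in plain shell (a toy sketch only — Postfix does the real lookup via postmap; this just illustrates what happens to the address):&lt;br /&gt;

```shell
# toy model of the catch-all rewrite @example.com -> @example.de:
# the user part is kept, only the domain is swapped
rewrite() {
  printf '%s@example.de\n' "${1%@example.com}"
}
rewrite userXY@example.com   # prints: userXY@example.de
```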
&lt;br /&gt;
=== Transports ===&lt;br /&gt;
Transports are the way an incoming mail is handled, e.g. stored locally in a file, handed on to an IMAP server, or fed to a script.&lt;br /&gt;
&lt;br /&gt;
Here is an example: &amp;lt;br&amp;gt;&lt;br /&gt;
When mail is sent to a certain address, a script should run that sends a random quote back to the sender &#039;&#039;&#039;and&#039;&#039;&#039; at the same time delivers the mail into a mailbox.&lt;br /&gt;
Steps:&lt;br /&gt;
&lt;br /&gt;
1. Define an alias (virtual_maps):&lt;br /&gt;
&lt;br /&gt;
 fortune@example.de           fortune&lt;br /&gt;
&lt;br /&gt;
2. Map the alias to a transport (transports):&lt;br /&gt;
&lt;br /&gt;
 fortune@example.de     randomphrase:&lt;br /&gt;
&lt;br /&gt;
3. Define the transport (master.cf):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 randomphrase      unix  - n n - - pipe &lt;br /&gt;
  flags=h user=vmail:vmail argv=/usr/local/bin/randomphrase.pl ${sender}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(The user vmail must of course exist in the passwd, e.g. like this:&lt;br /&gt;
 vmail:x:4000:4000::/home/vmail:/usr/sbin/nologin&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
4. Install the script:&lt;br /&gt;
 /usr/local/bin/randomphrase.pl&lt;br /&gt;
and make it executable for everyone.&lt;br /&gt;
&lt;br /&gt;
The script [[ randomphrase.pl ]] generates a random quote. The package &#039;&#039;fortune-mod&#039;&#039; must be installed for this. &lt;br /&gt;
To pass the mail on, the script /usr/local/bin/deliver_mail.sh is called. ([[File:Deliver_mail.sh]])&lt;br /&gt;
&amp;lt;br&amp;gt; For this, the user vmail must be authorized in the file &#039;&#039;/etc/sudoers.d/vmail&#039;&#039;:&lt;br /&gt;
 vmail ALL=(root) NOPASSWD: /usr/local/bin/deliver_mail.sh&lt;br /&gt;
&lt;br /&gt;
A mail to the address fortune@example.de now triggers a reply to the sender address with a random quote.&lt;br /&gt;
&lt;br /&gt;
== Multidomain ==&lt;br /&gt;
&lt;br /&gt;
Of course, Postfix can also accept mail for multiple domains. The parameter "virtual_mailbox_domains" exists for this:&lt;br /&gt;
&lt;br /&gt;
 virtual_mailbox_domains = example.de  example.com  ihr-koennt-mich-alle.de&lt;br /&gt;
The variable $mydomain should then be removed from mydestination.&lt;br /&gt;
&lt;br /&gt;
== Special DNS Records ==&lt;br /&gt;
=== SPF (Sender Policy Framework) ===&lt;br /&gt;
With a TXT resource record you can define a list of mail servers that are allowed to use the mail domain as sender. If someone generates a fake mail from another system, it can be rejected.&lt;br /&gt;
&lt;br /&gt;
Example of a DNS TXT record:&lt;br /&gt;
 IN TXT  &amp;quot;v=spf1 mx:example.de a:foo.example.de ip4:8.15.47.11/32 ip6:2008:15:5:47::11/48 ip6:2008:15:5:47::12/48 -all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Howto: &amp;lt;br&amp;gt;&lt;br /&gt;
https://dmarcian.com/create-spf-record/ &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.open-spf.org/SPF_Record_Syntax/&lt;br /&gt;
&lt;br /&gt;
Integrating SPF into Postfix:&lt;br /&gt;
&lt;br /&gt;
The domain is now protected against abuse via fake mails. But the other side has to be considered as well: Postfix should likewise check the SPF records of other mail domains and reject mail if necessary.&lt;br /&gt;
https://makeityourway.de/enabling-spf-sender-policy-framework-checking-on-postfix/&lt;br /&gt;
&lt;br /&gt;
Summarized briefly, here is what to watch out for:&lt;br /&gt;
 # apt install postfix-policyd-spf-python&lt;br /&gt;
The config file &#039;&#039;/etc/postfix-policyd-spf-python/policyd-spf.conf&#039;&#039; already ships usable defaults; optionally a whitelist can be added, e.g.&lt;br /&gt;
&lt;br /&gt;
 Domain_Whitelist = example.com &lt;br /&gt;
Add to master.cf:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 policyd-spf  unix  -     n       n       -       -       spawn &lt;br /&gt;
  user=policyd-spf argv=/usr/bin/policyd-spf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add to main.cf:&lt;br /&gt;
 smtpd_recipient_restrictions =&lt;br /&gt;
 (...)&lt;br /&gt;
 check_policy_service unix:private/policyd-spf&lt;br /&gt;
 (...)&lt;br /&gt;
&#039;&#039;&#039;Caution:&#039;&#039;&#039; If a check_policy_service entry already exists, do &#039;&#039;&#039;not&#039;&#039;&#039; append another value to it; open a new line instead!&lt;br /&gt;
 policyd-spf_time_limit = 3600s&lt;br /&gt;
&lt;br /&gt;
 # postfix reload&lt;br /&gt;
&lt;br /&gt;
Send in a few test mails and watch mail.log.&lt;br /&gt;
&lt;br /&gt;
=== DMARC (Domain based Message Authentication, Reporting and Conformance) ===&lt;br /&gt;
https://dmarcian.com/dmarc-record/&lt;br /&gt;
&lt;br /&gt;
Example of a DNS TXT record:&lt;br /&gt;
 _dmarc                  IN TXT      &amp;quot;v=DMARC1;p=quarantine;rua=mailto:postmaster@example.de&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In this case, suspicious mails are moved to a quarantine folder and a report is sent to the postmaster.&lt;br /&gt;
For integration into Postfix there is the package opendmarc.&lt;br /&gt;
Implementing SPF, DKIM and DMARC in Postfix:&lt;br /&gt;
&lt;br /&gt;
https://www.skelleton.net/2015/03/21/how-to-eliminate-spam-and-protect-your-name-with-dmarc/&lt;br /&gt;
(untested)&lt;br /&gt;
&lt;br /&gt;
== Useful commands ==&lt;br /&gt;
Generate a database file from a text file:&lt;br /&gt;
 postmap &amp;lt;filename&amp;gt;&lt;br /&gt;
Show all config parameters:&lt;br /&gt;
  postconf&lt;br /&gt;
Check the config:&lt;br /&gt;
  postfix check&lt;br /&gt;
Inspect the mail queue:&lt;br /&gt;
  mailq&lt;br /&gt;
Deliver all messages in the queue:&lt;br /&gt;
  postqueue -f&lt;br /&gt;
Deliver only one specific message:&lt;br /&gt;
  postqueue -i &amp;lt;ID&amp;gt;&lt;br /&gt;
Delete a message:&lt;br /&gt;
  postsuper -d &amp;lt;ID&amp;gt;&lt;br /&gt;
Delete all messages (!):&lt;br /&gt;
  postsuper -d ALL&lt;br /&gt;
Inspect the content of a message:&lt;br /&gt;
  postcat -vq &amp;lt;ID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Logfile ==&lt;br /&gt;
&lt;br /&gt;
Logging goes to &#039;&#039;/var/log/mail.log&#039;&#039; (everything); errors go to &#039;&#039;/var/log/mail.err&#039;&#039; and warnings to &#039;&#039;/var/log/mail.warn&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;Pro tip: create an alias:&lt;br /&gt;
 alias maillog=&#039;tail -f /var/log/mail.log&#039;&lt;br /&gt;
&lt;br /&gt;
== Greylisting and antispam ==&lt;br /&gt;
&lt;br /&gt;
To fight the flood of spam there is the handy program &#039;&#039;&#039;"Postgrey"&#039;&#039;&#039;. On Debian it can be installed as a package. It is hooked into main.cf in the smtpd_recipient_restrictions section.&lt;br /&gt;
 smtpd_recipient_restrictions = &lt;br /&gt;
  permit_mynetworks&lt;br /&gt;
  permit_sasl_authenticated&lt;br /&gt;
  permit_tls_clientcerts&lt;br /&gt;
  reject_unauth_destination&lt;br /&gt;
  &#039;&#039;&#039;reject_non_fqdn_sender&#039;&#039;&#039;&lt;br /&gt;
  &#039;&#039;&#039;reject_non_fqdn_recipient&#039;&#039;&#039;&lt;br /&gt;
  &#039;&#039;&#039;reject_rbl_client bl.spamcop.net&#039;&#039;&#039;&lt;br /&gt;
  &#039;&#039;&#039;check_policy_service inet:127.0.0.1:10023&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
(The blacklist &#039;&#039;dnsbl.sorbs.net&#039;&#039; was left out here because it blocks pretty much everything, e.g. all yahoo or gmx addresses.)&lt;br /&gt;
For this to work, Postgrey itself must of course be brought up.&lt;br /&gt;
To do so, edit the file &#039;&#039;/etc/default/postgrey&#039;&#039;. An example:&lt;br /&gt;
 POSTGREY_OPTS=&amp;quot;--inet=10023 --auto-whitelist-clients=8&amp;quot;&lt;br /&gt;
 POSTGREY_TEXT=&amp;quot;Busy. Come back in 5 minutes.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So the service listens on port 10023. In the example above, a sender is automatically whitelisted on its 8th successful delivery attempt (optional parameter &#039;&#039;--auto-whitelist-clients&#039;&#039;; possibly raise the number or drop the parameter entirely).&lt;br /&gt;
&lt;br /&gt;
Then change into the directory &#039;&#039;/etc/postgrey&#039;&#039;. There are two whitelists there. Senders go into &#039;&#039;&#039;whitelist_clients&#039;&#039;&#039;, which already contains IPs and domains of various providers. You can add your own entries there (e.g. example.ch).&lt;br /&gt;
&lt;br /&gt;
In the file &#039;&#039;&#039;whitelist_recipients&#039;&#039;&#039; you can list all recipients of your own domain that should always receive mail in any case, e.g. postmaster@, abuse@.&amp;lt;br&amp;gt;&lt;br /&gt;
Note: &#039;&#039;&#039;the files must be referenced explicitly&#039;&#039;&#039;; this does not happen automatically. This is done via POSTGREY_OPTS:&lt;br /&gt;
 POSTGREY_OPTS=&amp;quot;$POSTGREY_OPTS --whitelist-clients=/etc/postgrey/whitelist_clients --whitelist-recipients=/etc/postgrey/whitelist_recipients&amp;quot;&lt;br /&gt;
&lt;br /&gt;
After these adjustments, (re)start the postgrey service.&lt;br /&gt;
  # service postgrey restart&lt;br /&gt;
Check that the service is running, e.g. with:&lt;br /&gt;
 # lsof -i :10023&lt;br /&gt;
Then reload Postfix&lt;br /&gt;
 # postfix reload&lt;br /&gt;
and keep an eye on the mailbox(es) with regard to spam volume.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;(Source: article "Postzusteller", Admin-Magazin, issue 03-2013)&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Dovecot =&lt;br /&gt;
&lt;br /&gt;
Open-source IMAP server for delivering mail into mailboxes via POP3, IMAP, or IMAPS. Only IMAPS is covered below.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
It is recommended to install Dovecot on the same system as Postfix. Other setups are not covered here.&lt;br /&gt;
&lt;br /&gt;
Install the imapd with&lt;br /&gt;
 # apt install dovecot-imapd&lt;br /&gt;
&lt;br /&gt;
This is sufficient for all basic mail-delivery functions. For extended options such as filtering, further dovecot packages like &#039;&#039;&#039;dovecot-antispam, dovecot-sieve&#039;&#039;&#039; can be installed.&lt;br /&gt;
&lt;br /&gt;
Create the user (here &#039;&#039;vmail&#039;&#039;) as owner of the mailboxes:&lt;br /&gt;
&lt;br /&gt;
 useradd -u 4000 -m -d /home/vmail -s /usr/sbin/nologin vmail&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Adapt the config files in &#039;&#039;/etc/dovecot/conf.d&#039;&#039;.&lt;br /&gt;
The file &#039;&#039;/etc/dovecot/dovecot.conf&#039;&#039; includes all files under conf.d/*.conf by default.&lt;br /&gt;
&lt;br /&gt;
=== User management ===&lt;br /&gt;
&lt;br /&gt;
Here is an example where users are kept in a separate file.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;10-auth.conf:&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
disable_plaintext_auth = no&lt;br /&gt;
auth_username_format = %n&lt;br /&gt;
auth_master_user_separator = *&lt;br /&gt;
auth_mechanisms = plain login &lt;br /&gt;
!include auth-master.conf.ext&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If no auth via PAM is wanted:&lt;br /&gt;
 #!include auth-system.conf.ext&lt;br /&gt;
&lt;br /&gt;
Plaintext auth can be allowed here: access from an MUA will go over TLS/SSL anyway (see below), and for the communication between Postfix and Dovecot it does not matter, since both services live on the same server.&lt;br /&gt;
&lt;br /&gt;
Create a master user (optional):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
doveadm pw -p supergeheim -s SHA256-CRYPT -u administrator@example.de &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Paste the output together with the username into the file master-users.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat  ../master-users     &lt;br /&gt;
administrator@example.de:{SHA256-CRYPT}$5$9zrt7/e2CDkPmSuA$SNEkm/L4XZcYFAbYkJp5ESl9u35fVBSd4ukO0dm5yp3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create the remaining users:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
doveadm pw -p strenggeheim -s SHA256-CRYPT -u sunflower@example.de&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
→ /etc/dovecot/users:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sunflower:{SHA256-CRYPT}$5$D3PhhtqUhRXT7cmZ$E5244BpvNafb.9FtbhF9AUfbvw8XpnOJhPyM/q/rRN2:::Sun Flo,,,:/var/mail/example.de/sunflower:/bin/false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
No abbreviations like &#039;&#039;%d&#039;&#039; or &#039;&#039;%n&#039;&#039; should appear here, because they are not interpreted at all (by sieve, see below) or only partially (by dovecot).&lt;br /&gt;
&lt;br /&gt;
So that the account can also receive mail, add it to the virtual table in the Postfix directory:&lt;br /&gt;
  echo &amp;quot;sunflower@example.de  example.de/spambucket&amp;quot; &amp;gt;&amp;gt; /etc/postfix/virtual&lt;br /&gt;
Activate with&lt;br /&gt;
 postmap virtual&lt;br /&gt;
 postfix reload&lt;br /&gt;
&lt;br /&gt;
=== File permissions ===&lt;br /&gt;
&lt;br /&gt;
The files master-users and users should be readable only by dovecot!&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# chgrp dovecot /etc/dovecot/*users&lt;br /&gt;
# chmod o-r /etc/dovecot/*users&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the mailbox and give the user ownership:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# maildirmake.dovecot /var/mail/&amp;lt;domain&amp;gt;/&amp;lt;username&amp;gt;&lt;br /&gt;
# chown -R vmail:vmail /var/mail/&amp;lt;domain&amp;gt;/&amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create the user in Postfix, in the virtual maps, see above.&lt;br /&gt;
&lt;br /&gt;
Check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# doveadm user &amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Configuring IMAP ===&lt;br /&gt;
Pro tip: back up conf.d first:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rsync -av /etc/dovecot/conf.d/ /etc/dovecot/conf.d.orig&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Adapt the following configuration files in conf.d accordingly:&lt;br /&gt;
* &#039;&#039;&#039;10-auth.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth_allow_cleartext = yes&lt;br /&gt;
auth_username_format = %{user|username|lower}&lt;br /&gt;
auth_master_user_separator = *&lt;br /&gt;
auth_mechanisms = plain login&lt;br /&gt;
&lt;br /&gt;
!include auth-master.conf.ext&lt;br /&gt;
!include auth-system.conf.ext&lt;br /&gt;
!include auth-passwdfile.conf.ext&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;10-mail.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# the parameter mail_location no longer exists&lt;br /&gt;
mail_driver = maildir&lt;br /&gt;
mail_home = /var/mail/%{user | domain}/%{user}&lt;br /&gt;
mail_path = %{home}&lt;br /&gt;
namespace inbox { &lt;br /&gt;
 inbox = yes &lt;br /&gt;
} &lt;br /&gt;
mail_uid = 4000 &lt;br /&gt;
mail_gid = 4000&lt;br /&gt;
mail_privileged_group = mail&lt;br /&gt;
protocol !indexer-worker {&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;10-master.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
service imap-login { &lt;br /&gt;
  inet_listener imaps { &lt;br /&gt;
   port = 993 &lt;br /&gt;
   ssl = yes &lt;br /&gt;
 } &lt;br /&gt;
}&lt;br /&gt;
service auth { &lt;br /&gt;
 unix_listener auth-userdb { &lt;br /&gt;
   user = vmail  &lt;br /&gt;
   group = vmail &lt;br /&gt;
 } &lt;br /&gt;
 unix_listener /var/spool/postfix/private/auth { &lt;br /&gt;
   mode = 0666 &lt;br /&gt;
   user = postfix &lt;br /&gt;
   group = postfix &lt;br /&gt;
 } &lt;br /&gt;
}&lt;br /&gt;
service stats { &lt;br /&gt;
	unix_listener stats-reader { &lt;br /&gt;
               user = vmail &lt;br /&gt;
               group = vmail &lt;br /&gt;
               mode = 0660 &lt;br /&gt;
     } &lt;br /&gt;
&lt;br /&gt;
	unix_listener stats-writer { &lt;br /&gt;
       user = vmail &lt;br /&gt;
       group = vmail &lt;br /&gt;
       mode = 0660 &lt;br /&gt;
   	} &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;10-ssl.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# (e.g. use the Postfix certs)&lt;br /&gt;
ssl = yes&lt;br /&gt;
ssl_server_cert_file = /etc/dovecot/private/dovecot.pem&lt;br /&gt;
ssl_server_key_file = /etc/dovecot/private/dovecot.key&lt;br /&gt;
ssl_min_protocol = TLSv1.2&lt;br /&gt;
&lt;br /&gt;
#ssl_client_ca_dir = /etc/ssl/certs &lt;br /&gt;
#ssl_dh = &amp;lt;/usr/share/dovecot/dh.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Generate certificates: see https://wiki.nomorebluescreen.de/index.php?title=Webserver_mit_Apache#Alternative_letsencrypt&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Spoiler:&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
Every time the certificate is replaced, the dovecot service must be restarted so that the new certificate is actually read.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Check which files were touched:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
diff -quw conf.d.orig conf.d&lt;br /&gt;
Files conf.d.orig/10-ssl.conf and conf.d/10-ssl.conf differ&lt;br /&gt;
Files conf.d.orig/15-lda.conf and conf.d/15-lda.conf differ&lt;br /&gt;
Files conf.d.orig/20-imap.conf and conf.d/20-imap.conf differ&lt;br /&gt;
Files conf.d.orig/20-managesieve.conf and conf.d/20-managesieve.conf differ&lt;br /&gt;
Files conf.d.orig/90-sieve.conf and conf.d/90-sieve.conf differ&lt;br /&gt;
Files conf.d.orig/auth-passwdfile.conf.ext and conf.d/auth-passwdfile.conf.ext differ&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dump the entire config&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 # doveconf -n&lt;br /&gt;
&lt;br /&gt;
==== Sieve ====&lt;br /&gt;
Engine for filtering mail.&lt;br /&gt;
&lt;br /&gt;
Install dovecot-sieve and dovecot-managesieved.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;15-lda.conf:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lda_mailbox_autocreate = yes&lt;br /&gt;
lda_mailbox_autosubscribe = yes&lt;br /&gt;
protocol lda { &lt;br /&gt;
 mail_plugins = $mail_plugins sieve &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;20-managesieve.conf:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
protocols = $protocols sieve&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;90-sieve.conf:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
plugin { &lt;br /&gt;
 sieve = file:~/sieve;active=~/.dovecot.sieve &lt;br /&gt;
 sieve_default = /var/lib/dovecot/sieve/default.sieve &lt;br /&gt;
 sieve_global_dir = /var/lib/dovecot/sieve &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Check that the sieve service is running and listening on port 4190.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# service dovecot restart&lt;br /&gt;
# ss -plnt | grep 4190&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Since users generally cannot edit the sieve rules in their /home directly on the mail server, further configuration happens in the mail client (see below).&lt;br /&gt;
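A minimal sieve rule, for orientation (illustrative; the list name and the folder are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require ["fileinto"];&lt;br /&gt;
# file mailing-list traffic into a separate folder&lt;br /&gt;
if header :contains "List-Id" "example-list" {&lt;br /&gt;
  fileinto "Lists";&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;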
&lt;br /&gt;
Beware, bug:&amp;lt;br&amp;gt;&lt;br /&gt;
Because sieve/dovecot does not expand the variable %n in the users file, it should not be used there. Otherwise roundcube may create a directory &#039;&#039;%n&#039;&#039; containing a single sieve config shared by &#039;&#039;&#039;all&#039;&#039;&#039; users.&lt;br /&gt;
&lt;br /&gt;
=== Transport from Postfix to Dovecot ===&lt;br /&gt;
&lt;br /&gt;
Postfix still has to be told that emails are handed over to Dovecot.&amp;lt;br&amp;gt;&lt;br /&gt;
Adjust &#039;&#039;&#039;master.cf&#039;&#039;&#039; in Postfix (the entries in {} are meant literally, do not replace them!):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dovecot unix - n n - - pipe flags=DRhu user=vmail:vmail &lt;br /&gt;
 argv=/usr/lib/dovecot/deliver -a ${recipient} -f ${sender} &lt;br /&gt;
 -d ${user}@${nexthop} -m ${extension}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and add a mailbox_command entry to main.cf:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mailbox_command = /usr/lib/dovecot/dovecot-lda -f &amp;quot;$SENDER&amp;quot; -a &amp;quot;$RECIPIENT&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(https://doc.dovecot.org/configuration_manual/howto/dovecot_lda_postfix/#howto-dovecot-lda-postfix)&lt;br /&gt;
&lt;br /&gt;
Afterwards restart the postfix and dovecot services.&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
Log entries go to /var/log/mail.log (just like postfix). &amp;lt;br&amp;gt;&lt;br /&gt;
Useful alias:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
alias maillog=&#039;tail -f /var/log/mail.log&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enable debugging by setting&lt;br /&gt;
 mail_debug = yes&lt;br /&gt;
in the file &lt;br /&gt;
&#039;&#039;10-logging.conf&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Pro tip:&#039;&#039;&#039;&lt;br /&gt;
If the following error shows up in the log:&lt;br /&gt;
&lt;br /&gt;
 Mar 27 08:03:56 aphantopus postfix/pipe[2317]: 521066005D: to=&amp;lt;sunflower@example.de&amp;gt;, relay=dovecot, delay=0.3, delays=0.19/0.04/0/0.07, dsn=2.0.0, status=sent (delivered via dovecot service (lda(sunflower@example.de,)Error: net_connect_unix(/var/run/dovecot/stats-writer) failed: Permission denied))&lt;br /&gt;
&lt;br /&gt;
=&amp;gt; allow &#039;&#039;&#039;stats&#039;&#039;&#039; for user vmail in &#039;&#039;10-master.conf&#039;&#039; (see above)&lt;br /&gt;
&lt;br /&gt;
= Roundcube =&lt;br /&gt;
&lt;br /&gt;
A handy web frontend for fetching and sending emails.&lt;br /&gt;
&lt;br /&gt;
Install mysql-server first, otherwise the installation aborts with an error:&lt;br /&gt;
 # apt install mariadb-server roundcube&lt;br /&gt;
Answer the dbconfig-common question with &amp;quot;yes&amp;quot; and set the mysql password.&lt;br /&gt;
Adjust the config file (&#039;&#039;/etc/roundcube/config.inc.php&#039;&#039;):&lt;br /&gt;
 $config[&#039;smtp_server&#039;] = &#039;localhost&#039;;&lt;br /&gt;
 $config[&#039;smtp_port&#039;] = 25;&lt;br /&gt;
&lt;br /&gt;
== Plugins ==&lt;br /&gt;
&lt;br /&gt;
Install the standard plugins:&lt;br /&gt;
 # apt install roundcube-plugins&lt;br /&gt;
&lt;br /&gt;
Install additional plugins:&lt;br /&gt;
 # apt install roundcube-plugins-extra git curl composer&lt;br /&gt;
(composer is needed for installing plugins; git, because most of them come from github)&amp;lt;br&amp;gt;&lt;br /&gt;
Enable the ones you want in the file &#039;&#039;/etc/roundcube/config.inc.php&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
 $config[&#039;plugins&#039;] = array( &lt;br /&gt;
        &#039;compose-addressbook&#039;, &lt;br /&gt;
        &#039;markasjunk2&#039;, &lt;br /&gt;
        &#039;fail2ban&#039; &lt;br /&gt;
 );&lt;br /&gt;
&lt;br /&gt;
Overview of the official plugins:&lt;br /&gt;
&lt;br /&gt;
https://plugins.roundcube.net/&lt;br /&gt;
&lt;br /&gt;
Plugins that are not available as a package:&amp;lt;br&amp;gt;&lt;br /&gt;
Steps:&amp;lt;br&amp;gt;&lt;br /&gt;
* read the README&lt;br /&gt;
* download the plugin as a zip and unpack it into &#039;&#039;/usr/share/roundcube/plugins&#039;&#039;&lt;br /&gt;
* (rename it if necessary)&lt;br /&gt;
* edit &#039;&#039;/etc/roundcube/config.inc.php&#039;&#039;:&lt;br /&gt;
find the section&lt;br /&gt;
 $config[&#039;plugins&#039;] = array( &lt;br /&gt;
and add the missing plugin&lt;br /&gt;
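The steps above can be sketched as a small helper; the plugins directory is the one mentioned above, the archive name in the usage line is a placeholder:&lt;br /&gt;

```shell
# Sketch: unpack a downloaded Roundcube plugin archive into the plugins
# directory. Read the plugin README first; some archives need renaming
# after extraction.
install_rc_plugin() {
  archive="$1"
  plugdir="${2:-/usr/share/roundcube/plugins}"
  mkdir -p "$plugdir"
  tar -xzf "$archive" -C "$plugdir"
}
```

Example: install_rc_plugin myplugin.tar.gz, then add the plugin to $config[&#039;plugins&#039;] as shown above.&lt;br /&gt;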
&lt;br /&gt;
== Filter Plugin for Sieve ==&lt;br /&gt;
&lt;br /&gt;
Careful: do not use the plugin &amp;quot;filter&amp;quot;, use &#039;&#039;&#039;managesieve&#039;&#039;&#039; instead (it is part of the roundcube-plugins package).&lt;br /&gt;
&lt;br /&gt;
A guide is available here:&amp;lt;br&amp;gt;&lt;br /&gt;
https://www.pair.com/support/kb/how-to-add-sieve-filtering-code-in-roundcube/ &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
https://www.pair.com/support/kb/how-to-add-sieve-filtering-in-roundcube/&lt;br /&gt;
&lt;br /&gt;
Note: do not edit the protocols parameter in dovecot.conf, but in &lt;br /&gt;
&#039;&#039;20-managesieve.conf&#039;&#039; (see above):&lt;br /&gt;
&lt;br /&gt;
 protocols = $protocols sieve&lt;br /&gt;
&lt;br /&gt;
Sieve filter rules can now be created via the web frontend.&lt;br /&gt;
&lt;br /&gt;
Beware, bug:&amp;lt;br&amp;gt;&lt;br /&gt;
Sieve creates a sieve directory below the directory defined in mail_location. If you want to store the users&#039; emails under &#039;&#039;/var/mail/&amp;lt;domain&amp;gt;/&amp;lt;username&amp;gt;&#039;&#039;, you would configure:&lt;br /&gt;
&lt;br /&gt;
 mail_location = maildir:/var/mail/%d/%n&lt;br /&gt;
&lt;br /&gt;
However, since dovecot does not expand %d (see above), the user maildir ends up under /var/mail/&amp;lt;username&amp;gt;. Sieve in turn does not expand %n and creates a directory /var/mail/&amp;lt;domain&amp;gt;/%n/sieve containing the roundcube.sieve file. As a result all users access the same file, which works technically but is fatal security-wise. Unfortunately no good workaround is known.&lt;br /&gt;
&lt;br /&gt;
== Changing passwords ==&lt;br /&gt;
To allow users to change their password themselves, enable the plugin in &#039;&#039;config.inc.php&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$config[&#039;plugins&#039;] = array(&lt;br /&gt;
 (...)&lt;br /&gt;
 &#039;password&#039;&lt;br /&gt;
);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional settings if the users are maintained in a password file as described in the &#039;&#039;&#039;Dovecot&#039;&#039;&#039; chapter:&amp;lt;br&amp;gt;&lt;br /&gt;
(we assume the user passwords are hashed with sha512, see above)&lt;br /&gt;
&lt;br /&gt;
 # https://stackoverflow.com/questions/62655236/how-to-enable-password-plugin-on-roundcube&lt;br /&gt;
 $config[&#039;password_algorithm&#039;] = &#039;ssha512&#039;;&lt;br /&gt;
 $config[&#039;password_algorithm_prefix&#039;] = &#039;{SSHA512}&#039;;&lt;br /&gt;
 $config[&#039;password_driver&#039;] = &#039;dovecot_passwdfile&#039;;&lt;br /&gt;
 $config[&#039;password_dovecot_passwdfile_path&#039;] = &#039;/etc/dovecot/users&#039;;&lt;br /&gt;
&lt;br /&gt;
The dovecot users file must then be readable and writable for www-data:&lt;br /&gt;
 -rw-rw---- 1 dovecot www-data 1240 Dec  2 23:20 /etc/dovecot/users&lt;br /&gt;
&lt;br /&gt;
(Careful, risky if the web server is ever compromised! As an alternative, consider moving the dovecot passwords into a [mysql] database.)&lt;br /&gt;
&lt;br /&gt;
== Changing identities ==&lt;br /&gt;
&lt;br /&gt;
Normally a user can only send with their own sender address. This is a sensible default, but if you want to switch the restriction off for testing purposes, use the following setting:&lt;br /&gt;
 $config[&#039;identities_level&#039;] = 0;&lt;br /&gt;
The user can then add further sender addresses under &amp;quot;Settings&amp;quot; (https://www.servercake.blog/multiple-identities-roundcube/).&lt;br /&gt;
&lt;br /&gt;
(Unfortunately no way has been found so far to restrict this to specific users only.)&lt;br /&gt;
&lt;br /&gt;
== Apache Integration ==&lt;br /&gt;
&lt;br /&gt;
Here is an example configuration for a virtual host so that the Roundcube site can be reached at https://mail.example.de.&lt;br /&gt;
More details in the chapter [[Webserver mit Apache|apache]].&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;VirtualHost *:443&amp;gt; &lt;br /&gt;
    ServerName mail.example.de &lt;br /&gt;
    ServerAdmin postmaster@example.de &lt;br /&gt;
&lt;br /&gt;
    SSLEngine on &lt;br /&gt;
    SSLCertificateFile         /var/lib/dehydrated/certs/mail.example.de/fullchain.pem &lt;br /&gt;
    SSLCertificateKeyFile /var/lib/dehydrated/certs/mail.example.de/privkey.pem &lt;br /&gt;
&lt;br /&gt;
    DocumentRoot /usr/share/roundcube &lt;br /&gt;
&lt;br /&gt;
    # Includes &lt;br /&gt;
    Include /etc/apache2/conf-available/ssl-encryption.conf &lt;br /&gt;
      &lt;br /&gt;
    LogFormat &amp;quot;%h %l %u %t \&amp;quot;%r\&amp;quot; %&amp;gt;s %b \&amp;quot;%{Referer}i\&amp;quot; \&amp;quot;%{User-agent}i\&amp;quot;&amp;quot; vhost_combined &lt;br /&gt;
    CustomLog /var/log/apache2/mail-ssl.log combined &lt;br /&gt;
    ErrorLog /var/log/apache2/mail-ssl-error.log &lt;br /&gt;
&amp;lt;/VirtualHost&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The roundcube package also ships a roundcube.conf, which gets installed and enabled at /etc/apache2/conf-available/roundcube.conf.&lt;br /&gt;
&lt;br /&gt;
=== PHP ===&lt;br /&gt;
For Roundcube to run at all, the php module must be installed and enabled. On Debian this is done with:&lt;br /&gt;
 # apt install libapache2-mod-php&lt;br /&gt;
&lt;br /&gt;
Using &#039;&#039;&#039;php-fpm&#039;&#039;&#039; performs somewhat better (https://www.zend.com/blog/apache-phpfpm-modphp).&lt;br /&gt;
 # apt install php-fpm&lt;br /&gt;
&lt;br /&gt;
The PHP default upload limit is a meager 2 MB. To raise it to e.g. 50 MB, the following file has to be edited:&lt;br /&gt;
&amp;lt;pre&amp;gt;VERSION=$(php -v| head -n 1 | awk &#039;{ print $2 }&#039; | sed -e &#039;s|.[[:digit:]]*$||&#039;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
* modphp:&lt;br /&gt;
 /etc/php/${VERSION}/apache2/php.ini&lt;br /&gt;
  upload_max_filesize = 50M&lt;br /&gt;
* php-fpm:&lt;br /&gt;
 /etc/php/${VERSION}/fpm/php.ini&lt;br /&gt;
  upload_max_filesize = 50M&lt;br /&gt;
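The edit above can also be scripted; a sketch that patches a php.ini-style file in place (the ini path in the usage line follows the VERSION snippet above):&lt;br /&gt;

```shell
# Sketch: raise upload_max_filesize in a php.ini-style file in place.
set_upload_limit() {
  ini="$1"
  limit="$2"
  sed -i "s/^upload_max_filesize = .*/upload_max_filesize = ${limit}/" "$ini"
}
```

E.g. set_upload_limit /etc/php/${VERSION}/fpm/php.ini 50M, followed by a php-fpm (or apache2) restart. Note that post_max_size may need raising as well, since it caps the effective upload size.&lt;br /&gt;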
&lt;br /&gt;
= Integration into an MUA =&lt;br /&gt;
If you do not want to go through the (slow) webmailer, you can of course use an MUA of your choice. Here is an example.&lt;br /&gt;
&lt;br /&gt;
== Thunderbird ==&lt;br /&gt;
&lt;br /&gt;
Settings for the outgoing server (SMTP):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Server name: FQDN of the mail server&lt;br /&gt;
Port: 25&lt;br /&gt;
Connection Security: STARTTLS&lt;br /&gt;
Authentication Method: Normal Password&lt;br /&gt;
Username: name of the mailbox user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On first use you are asked for your mailbox password. Enter and save it.&lt;br /&gt;
&lt;br /&gt;
Server Settings (IMAP)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Server Type: IMAP Mail Server&lt;br /&gt;
Server Name: FQDN of the mail server&lt;br /&gt;
Port: 993&lt;br /&gt;
Username: name of the mailbox user&lt;br /&gt;
Connection Security: SSL/TLS&lt;br /&gt;
Authentication Method: Normal Password&lt;br /&gt;
(Leave the remaining defaults as they are or adjust them as needed)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot thunderbird1.png|900px]]&lt;br /&gt;
[[File:Screenshot thunderbird2.png|900px]]&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=Emailserver_mit_Postfix_und_Dovecot&amp;diff=671</id>
		<title>Emailserver mit Postfix und Dovecot</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=Emailserver_mit_Postfix_und_Dovecot&amp;diff=671"/>
		<updated>2026-03-18T22:20:26Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Dateirechte */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Postfix =&lt;br /&gt;
&lt;br /&gt;
Postfix is an MTA (Mail Transfer Agent) and a good alternative to other common MTAs (Sendmail, Exim) because its configuration is easy to read. &lt;br /&gt;
In our example the MTA is combined with an IMAPd (Dovecot) so that users have a mailbox directly on the system. Mails are fetched via IMAPS.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
First the package &amp;quot;postfix&amp;quot; has to be installed.&lt;br /&gt;
A few questions need to be answered during installation:&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Type of server:&#039;&#039;&#039; Internet Site&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Root and postmaster mail recipient:&#039;&#039;&#039; enter a mailbox, e.g. postmaster@example.de&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Other destinations to accept mail for (blank for none):&#039;&#039;&#039; e.g. mail.example.de, localhost, $mydomain (the defaults are fine for a start)&lt;br /&gt;
* &#039;&#039;&#039;Force synchronous updates on mail queue:&#039;&#039;&#039; no&lt;br /&gt;
* &#039;&#039;&#039;Local networks:&#039;&#039;&#039; 127.0.0.1/8, 192.168.63.0/24 (add your own network here)&lt;br /&gt;
* &#039;&#039;&#039;Mailbox size limit:&#039;&#039;&#039; 0 (unlimited)&lt;br /&gt;
* &#039;&#039;&#039;Local address extension character:&#039;&#039;&#039; + (the default is usually sufficient)&lt;br /&gt;
* &#039;&#039;&#039;Internet protocols to use:&#039;&#039;&#039; all (unless you explicitly want only ipv4 or ipv6)&lt;br /&gt;
&lt;br /&gt;
These settings can be changed at any time with:&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
 # dpkg-reconfigure postfix&amp;lt;/console&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All relevant files live in the directory &#039;&#039;/etc/postfix&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Before turning to the Postfix config, we check the content of the file /etc/mailname:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 $ cat /etc/mailname&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It must contain &#039;&#039;&#039;only the domain name&#039;&#039;&#039;, not the host name (e.g. example.com). Otherwise mail delivery can be affected, especially when main.cf (see below) references this file.&lt;br /&gt;
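A quick sanity check for this rule can be scripted; the comparison against hostname -f is only a heuristic assumption:&lt;br /&gt;

```shell
# Sketch: warn when /etc/mailname contains the machine FQDN instead of
# the bare domain. Heuristic only; the path can be overridden for testing.
check_mailname() {
  file="${1:-/etc/mailname}"
  name=$(cat "$file")
  if [ "$name" = "$(hostname -f 2>/dev/null)" ]; then
    echo "WARNING: $file contains the host FQDN, expected the bare domain" >&2
    return 1
  fi
  printf '%s\n' "$name"
}
```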
&lt;br /&gt;
The most important file to adjust first is &#039;&#039;&#039;main.cf&#039;&#039;&#039;. Here is an example for the server &amp;quot;mx&amp;quot; in the domain example.de. The following parameters should be configured (exemplary):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 myhostname&lt;br /&gt;
 mydomain&lt;br /&gt;
 myorigin&lt;br /&gt;
 mydestination&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are usually a few usable defaults already. The parameter &#039;&#039;mynetworks&#039;&#039; allows specific networks to submit emails without further restrictions.&lt;br /&gt;
&lt;br /&gt;
Example config:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
myhostname = mx01.example.de &lt;br /&gt;
mydomain = example.de &lt;br /&gt;
myorigin = $mydomain &lt;br /&gt;
mydestination = $myhostname, localhost, localhost.$mydomain&lt;br /&gt;
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.99.0/24 [2001:780:11b::/48] 214.94.24.154 [2004:780:8:0:5ff0:c5ff:fe09:98f9]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The parameter &#039;&#039;&#039;mynetworks&#039;&#039;&#039; in particular should be chosen carefully: these are all hosts and networks from which any email is accepted (including mail from &#039;&#039;and&#039;&#039; to foreign domains). Misconfiguration here quickly turns the server into an [https://practical365.com/what-is-an-open-relay open relay].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; ipv6 addresses must be written in [square brackets], otherwise no emails are delivered!&lt;br /&gt;
Error in the logfile:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
postfix/smtpd[21928]: warning: smtpd_client_event_limit_exceptions: 2a01:40f8:c013:5050::0/64: table lookup problem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
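The bracket rule above can be checked mechanically; a rough sketch that only looks at the entry syntax, without real address parsing:&lt;br /&gt;

```shell
# Sketch: flag mynetworks entries that look like bare ipv6 networks,
# which must be written in [square brackets]. String-based heuristic only.
check_mynetworks() {
  bad=0
  for entry in "$@"; do
    case "$entry" in
      \[*) ;;                      # bracketed ipv6 (possibly with /prefix): fine
      *:*) echo "unbracketed ipv6 entry: $entry" >&2; bad=1 ;;
      *) ;;                        # ipv4 address or hostname: fine
    esac
  done
  return $bad
}
```

E.g. check_mynetworks 127.0.0.0/8 2a01:40f8:c013:5050::0/64 prints a warning for the second entry.&lt;br /&gt;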
&lt;br /&gt;
After every change the service must be reloaded with&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
 # postfix reload&lt;br /&gt;
&amp;lt;/console&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whether the reload worked can be checked e.g. via the log:&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
 # tail /var/log/mail.log&lt;br /&gt;
&amp;lt;/console&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== master.cf ===&lt;br /&gt;
The heart of the Postfix daemon. The transports are defined here.&lt;br /&gt;
Meaning of the columns:&lt;br /&gt;
* service field: name of the service (smtp, local, procmail, ...) (str)&lt;br /&gt;
* type field: connection type (inet, fifo, unix) (str)&lt;br /&gt;
* private field: service reachable only from within Postfix, not by external programs (default: y) (bool)&lt;br /&gt;
* unpriv field: run as an unprivileged user (default: y) or as root (n) (bool)&lt;br /&gt;
* chroot field: run the service in a chroot environment (default: y) (bool)&lt;br /&gt;
* wakeup field: seconds between two invocations (default: 0) (int)&lt;br /&gt;
* process limit: maximum number of concurrent processes (default: 50) (int)&lt;br /&gt;
The line ends with the command to run, plus flags and parameters (optional).&lt;br /&gt;
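The column order can be illustrated with a tiny field splitter; the sample line is the stock smtpd service:&lt;br /&gt;

```shell
# Sketch: label the seven master.cf columns of a service line in the
# order described above, plus the command field.
explain_master_line() {
  echo "$1" | awk '{ printf "service=%s type=%s private=%s unpriv=%s chroot=%s wakeup=%s maxproc=%s cmd=%s\n", $1, $2, $3, $4, $5, $6, $7, $8 }'
}
explain_master_line "smtp inet n - y - - smtpd"
# -> service=smtp type=inet private=n unpriv=- chroot=y wakeup=- maxproc=- cmd=smtpd
```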
&lt;br /&gt;
== TLS ==&lt;br /&gt;
Optionally you can configure certificate-based encrypted transfer of emails. This only works if the mail server on the other side supports TLS as well. The certificate can also be used by a mail client (see later).&lt;br /&gt;
Certificates can be generated with [https://letsencrypt.org letsencrypt]. Clients for certificate generation are [[Webserver_mit_Apache#Alternative_letsencrypt | certbot ]] or [[Webserver_mit_Apache#Dehydrated | dehydrated]].&lt;br /&gt;
&lt;br /&gt;
=== Certificate generation in brief ===&lt;br /&gt;
 # echo $HOSTNAME &amp;gt; /etc/dehydrated/domains.txt&lt;br /&gt;
 # dehydrated --register --accept-terms&lt;br /&gt;
 # dehydrated -c&lt;br /&gt;
&lt;br /&gt;
This requires a [[Webserver_mit_Apache | web server]] listening on port 80. If there is none, letsencrypt can alternatively be used via DNS (https://letsencrypt.org/docs/challenge-types).&lt;br /&gt;
&lt;br /&gt;
=== Alternative: running your own CA (not recommended) ===&lt;br /&gt;
&lt;br /&gt;
If you absolutely want to run your own CA, you can do so with the following guide. Careful: choose this approach only if there is a compelling reason. Many browsers and MUAs have problems with it, produce ugly warnings, or refuse the site altogether.&lt;br /&gt;
&lt;br /&gt;
==== Creating the CA ====&lt;br /&gt;
If no certificate exists yet, you can create one yourself, or create a CSR (Certificate Signing Request) and send it to an official CA. For a commercially operated mail server the latter is the realistic way.&lt;br /&gt;
&lt;br /&gt;
Note: this step is not needed if a CA already exists.&lt;br /&gt;
&lt;br /&gt;
Steps:&amp;lt;br&amp;gt;&lt;br /&gt;
Create a key (and assign a password for it), create a certificate request for the CA, create the CA:&lt;br /&gt;
&lt;br /&gt;
 $ openssl genrsa -out ca.key -des3 4096&lt;br /&gt;
  &lt;br /&gt;
 $ openssl req -new -x509 -days 3650 -key ca.key -out ca.crt&lt;br /&gt;
&lt;br /&gt;
 Enter pass phrase for ca.key: &lt;br /&gt;
 You are about to be asked to enter information that will be incorporated &lt;br /&gt;
 into your certificate request. &lt;br /&gt;
 What you are about to enter is what is called a Distinguished Name or a DN. &lt;br /&gt;
 There are quite a few fields but you can leave some blank &lt;br /&gt;
 For some fields there will be a default value, &lt;br /&gt;
 If you enter &#039;.&#039;, the field will be left blank. &lt;br /&gt;
 ----- &lt;br /&gt;
 Country Name (2 letter code) [AU]:DE &lt;br /&gt;
 State or Province Name (full name) [Some-State]:Bavaria &lt;br /&gt;
 Locality Name (eg, city) []:Nuernberg &lt;br /&gt;
 Organization Name (eg, company) [Internet Widgits Pty Ltd]:example.de &lt;br /&gt;
 Organizational Unit Name (eg, section) []:Hostmaster &lt;br /&gt;
 Common Name (e.g. server FQDN or YOUR name) []:*.example.de &lt;br /&gt;
 Email Address []:postmaster@example.de&lt;br /&gt;
&lt;br /&gt;
==== Creating a certificate with the new CA ====&lt;br /&gt;
Steps:&lt;br /&gt;
* create a key&lt;br /&gt;
* create a request&lt;br /&gt;
* create and sign the certificate&lt;br /&gt;
&lt;br /&gt;
 $ openssl genrsa -out mx.example.de.key 4096&lt;br /&gt;
(do not set a password)&lt;br /&gt;
  &lt;br /&gt;
 $ openssl req -new -key mx.example.de.key -out mx.example.de.csr&lt;br /&gt;
(fill in the form again as usual)&lt;br /&gt;
  &lt;br /&gt;
 $ openssl x509 -req -days 365 -in mx.example.de.csr -CA  ca.crt -CAkey ca.key -set_serial 01 -out mx.example.de.crt&lt;br /&gt;
&lt;br /&gt;
When renewing the certificate, the steps &amp;quot;create the CA&amp;quot; and &amp;quot;create the key&amp;quot; are no longer needed. Also, no serial has to be specified any more.&lt;br /&gt;
The renew command therefore looks like this:&lt;br /&gt;
 $ openssl x509 -req -days 730 -in mx.example.de.csr -CA ca.crt -CAkey ca.key -out mx.example.de.crt&lt;br /&gt;
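After a renewal it is worth verifying that certificate and key still belong together; a sketch comparing the public-key hashes (file names as in the example above):&lt;br /&gt;

```shell
# Sketch: verify that a certificate and a private key belong together
# by comparing the hashes of their public keys.
certs_match() {
  crt="$1"
  key="$2"
  h1=$(openssl x509 -in "$crt" -pubkey -noout | openssl sha256)
  h2=$(openssl pkey -in "$key" -pubout 2>/dev/null | openssl sha256)
  test -n "$h1" || return 1
  test "$h1" = "$h2"
}
```

certs_match mx.example.de.crt mx.example.de.key prints nothing and returns 0 on success.&lt;br /&gt;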
&lt;br /&gt;
=== Adding the certificates to the config file ===&lt;br /&gt;
This step again applies to all certificates, no matter how they were created. The paths of course have to be adjusted accordingly.&lt;br /&gt;
&lt;br /&gt;
Copy the certificates to the appropriate place and reference them in the config:&lt;br /&gt;
&lt;br /&gt;
 smtpd_tls_cert_file=/etc/ssl/certs/mx.example.de.crt &lt;br /&gt;
 smtpd_tls_key_file=/etc/ssl/private/mx.example.de.key &lt;br /&gt;
 smtpd_tls_CAfile=/etc/postfix/ca.crt &lt;br /&gt;
 smtpd_use_tls=yes &lt;br /&gt;
 smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_cache&lt;br /&gt;
 smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache&lt;br /&gt;
&lt;br /&gt;
The parameter smtpd_tls_CAfile can be omitted when an official CA is used (e.g. letsencrypt).&amp;lt;br&amp;gt;&lt;br /&gt;
The certificate paths may differ; with letsencrypt, for example, they live under &#039;&#039;/var/lib/dehydrated/certs/&#039;&#039;.&lt;br /&gt;
Verification:&lt;br /&gt;
 $ openssl s_client -connect mx.example.de:25 -starttls smtp&lt;br /&gt;
oder https://www.checktls.com/&lt;br /&gt;
&lt;br /&gt;
Addendum:&lt;br /&gt;
Some MTAs want the certificate in pfx form. It can be created from the cert file as follows:&lt;br /&gt;
 $ openssl pkcs12 -export -out mx.example.de.pfx -inkey mx.example.de.key -in mx.example.de.crt&lt;br /&gt;
Hand the pfx file and the password over to the user.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Spoiler:&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
[[ Emailserver_mit_Postfix_und_Dovecot#Dovecot | Dovecot ]] sometimes &amp;quot;forgets&amp;quot; the new certificate and keeps the old version, meaning most mail clients stop fetching new emails. In that case restart the Dovecot service.&lt;br /&gt;
&lt;br /&gt;
== SASL ==&lt;br /&gt;
&lt;br /&gt;
To avoid an open relay it is strongly advised to allow submission with an @example.de sender only from specific networks by default. This is done via the parameter &#039;&#039;mynetworks&#039;&#039; (see above).&lt;br /&gt;
Of course users may want to send mail from a client anywhere on the internet, which this config would not permit. Since most PCs use dynamic addresses, adding individual IP addresses to the config makes no sense either.&amp;lt;br&amp;gt;&lt;br /&gt;
The problem can be solved by allowing mail submission once the user has successfully authenticated against the IMAP server.&lt;br /&gt;
This is what SASL is for. The corresponding entries in main.cf are:&lt;br /&gt;
 smtpd_relay_restrictions = permit_mynetworks  &lt;br /&gt;
 permit_sasl_authenticated defer_unauth_destination&lt;br /&gt;
 smtpd_sasl_auth_enable = yes &lt;br /&gt;
 smtpd_sasl_type = dovecot &lt;br /&gt;
 smtpd_sasl_local_domain = $mydomain &lt;br /&gt;
 smtpd_sasl_security_options = noanonymous &lt;br /&gt;
 smtpd_sasl_path = private/auth&lt;br /&gt;
&lt;br /&gt;
In the mail client this shows up as a prompt for the mailbox password the first time a message is sent.&lt;br /&gt;
Before enabling this feature there must be an IMAP server (see the [[#Dovecot|next chapter]]).&lt;br /&gt;
&lt;br /&gt;
== Maps ==&lt;br /&gt;
&lt;br /&gt;
To better distinguish what happens to which sender/recipient addresses, the configuration is split into so-called &amp;quot;maps&amp;quot;. These can exist as plain-text files or as Berkeley DB files. In the latter case they must be converted with the command &#039;&#039;&#039;postmap&#039;&#039;&#039; after every edit.&amp;lt;br&amp;gt;&lt;br /&gt;
Exception: the file /etc/aliases.db (only relevant for local mail delivery) is generated with the command &#039;&#039;&#039;postalias&#039;&#039;&#039; or &#039;&#039;&#039;newaliases&#039;&#039;&#039;.&lt;br /&gt;
 alias_maps = hash:/etc/aliases &lt;br /&gt;
 alias_database = hash:/etc/aliases &lt;br /&gt;
Aliases mapping to another mailbox are defined here. Example:&lt;br /&gt;
 postmaster:    root&lt;br /&gt;
&lt;br /&gt;
=== Access ===&lt;br /&gt;
 smtpd_sender_restrictions = hash:/etc/postfix/access &lt;br /&gt;
Actions for specific sender addresses can be set up here. Example:&lt;br /&gt;
 example.com     DISCARD&lt;br /&gt;
&lt;br /&gt;
=== Relocated ===&lt;br /&gt;
 relocated_maps = hash:/etc/postfix/relocated &lt;br /&gt;
&lt;br /&gt;
Rejects the mail with a hint. Example:&lt;br /&gt;
 testy.test      &amp;quot;Please send mails to ich@hier.de instead of this address&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Result:&lt;br /&gt;
 &amp;lt;testy.test@example.de&amp;gt;: Recipient address rejected: User has &lt;br /&gt;
 moved to &amp;quot;Please send mails to ich@hier.de instead of this&lt;br /&gt;
 address&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Canonical ===&lt;br /&gt;
==== Sender ====&lt;br /&gt;
 sender_canonical_maps = hash:/etc/postfix/sender_canonical &lt;br /&gt;
&lt;br /&gt;
Specific addresses are rewritten to a common standard format:&lt;br /&gt;
&lt;br /&gt;
 sunflower@example.de   petra.sonne@example.de&lt;br /&gt;
 phun@work.de		   peter.hun@example.de&lt;br /&gt;
&lt;br /&gt;
==== Recipient ====&lt;br /&gt;
 recipient_canonical_maps = hash:/etc/postfix/recipient_canonical &lt;br /&gt;
Works exactly like sender_canonical, but for recipient addresses.&lt;br /&gt;
&lt;br /&gt;
=== Virtual Mailbox ===&lt;br /&gt;
 virtual_mailbox_maps = hash:/etc/postfix/virtual&lt;br /&gt;
Locations of the imap server&#039;s mailboxes (more under [[ Emailserver_mit_Postfix_und_Dovecot#Dovecot | Dovecot ]]):&lt;br /&gt;
&lt;br /&gt;
 sunflower@example.de   	example.de/sunflower/ &lt;br /&gt;
 testy@example.de		example.de/testy &lt;br /&gt;
 test@example.de		example.de/test &lt;br /&gt;
 lmaa@ihr-koennt-mich-alle.de	ihr-koennt-mich-alle.de/lmaa&lt;br /&gt;
&lt;br /&gt;
=== Virtual Aliases ===&lt;br /&gt;
 virtual_alias_maps = hash:/etc/postfix/virtual_maps&lt;br /&gt;
&lt;br /&gt;
Addresses that are redirected to other addresses (similar to the aliases); this can also happen across domains.&lt;br /&gt;
This way several recipient addresses can end up in the same mailbox.&lt;br /&gt;
&lt;br /&gt;
 anrufbeantworter@example.de      sunflower@example.de,H.Hirsch@gmx.de,harry1999@yahoo.de&lt;br /&gt;
 info@example.de        sunflower@example.de&lt;br /&gt;
 postmaster    root &lt;br /&gt;
 webmaster     root &lt;br /&gt;
 administrator root&lt;br /&gt;
 root    sunflower&lt;br /&gt;
 fortune:  fortune&lt;br /&gt;
The last one is a pipe. More on that later.&lt;br /&gt;
&lt;br /&gt;
Convert the text into a DB file and have Postfix re-read it:&lt;br /&gt;
 postmap &amp;lt;aliases|access|canonical|...&amp;gt;&lt;br /&gt;
 postfix reload&lt;br /&gt;
&lt;br /&gt;
=== Simple example: forwarding emails from one domain to another ===&lt;br /&gt;
Suppose we have a mail server in the domain example.com that should forward all email arriving for &amp;lt;userXY&amp;gt;@example.com to &amp;lt;userXY&amp;gt;@example.de. This again requires an entry in the &#039;&#039;virtual_maps&#039;&#039; file mentioned above:&lt;br /&gt;
 @example.com	@example.de&lt;br /&gt;
All example.com emails are now forwarded to the mail server responsible for example.de. The user part remains unchanged.&lt;br /&gt;
&lt;br /&gt;
=== Transports ===&lt;br /&gt;
Transports define how an incoming mail is handled, e.g. stored locally in a file, handed over to an imap server, or piped into a script.&lt;br /&gt;
&lt;br /&gt;
An example: &amp;lt;br&amp;gt;&lt;br /&gt;
When mail is sent to a certain address, a script should run that returns a random quote to the sender &#039;&#039;&#039;and&#039;&#039;&#039; at the same time delivers the mail into a mailbox.&lt;br /&gt;
Steps:&lt;br /&gt;
&lt;br /&gt;
1. Define an alias (virtual_maps):&lt;br /&gt;
&lt;br /&gt;
 fortune@example.de           fortune&lt;br /&gt;
&lt;br /&gt;
2. Map the alias onto a transport (transports):&lt;br /&gt;
&lt;br /&gt;
 fortune@example.de     randomphrase:&lt;br /&gt;
&lt;br /&gt;
3. Define the transport (master.cf):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 randomphrase      unix  - n n - - pipe &lt;br /&gt;
  flags=h user=vmail:vmail argv=/usr/local/bin/randomphrase.pl ${sender}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(The user vmail must of course exist in the passwd, e.g. like this:&lt;br /&gt;
 vmail:x:4000:4000::/home/vmail:/usr/sbin/nologin&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
4. Install the script:&lt;br /&gt;
 /usr/local/bin/randomphrase.pl&lt;br /&gt;
and make it executable for everyone.&lt;br /&gt;
&lt;br /&gt;
The script [[ randomphrase.pl ]] generates a random quote. The package &#039;&#039;fortune-mod&#039;&#039; must be installed for this. &lt;br /&gt;
To forward the email, the script /usr/local/bin/deliver_mail.sh is called. ([[File:Deliver_mail.sh]])&lt;br /&gt;
&amp;lt;br&amp;gt; For this, the user vmail has to be authorized in the file &#039;&#039;/etc/sudoers.d/vmail&#039;&#039;:&lt;br /&gt;
 vmail ALL=(root) NOPASSWD: /usr/local/bin/deliver_mail&lt;br /&gt;
&lt;br /&gt;
An email to the address fortune@example.de now triggers a reply to the sender address containing a random quote.&lt;br /&gt;
&lt;br /&gt;
== Multidomain ==&lt;br /&gt;
&lt;br /&gt;
Postfix can of course also accept emails for several domains. The parameter &amp;quot;virtual_mailbox_domains&amp;quot; exists for this:&lt;br /&gt;
&lt;br /&gt;
 virtual_mailbox_domains = example.de  example.com  ihr-koennt-mich-alle.de&lt;br /&gt;
The variable $mydomain should then be removed from mydestination.&lt;br /&gt;
&lt;br /&gt;
== Special DNS Records ==&lt;br /&gt;
=== SPF (Sender Policy Framework) ===&lt;br /&gt;
With a TXT resource record you can define the list of mail servers allowed to use the mail domain as a sender. If someone forges a fake email from another system, it can be rejected.&lt;br /&gt;
&lt;br /&gt;
Beispiel für einen DNS TXT Record:&lt;br /&gt;
 IN TXT  &amp;quot;v=spf1 mx:example.de a:foo.example.de ip4:8.15.47.11/32 ip6:2008:15:5:47::11/48 ip6:2008:15:5:47::12/48 -all&amp;quot;&lt;br /&gt;
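A quick plausibility check of such a record string can be scripted; this only tests the rough envelope (version tag up front, a qualified all mechanism at the end), not the individual mechanisms:&lt;br /&gt;

```shell
# Rough shape check for an SPF TXT record: must start with v=spf1 and end
# with a qualified "all" mechanism. Not a full validator.
spf='v=spf1 mx:example.de a:foo.example.de ip4:8.15.47.11/32 -all'
if printf '%s\n' "$spf" | grep -Eq '^v=spf1 .+ [-~?+]all$'; then
  result=ok
else
  result=bad
fi
echo "$result"   # prints: ok
```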
&lt;br /&gt;
Howto: &amp;lt;br&amp;gt;&lt;br /&gt;
https://dmarcian.com/create-spf-record/ &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.open-spf.org/SPF_Record_Syntax/&lt;br /&gt;
&lt;br /&gt;
Integrating SPF into Postfix:&lt;br /&gt;
&lt;br /&gt;
The domain is now protected against abuse via fake emails. But there is also the other side to consider: Postfix should likewise check the SPF records of other email domains and reject emails if necessary.&lt;br /&gt;
https://makeityourway.de/enabling-spf-sender-policy-framework-checking-on-postfix/&lt;br /&gt;
&lt;br /&gt;
In short, what to watch out for:&lt;br /&gt;
 # apt install postfix-policyd-spf-python&lt;br /&gt;
The config file &#039;&#039;/etc/postfix-policyd-spf-python/policyd-spf.conf&#039;&#039; already ships usable defaults; optionally you can add a whitelist, e.g.&lt;br /&gt;
&lt;br /&gt;
 Domain_Whitelist = example.com &lt;br /&gt;
Add to master.cf:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 policyd-spf  unix  -     n       n       -       -       spawn &lt;br /&gt;
  user=policyd-spf argv=/usr/bin/policyd-spf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add to main.cf:&lt;br /&gt;
 smtpd_recipient_restrictions =&lt;br /&gt;
 (...)&lt;br /&gt;
 check_policy_service unix:private/policyd-spf&lt;br /&gt;
 (...)&lt;br /&gt;
&#039;&#039;&#039;Careful:&#039;&#039;&#039; If there already is a check_policy_service entry, do &#039;&#039;&#039;not&#039;&#039;&#039; append another value to it; open a new line instead!&lt;br /&gt;
 policyd-spf_time_limit = 3600s&lt;br /&gt;
&lt;br /&gt;
 # postfix reload&lt;br /&gt;
&lt;br /&gt;
Feed in a few test emails and watch mail.log.&lt;br /&gt;
&lt;br /&gt;
=== DMARC (Domain based Message Authentication, Reporting and Conformance) ===&lt;br /&gt;
https://dmarcian.com/dmarc-record/&lt;br /&gt;
&lt;br /&gt;
Example of a DNS TXT record:&lt;br /&gt;
 _dmarc                  IN TXT      &amp;quot;v=DMARC1;p=quarantine;rua=mailto:postmaster@example.de&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In this case suspicious emails are moved to a quarantine folder and a report is sent to the postmaster.&lt;br /&gt;
For the integration into Postfix there is the package opendmarc.&lt;br /&gt;
Implementing SPF, DKIM and DMARC in Postfix:&lt;br /&gt;
&lt;br /&gt;
https://www.skelleton.net/2015/03/21/how-to-eliminate-spam-and-protect-your-name-with-dmarc/&lt;br /&gt;
(untested)&lt;br /&gt;
&lt;br /&gt;
== Useful commands ==&lt;br /&gt;
Create a database file from a text file:&lt;br /&gt;
 postmap &amp;lt;filename&amp;gt;&lt;br /&gt;
Show all config parameters:&lt;br /&gt;
  postconf&lt;br /&gt;
Check the configuration: &lt;br /&gt;
  postfix check&lt;br /&gt;
Inspect the mail queue: &lt;br /&gt;
  mailq&lt;br /&gt;
Deliver all messages in the queue: &lt;br /&gt;
  postqueue -f&lt;br /&gt;
Deliver only one specific message: &lt;br /&gt;
  postqueue -i &amp;lt;ID&amp;gt;&lt;br /&gt;
Delete a message: &lt;br /&gt;
  postsuper -d &amp;lt;ID&amp;gt;&lt;br /&gt;
Delete all messages (!): &lt;br /&gt;
  postsuper -d ALL&lt;br /&gt;
Inspect the content of a message: &lt;br /&gt;
  postcat -vq &amp;lt;ID&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Logfile ==&lt;br /&gt;
&lt;br /&gt;
Everything is logged to &#039;&#039;/var/log/mail.log&#039;&#039;; errors additionally go to &#039;&#039;/var/log/mail.err&#039;&#039; and warnings to &#039;&#039;/var/log/mail.warn&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;Pro tip: create an alias:&lt;br /&gt;
 alias maillog=&#039;tail -f /var/log/mail.log&#039;&lt;br /&gt;
&lt;br /&gt;
== Greylisting and Antispam ==&lt;br /&gt;
&lt;br /&gt;
To fight the flood of spam there is the handy program &#039;&#039;&#039;&amp;quot;Postgrey&amp;quot;&#039;&#039;&#039;. On Debian it can be installed as a package. It is hooked into main.cf in the smtpd_recipient_restrictions section.&lt;br /&gt;
 smtpd_recipient_restrictions = &lt;br /&gt;
  permit_mynetworks&lt;br /&gt;
  permit_sasl_authenticated&lt;br /&gt;
  permit_tls_clientcerts&lt;br /&gt;
  reject_unauth_destination&lt;br /&gt;
  &#039;&#039;&#039;reject_non_fqdn_sender&#039;&#039;&#039;&lt;br /&gt;
  &#039;&#039;&#039;reject_non_fqdn_recipient&#039;&#039;&#039;&lt;br /&gt;
  &#039;&#039;&#039;reject_rbl_client bl.spamcop.net&#039;&#039;&#039;&lt;br /&gt;
  &#039;&#039;&#039;check_policy_service inet:127.0.0.1:10023&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
(The blacklist &#039;&#039;dnsbl.sorbs.net&#039;&#039; was left out here because it blocks pretty much everything, e.g. all yahoo or gmx addresses.)&lt;br /&gt;
For this to work, Postgrey itself still has to be brought up, of course.&lt;br /&gt;
For that, edit the file &#039;&#039;/etc/default/postgrey&#039;&#039;. An example:&lt;br /&gt;
 POSTGREY_OPTS=&amp;quot;--inet=10023 --auto-whitelist-clients=8&amp;quot;&lt;br /&gt;
 POSTGREY_TEXT=&amp;quot;Busy. Come back in 5 minutes.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The service thus listens on port 10023. In the example above a sender is automatically whitelisted on its 8th successful delivery attempt (optional parameter &#039;&#039;--auto-whitelist-clients&#039;&#039;; raise the number or drop the parameter entirely if needed).&lt;br /&gt;
&lt;br /&gt;
Then change into the directory &#039;&#039;/etc/postgrey&#039;&#039;. There are two whitelists there. The senders go into &#039;&#039;&#039;whitelist_clients&#039;&#039;&#039;, which already contains IPs and domains of various providers. You can add entries of your own there (e.g. example.ch).&lt;br /&gt;
&lt;br /&gt;
In the file &#039;&#039;&#039;whitelist_recipients&#039;&#039;&#039; you can list all recipients of your own domain that must always receive email in any case, e.g. postmaster@, abuse@.&amp;lt;br&amp;gt;&lt;br /&gt;
Note: &#039;&#039;&#039;the files must be sourced explicitly&#039;&#039;&#039;; this does not happen automatically. This is done via the POSTGREY_OPTS:&lt;br /&gt;
 POSTGREY_OPTS=&amp;quot;$POSTGREY_OPTS --whitelist-clients=/etc/postgrey/whitelist_clients --whitelist-recipients=/etc/postgrey/whitelist_recipients&amp;quot;&lt;br /&gt;
&lt;br /&gt;
After making the changes, (re)start the postgrey service.&lt;br /&gt;
  # service postgrey restart&lt;br /&gt;
Check that the service is running, e.g. with:&lt;br /&gt;
 # lsof -i :10023&lt;br /&gt;
Then reload Postfix&lt;br /&gt;
 # postfix reload&lt;br /&gt;
and keep an eye on the mailbox(es) regarding spam volume.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;(Source: article &amp;quot;Postzusteller&amp;quot;, Admin-Magazin, issue 03-2013)&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Dovecot =&lt;br /&gt;
&lt;br /&gt;
Open-source IMAP server that delivers emails into mailboxes via POP3, IMAP or IMAPS. Only IMAPS is covered below.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
It is advisable to install Dovecot on the same system as Postfix. Other setups are not covered here.&lt;br /&gt;
&lt;br /&gt;
Install the imapd via&lt;br /&gt;
 # apt install dovecot-imapd&lt;br /&gt;
&lt;br /&gt;
This covers all basic functions of email delivery. For extended options such as filtering, further dovecot packages like &#039;&#039;&#039;dovecot-antispam, dovecot-sieve&#039;&#039;&#039; can be installed.&lt;br /&gt;
&lt;br /&gt;
Create a user (here &#039;&#039;vmail&#039;&#039;) as owner of the mailboxes:&lt;br /&gt;
&lt;br /&gt;
 useradd -u 4000 -m -d /home/vmail -s /usr/sbin/nologin vmail&lt;br /&gt;
&lt;br /&gt;
== Konfiguration ==&lt;br /&gt;
&lt;br /&gt;
Adapt the config files in &#039;&#039;/etc/dovecot/conf.d&#039;&#039;.&lt;br /&gt;
The file &#039;&#039;/etc/dovecot/dovecot.conf&#039;&#039; includes all files under conf.d/*.conf by default.&lt;br /&gt;
&lt;br /&gt;
=== User management ===&lt;br /&gt;
&lt;br /&gt;
Here is an example where the users are kept in a separate file.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;10-auth.conf:&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
disable_plaintext_auth = no&lt;br /&gt;
auth_username_format = %n&lt;br /&gt;
auth_master_user_separator = *&lt;br /&gt;
auth_mechanisms = plain login &lt;br /&gt;
!include auth-master.conf.ext&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If not authenticating via pam:&lt;br /&gt;
 #!include auth-system.conf.ext&lt;br /&gt;
&lt;br /&gt;
Plaintext auth can be allowed here. For the communication between Postfix and Dovecot it does not matter anyway, since both services sit on the same server, and access from a MUA will go over TLS/SSL (see below), so the password never crosses the network unprotected.&lt;br /&gt;
&lt;br /&gt;
Create a master user (optional):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
doveadm pw -p supergeheim -s SHA256-CRYPT -u administrator@example.de &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Paste the output together with the username into the file master-users.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat  ../master-users     &lt;br /&gt;
administrator@example.de:{SHA256-CRYPT}$5$9zrt7/e2CDkPmSuA$SNEkm/L4XZcYFAbYkJp5ESl9u35fVBSd4ukO0dm5yp3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create the other users:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
doveadm pw -p strenggeheim -s SHA256-CRYPT -u sunflower@example.de&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
→ /etc/dovecot/users:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sunflower:{SHA256-CRYPT}$5$D3PhhtqUhRXT7cmZ$E5244BpvNafb.9FtbhF9AUfbvw8XpnOJhPyM/q/rRN2:::Sun Flo,,,:/var/mail/example.de/sunflower:/bin/false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Do not use abbreviations like &#039;&#039;%d&#039;&#039; or &#039;&#039;%n&#039;&#039; here, because they are not interpreted at all (by sieve, see below) or only partially (by dovecot).&lt;br /&gt;
&lt;br /&gt;
So that the account can also receive email, add it to the virtual table in the Postfix directory:&lt;br /&gt;
  echo &amp;quot;sunflower@example.de  example.de/spambucket&amp;quot; &amp;gt;&amp;gt; /etc/postfix/virtual&lt;br /&gt;
Activate with&lt;br /&gt;
 postmap virtual&lt;br /&gt;
 postfix reload&lt;br /&gt;
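The lookup Postfix later performs on that table can be simulated with plain awk (no Postfix required; the table entries below are illustrative, the second row is an assumption added for the demo):&lt;br /&gt;

```shell
# Simulate a postmap-style key lookup on the virtual table with awk
# (table contents are illustrative; no Postfix involved).
table='sunflower@example.de  example.de/spambucket
postmaster@example.de  root'
match=$(printf '%s\n' "$table" | awk -v key='sunflower@example.de' '$1 == key { print $2 }')
echo "$match"   # prints: example.de/spambucket
```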
&lt;br /&gt;
=== File permissions ===&lt;br /&gt;
&lt;br /&gt;
The files master-users and users should only be readable by dovecot!&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# chgrp dovecot /etc/dovecot/*users&lt;br /&gt;
# chmod o-r /etc/dovecot/*users&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the mailbox and give the user ownership:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# maildirmake.dovecot /var/mail/&amp;lt;domain&amp;gt;/&amp;lt;username&amp;gt;&lt;br /&gt;
# chown -R vmail:vmail /var/mail/&amp;lt;domain&amp;gt;/&amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create the user in Postfix, in the virtual maps, see above.&lt;br /&gt;
&lt;br /&gt;
Check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# doveadm user &amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Configuring IMAP ===&lt;br /&gt;
Pro tip: back up conf.d first:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rsync -av /etc/dovecot/conf.d/ /etc/dovecot/conf.d.orig&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Adapt the following configuration files in conf.d accordingly:&lt;br /&gt;
* &#039;&#039;&#039;10-auth.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
disable_plaintext_auth = no&lt;br /&gt;
auth_username_format = %n&lt;br /&gt;
auth_master_user_separator = *&lt;br /&gt;
auth_mechanisms = plain login&lt;br /&gt;
&lt;br /&gt;
!include auth-master.conf.ext&lt;br /&gt;
!include auth-system.conf.ext&lt;br /&gt;
!include auth-passwdfile.conf.ext&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;10-mail.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail_location = maildir:/var/mail/%d/%n &lt;br /&gt;
namespace inbox { &lt;br /&gt;
 inbox = yes &lt;br /&gt;
} &lt;br /&gt;
mail_uid = 4000 &lt;br /&gt;
mail_gid = 4000&lt;br /&gt;
mail_privileged_group = mail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;10-master.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
service imap-login { &lt;br /&gt;
  inet_listener imaps { &lt;br /&gt;
   port = 993 &lt;br /&gt;
   ssl = yes &lt;br /&gt;
 } &lt;br /&gt;
}&lt;br /&gt;
service auth { &lt;br /&gt;
 unix_listener auth-userdb { &lt;br /&gt;
   user = vmail  &lt;br /&gt;
   group = vmail &lt;br /&gt;
 } &lt;br /&gt;
 unix_listener /var/spool/postfix/private/auth { &lt;br /&gt;
   mode = 0666 &lt;br /&gt;
   user = postfix &lt;br /&gt;
   group = postfix &lt;br /&gt;
 } &lt;br /&gt;
}&lt;br /&gt;
service stats { &lt;br /&gt;
	unix_listener stats-reader { &lt;br /&gt;
               user = vmail &lt;br /&gt;
               group = vmail &lt;br /&gt;
               mode = 0660 &lt;br /&gt;
     } &lt;br /&gt;
&lt;br /&gt;
	unix_listener stats-writer { &lt;br /&gt;
       user = vmail &lt;br /&gt;
       group = vmail &lt;br /&gt;
       mode = 0660 &lt;br /&gt;
   	} &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;10-ssl.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# (e.g. reuse the Postfix certs)&lt;br /&gt;
ssl = yes &lt;br /&gt;
ssl_cert = &amp;lt;/etc/ssl/certs/mx.example.de.crt &lt;br /&gt;
ssl_key = &amp;lt;/etc/ssl/private/mx.example.de.key &lt;br /&gt;
ssl_client_ca_dir = /etc/ssl/certs &lt;br /&gt;
ssl_dh = &amp;lt;/usr/share/dovecot/dh.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
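Before pointing ssl_cert/ssl_key at the files, it is worth verifying that certificate and key actually belong together. A sketch (the mx.example.de paths above are the real targets; here a throwaway pair is generated in /tmp so the check can run anywhere openssl is installed):&lt;br /&gt;

```shell
# Check that an X.509 certificate matches its RSA private key by comparing
# moduli. A throwaway pair is generated so no real cert material is needed.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj '/CN=mx.example.de' \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
cert_mod=$(openssl x509 -noout -modulus -in /tmp/demo.crt)
key_mod=$(openssl rsa -noout -modulus -in /tmp/demo.key)
if [ "$cert_mod" = "$key_mod" ]; then
  echo "certificate and key match"
fi
```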
&lt;br /&gt;
Generating certificates: see https://wiki.nomorebluescreen.de/index.php?title=Webserver_mit_Apache#Alternative_letsencrypt&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Spoiler:&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
Every time the certificate is replaced, the dovecot service must be restarted so that the new certificate is actually read.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Check which files were touched:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
diff -quw conf.d.orig conf.d&lt;br /&gt;
Files conf.d.orig/10-ssl.conf and conf.d/10-ssl.conf differ&lt;br /&gt;
Files conf.d.orig/15-lda.conf and conf.d/15-lda.conf differ&lt;br /&gt;
Files conf.d.orig/20-imap.conf and conf.d/20-imap.conf differ&lt;br /&gt;
Files conf.d.orig/20-managesieve.conf and conf.d/20-managesieve.conf differ&lt;br /&gt;
Files conf.d.orig/90-sieve.conf and conf.d/90-sieve.conf differ&lt;br /&gt;
Files conf.d.orig/auth-passwdfile.conf.ext and conf.d/auth-passwdfile.conf.ext differ&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dump the complete config&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 # doveconf -n&lt;br /&gt;
&lt;br /&gt;
==== Sieve ====&lt;br /&gt;
Engine for filtering emails&lt;br /&gt;
&lt;br /&gt;
Install dovecot-sieve and dovecot-managesieved&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;15-lda.conf:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lda_mailbox_autocreate = yes&lt;br /&gt;
lda_mailbox_autosubscribe = yes&lt;br /&gt;
protocol lda { &lt;br /&gt;
 mail_plugins = $mail_plugins sieve &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;20-managesieve.conf:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
protocols = $protocols sieve&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;90-sieve.conf:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
plugin { &lt;br /&gt;
 sieve = file:~/sieve;active=~/.dovecot.sieve &lt;br /&gt;
 sieve_default = /var/lib/dovecot/sieve/default.sieve &lt;br /&gt;
 sieve_global_dir = /var/lib/dovecot/sieve &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
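As an illustration, the global fallback script referenced by sieve_default above could contain a minimal rule like this (the subject tag is an assumption, not part of the original setup):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require [&amp;quot;fileinto&amp;quot;];&lt;br /&gt;
# file anything tagged as spam in the subject into the Junk folder&lt;br /&gt;
if header :contains &amp;quot;subject&amp;quot; &amp;quot;***SPAM***&amp;quot; {&lt;br /&gt;
    fileinto &amp;quot;Junk&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;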
Check that the sieve service is running and listening on port 4190.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# service dovecot restart&lt;br /&gt;
# ss -plnt | grep 4190&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Since the user generally cannot edit the sieve rules in his /home directly on the email server, further configuration is done in the email client (see below).&lt;br /&gt;
&lt;br /&gt;
Beware, bug:&amp;lt;br&amp;gt;&lt;br /&gt;
Since sieve/dovecot does not interpret the variable %n in the users file, it should not be used there. Otherwise it can happen that roundcube creates a directory &#039;&#039;%n&#039;&#039; containing a common sieve config for &#039;&#039;&#039;all&#039;&#039;&#039; users.&lt;br /&gt;
&lt;br /&gt;
=== Transport from Postfix to Dovecot ===&lt;br /&gt;
&lt;br /&gt;
Postfix still has to be taught that the emails go to Dovecot.&amp;lt;br&amp;gt;&lt;br /&gt;
Adapt &#039;&#039;&#039;master.cf&#039;&#039;&#039; in Postfix (the entries in the {} belong there literally, do not substitute them!):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dovecot unix - n n - - pipe&lt;br /&gt;
 flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver&lt;br /&gt;
 -a ${recipient} -f ${sender} -d ${user}@${nexthop} -m ${extension}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and add a mailbox_command entry in main.cf:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mailbox_command = /usr/lib/dovecot/dovecot-lda -f &amp;quot;$SENDER&amp;quot; -a &amp;quot;$RECIPIENT&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(https://doc.dovecot.org/configuration_manual/howto/dovecot_lda_postfix/#howto-dovecot-lda-postfix)&lt;br /&gt;
&lt;br /&gt;
Afterwards restart the postfix and dovecot services.&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
Logfiles likewise go (as with postfix) to /var/log/mail.log &amp;lt;br&amp;gt;&lt;br /&gt;
Useful alias:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
alias maillog=&#039;tail -f /var/log/mail.log&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enable debugging with&lt;br /&gt;
 mail_debug = yes&lt;br /&gt;
in the file &lt;br /&gt;
&#039;&#039;10-logging.conf&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Pro tip:&#039;&#039;&#039;&lt;br /&gt;
If the following error shows up in the log:&lt;br /&gt;
&lt;br /&gt;
 Mar 27 08:03:56 aphantopus postfix/pipe[2317]: 521066005D: to=&amp;lt;sunflower@example.de&amp;gt;, relay=dovecot, delay=0.3, delays=0.19/0.04/0/0.07, dsn=2.0.0, status=sent (delivered via dovecot service (lda(sunflower@example.de,)Error: net_connect_unix(/var/run/dovecot/stats-writer) failed: Permission denied))&lt;br /&gt;
&lt;br /&gt;
=&amp;gt; allow &#039;&#039;&#039;stats&#039;&#039;&#039; for the user vmail in &#039;&#039;10-master.conf&#039;&#039; (see above)&lt;br /&gt;
&lt;br /&gt;
= Roundcube =&lt;br /&gt;
&lt;br /&gt;
Handy web frontend for fetching and sending emails&lt;br /&gt;
&lt;br /&gt;
Install mysql-server first, otherwise the installation aborts with an error&lt;br /&gt;
 # apt install mariadb-server roundcube&lt;br /&gt;
Answer the dbconfig-common question with &amp;quot;yes&amp;quot;, set the mysql password.&lt;br /&gt;
Adapt the config file (&#039;&#039;/etc/roundcube/config.inc.php&#039;&#039;):&lt;br /&gt;
 $config[&#039;smtp_server&#039;] = &#039;localhost&#039;;&lt;br /&gt;
 $config[&#039;smtp_port&#039;] = 25;&lt;br /&gt;
&lt;br /&gt;
== Plugins ==&lt;br /&gt;
&lt;br /&gt;
Install the standard plugins&lt;br /&gt;
 # apt install roundcube-plugins&lt;br /&gt;
&lt;br /&gt;
Install further plugins:&lt;br /&gt;
 # apt install roundcube-plugins-extra git curl composer&lt;br /&gt;
(composer is needed for installing plugins; git, because most of them come from github)&amp;lt;br&amp;gt;&lt;br /&gt;
Enable the ones you want in the file &#039;&#039;/etc/roundcube/config.inc.php&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 $config[&#039;plugins&#039;] = array( &lt;br /&gt;
        &#039;compose-addressbook&#039;, &lt;br /&gt;
        &#039;markasjunk2&#039;, &lt;br /&gt;
        &#039;fail2ban&#039; &lt;br /&gt;
 );&lt;br /&gt;
&lt;br /&gt;
Overview of the official plugins:&lt;br /&gt;
&lt;br /&gt;
https://plugins.roundcube.net/&lt;br /&gt;
&lt;br /&gt;
Plugins that are not available as a package:&amp;lt;br&amp;gt;&lt;br /&gt;
Steps:&amp;lt;br&amp;gt;&lt;br /&gt;
* read the README&lt;br /&gt;
* download the plugin as a zip and unpack it to &#039;&#039;/usr/share/roundcube/plugins&#039;&#039;&lt;br /&gt;
* (rename it if necessary)&lt;br /&gt;
* edit &#039;&#039;/etc/roundcube/config.inc.php&#039;&#039;:&lt;br /&gt;
Find the section&lt;br /&gt;
 $config[&#039;plugins&#039;] = array( &lt;br /&gt;
and add the missing plugin&lt;br /&gt;
&lt;br /&gt;
== Filter Plugin for Sieve ==&lt;br /&gt;
&lt;br /&gt;
Careful: do not use the plugin &amp;quot;filter&amp;quot;, use &#039;&#039;&#039;managesieve&#039;&#039;&#039; instead (it is part of the roundcube-plugins package)&lt;br /&gt;
&lt;br /&gt;
There is a howto here:&amp;lt;br&amp;gt;&lt;br /&gt;
https://www.pair.com/support/kb/how-to-add-sieve-filtering-code-in-roundcube/ &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
https://www.pair.com/support/kb/how-to-add-sieve-filtering-in-roundcube/&lt;br /&gt;
&lt;br /&gt;
Note: do not edit the protocols parameter in dovecot.conf, but in &lt;br /&gt;
&#039;&#039;20-managesieve.conf&#039;&#039; (see above):&lt;br /&gt;
&lt;br /&gt;
 protocols = $protocols sieve&lt;br /&gt;
&lt;br /&gt;
Now sieve filter rules can be generated via the web frontend&lt;br /&gt;
&lt;br /&gt;
Beware, bug:&amp;lt;br&amp;gt;&lt;br /&gt;
Sieve creates a sieve directory below the directory defined in mail_location. If you want to store the users&#039; emails under &#039;&#039;/var/mail/&amp;lt;domain&amp;gt;/&amp;lt;username&amp;gt;&#039;&#039;, you will configure:&lt;br /&gt;
&lt;br /&gt;
 mail_location = maildir:/var/mail/%d/%n&lt;br /&gt;
&lt;br /&gt;
However, since dovecot does not interpret %d (see above), the user maildirectory ends up under /var/mail/&amp;lt;username&amp;gt;. Sieve in turn does not interpret %n and creates a directory /var/mail/&amp;lt;domain&amp;gt;/%n/sieve, under which the roundcube.sieve file lives. Thus all users access the same file, which is technically possible but fatal security-wise. Unfortunately no good workaround is known.&lt;br /&gt;
&lt;br /&gt;
== Changing passwords ==&lt;br /&gt;
To give users the ability to change their password themselves, enable the plugin in &#039;&#039;config.inc.php&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$config[&#039;plugins&#039;] = array(&lt;br /&gt;
 (...)&lt;br /&gt;
 &#039;password&#039;&lt;br /&gt;
);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Further settings, if the users are maintained in a password file as described in the &#039;&#039;&#039;Dovecot&#039;&#039;&#039; chapter:&amp;lt;br&amp;gt;&lt;br /&gt;
(we assume the user passwords are hashed with sha512, see above)&lt;br /&gt;
&lt;br /&gt;
 # https://stackoverflow.com/questions/62655236/how-to-enable-password-plugin-on-roundcube&lt;br /&gt;
 $config[&#039;password_algorithm&#039;] = &#039;ssha512&#039;;&lt;br /&gt;
 $config[&#039;password_algorithm_prefix&#039;] = &#039;{SSHA512}&#039;;&lt;br /&gt;
 $config[&#039;password_driver&#039;] = &#039;dovecot_passwdfile&#039;;&lt;br /&gt;
 $config[&#039;password_dovecot_passwdfile_path&#039;] = &#039;/etc/dovecot/users&#039;;&lt;br /&gt;
&lt;br /&gt;
The dovecot users file must then be readable and writable for www-data accordingly:&lt;br /&gt;
 -rw-rw---- 1 dovecot www-data 1240 Dec  2 23:20 /etc/dovecot/users&lt;br /&gt;
&lt;br /&gt;
(Careful, this is risky if the webserver is ever compromised! As an alternative, consider moving the dovecot passwords out into a [mysql] DB)&lt;br /&gt;
&lt;br /&gt;
== Changing identities ==&lt;br /&gt;
&lt;br /&gt;
Normally a user can only send with his own sender address. That is a sensible setting, but whoever wants to switch this feature off for testing purposes can set the following:&lt;br /&gt;
 $config[&#039;identities_level&#039;] = 0;&lt;br /&gt;
Now the user can add further sender addresses via &amp;quot;Settings&amp;quot; (https://www.servercake.blog/multiple-identities-roundcube/)&lt;br /&gt;
&lt;br /&gt;
(Unfortunately no way has been found so far to restrict this to (a) specific user(s))&lt;br /&gt;
&lt;br /&gt;
== Apache Integration ==&lt;br /&gt;
&lt;br /&gt;
Here is an example configuration for a virtual host, so that the Roundcube page is reachable at https://mail.example.de.&lt;br /&gt;
More on this in the chapter [[Webserver mit Apache|apache]]&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;VirtualHost *:443&amp;gt; &lt;br /&gt;
    ServerName mail.example.de &lt;br /&gt;
    ServerAdmin postmaster@example.de &lt;br /&gt;
&lt;br /&gt;
    SSLEngine on &lt;br /&gt;
    SSLCertificateFile         /var/lib/dehydrated/certs/mail.example.de/fullchain.pem &lt;br /&gt;
    SSLCertificateKeyFile /var/lib/dehydrated/certs/mail.example.de/privkey.pem &lt;br /&gt;
&lt;br /&gt;
    DocumentRoot /usr/share/roundcube &lt;br /&gt;
&lt;br /&gt;
    # Includes &lt;br /&gt;
    Include /etc/apache2/conf-available/ssl-encryption.conf &lt;br /&gt;
      &lt;br /&gt;
    LogFormat &amp;quot;%h %l %u %t \&amp;quot;%r\&amp;quot; %&amp;gt;s %b \&amp;quot;%{Referer}i\&amp;quot; \&amp;quot;%{User-agent}i\&amp;quot;&amp;quot; vhost_combined &lt;br /&gt;
    CustomLog /var/log/apache2/mail-ssl.log combined &lt;br /&gt;
    ErrorLog /var/log/apache2/mail-ssl-error.log &lt;br /&gt;
&amp;lt;/VirtualHost&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The roundcube package additionally ships a roundcube.conf, which is installed as /etc/apache2/conf-available/roundcube.conf and activated.&lt;br /&gt;
&lt;br /&gt;
=== PHP ===&lt;br /&gt;
For Roundcube to run at all, the php module must be installed and enabled. On Debian this is done via:&lt;br /&gt;
 # apt install libapache2-mod-php&lt;br /&gt;
&lt;br /&gt;
Somewhat more performant is the use of &#039;&#039;&#039;php-fpm&#039;&#039;&#039; (https://www.zend.com/blog/apache-phpfpm-modphp).&lt;br /&gt;
 # apt install php-fpm&lt;br /&gt;
&lt;br /&gt;
The PHP default upload limit is a meager 2 MB. To raise it to e.g. 50 MB, the following file has to be touched:&lt;br /&gt;
&amp;lt;pre&amp;gt;VERSION=$(php -v| head -n 1 | awk &#039;{ print $2 }&#039; | sed -e &#039;s|.[[:digit:]]*$||&#039;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
* modphp:&lt;br /&gt;
 /etc/php/${VERSION}/apache2/php.ini&lt;br /&gt;
  upload_max_filesize = 50M&lt;br /&gt;
* php-fpm:&lt;br /&gt;
 /etc/php/${VERSION}/fpm/php.ini&lt;br /&gt;
  upload_max_filesize = 50M&lt;br /&gt;
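The VERSION pipeline above can be exercised on a captured sample line, so its behavior is verifiable without php installed (the sample string is illustrative):&lt;br /&gt;

```shell
# Derive the major.minor PHP version the same way as the snippet above,
# but from a captured sample line (illustrative; no php required).
sample='PHP 8.2.7 (cli) (built: Jun  9 2023 19:37:27) (NTS)'
VERSION=$(printf '%s\n' "$sample" | head -n 1 | awk '{ print $2 }' | sed -e 's|\.[0-9]*$||')
echo "$VERSION"   # prints: 8.2
```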
&lt;br /&gt;
= Integration into a MUA =&lt;br /&gt;
Those who do not want to go through the (slow) webmailer can of course use a MUA of their choice. An example follows.&lt;br /&gt;
&lt;br /&gt;
== Thunderbird ==&lt;br /&gt;
&lt;br /&gt;
Settings for the Outgoing Server (SMTP)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Server Name: FQDN of the email server&lt;br /&gt;
Port: 25&lt;br /&gt;
Connection Security: STARTTLS&lt;br /&gt;
Authentication Method: Normal Password&lt;br /&gt;
Username: name of the mailbox user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first time, you will be asked for your mailbox password. Enter it and save it.&lt;br /&gt;
&lt;br /&gt;
Server Settings (IMAP)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Server Type: IMAP Mail Server&lt;br /&gt;
Server Name: FQDN of the email server&lt;br /&gt;
Port: 993&lt;br /&gt;
Username: name of the mailbox user&lt;br /&gt;
Connection Security: SSL/TLS&lt;br /&gt;
Authentication Method: Normal Password&lt;br /&gt;
(Keep the remaining defaults or adjust them as needed)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot thunderbird1.png|900px]]&lt;br /&gt;
[[File:Screenshot thunderbird2.png|900px]]&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=Webserver_mit_Apache&amp;diff=670</id>
		<title>Webserver mit Apache</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=Webserver_mit_Apache&amp;diff=670"/>
		<updated>2026-03-10T21:23:34Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* php einbinden */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;One of the most commonly used web servers is [https://httpd.apache.org Apache]. Apart from the fact that the default configuration already gets you to a simple website fairly quickly, it has the advantage that many modules are already included or can be installed quickly (e.g. libapache2-mod-php). In addition, updates against possible exploits of this widespread web service are delivered regularly, so keep the version up to date!&lt;br /&gt;
At the time this document was written, the latest version was apache2. But we do not rely on that statement and check the version ourselves with &lt;br /&gt;
 # apt search apache&lt;br /&gt;
Then we install the latest version:&lt;br /&gt;
 # apt install apache2&lt;br /&gt;
After the installation the service is already running. Check with &lt;br /&gt;
 # service apache2 status&lt;br /&gt;
&lt;br /&gt;
==Configuration files==&lt;br /&gt;
These are located in the directory /etc/apache2. The most important file is apache2.conf, which references further files and subdirectories. &amp;lt;br&amp;gt;&lt;br /&gt;
In a standard /etc/apache2 directory you will find the following files and folders:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;&lt;br /&gt;
apache2.conf    &amp;lt;br&amp;gt;&lt;br /&gt;
conf-enabled  &amp;lt;br&amp;gt;&lt;br /&gt;
magic           &amp;lt;br&amp;gt;&lt;br /&gt;
mods-enabled  &amp;lt;br&amp;gt;&lt;br /&gt;
sites-available &amp;lt;br&amp;gt;&lt;br /&gt;
conf-available &amp;lt;br&amp;gt;&lt;br /&gt;
envvars &amp;lt;br&amp;gt;&lt;br /&gt;
mods-available &amp;lt;br&amp;gt;&lt;br /&gt;
ports.conf &amp;lt;br&amp;gt;&lt;br /&gt;
sites-enabled&lt;br /&gt;
&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is the directory structure on Debian. Other distributions may behave differently; under CentOS, for example, no sites-available/sites-enabled are used.&lt;br /&gt;
As said, the &#039;&#039;apache2.conf&#039;&#039; is the main configuration file. Touch it only with care, because there is a risk that the changes get overwritten by the next update.&lt;br /&gt;
&lt;br /&gt;
The actual configuration files are located in &#039;&#039;sites-available&#039;&#039; and &#039;&#039;conf-available&#039;&#039;. Mostly, &#039;&#039;sites-available&#039;&#039; is used for virtual hosts (see below) and &#039;&#039;conf-available&#039;&#039; for config that should apply to certain services. As a rule, webservice-related software packages (e.g. Nagios or Mediawiki) also store their config here.&lt;br /&gt;
The files placed here are not active yet, though. That happens via symlinks in the directories &#039;&#039;conf-enabled&#039;&#039; and &#039;&#039;sites-enabled&#039;&#039;. &#039;&#039;&#039;Make sure the symlinks always end in .conf!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Modules ==&lt;br /&gt;
&lt;br /&gt;
Modules are small pieces of code or config snippets that extend the functionality of the web server. There are, for example, modules for other scripting languages (php, cgi), for encryption, authentication and much more.&lt;br /&gt;
Some standard modules are already shipped in the directory &#039;&#039;mods-available&#039;&#039;. They can be activated via a symlink in the &#039;&#039;mods-enabled&#039;&#039; directory or with the a2enmod command, e.g.&lt;br /&gt;
 # a2enmod ssl&lt;br /&gt;
&lt;br /&gt;
==Where to put the web pages==&lt;br /&gt;
The default folder for web pages is /var/www/. If you want to define additional or different directories, you can do so with the &#039;&#039;&#039;DocumentRoot&#039;&#039;&#039; directive in sites-available/&amp;lt;filename&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If the index page is not loaded automatically, the &amp;quot;DirectoryIndex&amp;quot; directive takes care of it, e.g.&lt;br /&gt;
 DirectoryIndex index.php &lt;br /&gt;
or&lt;br /&gt;
 DirectoryIndex index.html&lt;br /&gt;
&lt;br /&gt;
After configuration changes, Apache is reloaded with&lt;br /&gt;
 # systemctl reload apache2&lt;br /&gt;
&lt;br /&gt;
==Virtual Hosting==&lt;br /&gt;
Suppose you decide to host several domains on one web server. That is what the virtual host entries in the file sites-available/&amp;lt;domain&amp;gt;.conf are for (strictly speaking the file can be named anything, but for clarity you will usually use domain names). In our case we want to host two domains: example.de under /var/www/example and nomorebluescreen.com under /var/www/nmbs. For this purpose, two files are created under /etc/apache2/sites-available, e.g. example.conf and nmbs.conf.&lt;br /&gt;
&lt;br /&gt;
First file:&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;VirtualHost *:80&amp;gt;&lt;br /&gt;
        ServerName www.example.de&lt;br /&gt;
        ServerAlias ...&lt;br /&gt;
        ServerAdmin webmaster@example.de&lt;br /&gt;
        (...)&lt;br /&gt;
  &amp;lt;/VirtualHost&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Second file:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;VirtualHost *:80&amp;gt;&lt;br /&gt;
        ServerName www.nomorebluescreen.com&lt;br /&gt;
        (...)&lt;br /&gt;
 &amp;lt;/VirtualHost&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If a domain has a DNS record but no config file yet, requests for it are by default served by the domain that comes first alphabetically.&amp;lt;br&amp;gt;&lt;br /&gt;
To make the change effective, these sites still have to be linked into sites-enabled. Make sure the file extension is .conf, otherwise the file is ignored.&lt;br /&gt;
&lt;br /&gt;
The corresponding names must of course be present in DNS (or, for test purposes, in /etc/hosts)!&lt;br /&gt;
&lt;br /&gt;
==Test==&lt;br /&gt;
Default settings: connecting with a browser to the web server&#039;s IP should show the configured index page or Apache&#039;s default &amp;quot;It works!&amp;quot; page.&lt;br /&gt;
Virtual hosting: after entering the different server names, the corresponding websites should appear.&lt;br /&gt;
&lt;br /&gt;
A simple config check can be done with&lt;br /&gt;
 apache2ctl -t&lt;br /&gt;
This should return &amp;quot;Syntax OK&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
==Logfiles==&lt;br /&gt;
These are located in the directory /var/log/apache2&lt;br /&gt;
&lt;br /&gt;
With the directive&lt;br /&gt;
 CustomLog&lt;br /&gt;
or&lt;br /&gt;
 ErrorLog&lt;br /&gt;
in the corresponding config file, the path to the log file can be specified (separately per virtual host if desired).&lt;br /&gt;
&lt;br /&gt;
You can also define the logged fields yourself: https://httpd.apache.org/docs/2.4/logs.html#accesslog&lt;br /&gt;
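For example, a format that additionally logs the response time in microseconds (%D) could look like this (a sketch; the format name &amp;quot;timed&amp;quot; and the log path are made up):&lt;br /&gt;
 LogFormat &amp;quot;%h %l %u %t \&amp;quot;%r\&amp;quot; %&amp;gt;s %O %D&amp;quot; timed&lt;br /&gt;
 CustomLog /var/log/apache2/timed_access.log timed&lt;br /&gt;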
&lt;br /&gt;
Here are a few examples of how to search the log files:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
requests per hour&lt;br /&gt;
cat access.log | cut -d[ -f2 | cut -d] -f1 | awk -F: &#039;{print $2&amp;quot;:00&amp;quot;}&#039; | sort -n | uniq -c&lt;br /&gt;
&lt;br /&gt;
requests per hour for a given date&lt;br /&gt;
grep &amp;quot;23/Jan&amp;quot; access.log | cut -d[ -f2 | cut -d] -f1 | awk -F: &#039;{print $2&amp;quot;:00&amp;quot;}&#039; | sort -n | uniq -c&lt;br /&gt;
&lt;br /&gt;
requests per hour for a given IP&lt;br /&gt;
grep &amp;quot;XX.XX.XX.XX&amp;quot; access.log | cut -d[ -f2 | cut -d] -f1 | awk -F: &#039;{print $2&amp;quot;:00&amp;quot;}&#039; | sort -n | uniq -c&lt;br /&gt;
&lt;br /&gt;
requests per minute:&lt;br /&gt;
cat access.log | cut -d[ -f2 | cut -d] -f1 | awk -F: &#039;{print $2&amp;quot;:&amp;quot;$3}&#039; | sort -nk1 -nk2 | uniq -c&lt;br /&gt;
&lt;br /&gt;
requests per minute for date:&lt;br /&gt;
grep &amp;quot;02/Nov/2017&amp;quot; access.log | cut -d[ -f2 | cut -d] -f1 | awk -F: &#039;{print $2&amp;quot;:&amp;quot;$3}&#039; | sort -nk1 -nk2 | uniq -c&lt;br /&gt;
&lt;br /&gt;
requests per minute for url:&lt;br /&gt;
grep &amp;quot;[url]&amp;quot; access.log | cut -d[ -f2 | cut -d] -f1 | awk -F: &#039;{print $2&amp;quot;:&amp;quot;$3}&#039; | sort -nk1 -nk2 | uniq -c&lt;br /&gt;
&lt;br /&gt;
requests per minute for a given IP&lt;br /&gt;
grep &amp;quot;XX.XX.XX.XX&amp;quot; access.log | cut -d[ -f2 | cut -d] -f1 | awk -F: &#039;{print $2&amp;quot;:&amp;quot;$3}&#039; | sort -nk1 -nk2 | uniq -c&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See also http://www.inmotionhosting.com/support/website/server-usage/view-level-of-traffic-with-apache-access-log&lt;br /&gt;
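Another frequent question is which client IPs cause the most requests. A minimal sketch (the sample log is generated inline; against a real log, only the last line is needed):&lt;br /&gt;

```shell
# Create a small sample access log (first field = client IP, as in the common log format).
cat > /tmp/access.log <<'EOF'
192.0.2.1 - - [23/Jan/2026:10:00:01 +0000] "GET / HTTP/1.1" 200 512
192.0.2.2 - - [23/Jan/2026:10:00:02 +0000] "GET /a HTTP/1.1" 200 128
192.0.2.1 - - [23/Jan/2026:10:01:03 +0000] "GET /b HTTP/1.1" 404 64
EOF
# Top requesting IPs, busiest first.
awk '{print $1}' /tmp/access.log | sort | uniq -c | sort -rn | head
```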
&lt;br /&gt;
==Enabling PHP==&lt;br /&gt;
At least on Debian this is very straightforward, since only the php package and the corresponding module need to be installed.&lt;br /&gt;
&lt;br /&gt;
With &lt;br /&gt;
 # aptitude search php&lt;br /&gt;
we check what the current version is. &amp;quot;aptitude install php&amp;lt;version&amp;gt;&amp;quot; installs it.&lt;br /&gt;
&lt;br /&gt;
To make Apache PHP-capable, the PHP module must also be installed (it may already be included with the Apache installation):&lt;br /&gt;
 # aptitude install libapache2-mod-php&lt;br /&gt;
&lt;br /&gt;
Check whether the module is already loaded:&lt;br /&gt;
 # ls /etc/apache2/mods-enabled/|grep php&lt;br /&gt;
Depending on the version, this should show:&lt;br /&gt;
 php7.0.conf&lt;br /&gt;
 php7.0.load&lt;br /&gt;
&lt;br /&gt;
If not, there is the &#039;&#039;&#039;a2enmod&#039;&#039;&#039; command to enable modules:&lt;br /&gt;
 # a2enmod php7&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Apache is restarted automatically as part of the PHP installation.&lt;br /&gt;
&lt;br /&gt;
A better alternative is the more modern &#039;&#039;&#039;php-fpm&#039;&#039;&#039;. It can also be installed as a package.&lt;br /&gt;
Note: php-fpm has to be enabled separately for each version, e.g.:&lt;br /&gt;
 # a2enconf php8.2-fpm&lt;br /&gt;
Afterwards the web server has to be restarted.&lt;br /&gt;
&lt;br /&gt;
https://www.simplified.guide/apache/configure-php-fpm&lt;br /&gt;
&lt;br /&gt;
Here is an example of a simple test script:&lt;br /&gt;
 &amp;lt;?php&lt;br /&gt;
    phpinfo();&lt;br /&gt;
 ?&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When it is run, the browser shows the following listing&lt;br /&gt;
&lt;br /&gt;
[[File:Phpinfo2.jpeg|500px|center|phpinfo]]&lt;br /&gt;
&lt;br /&gt;
== Allowing directory listing ==&lt;br /&gt;
If an index file (index.html, index.php, ...) is missing in a subdirectory of DocumentRoot, the user normally gets an error message&lt;br /&gt;
&lt;br /&gt;
 Forbidden&lt;br /&gt;
 You don&#039;t have permission to access /images/ on this server.&lt;br /&gt;
 Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.&lt;br /&gt;
&lt;br /&gt;
If you instead want the user to get a listing of all files, configure the following&lt;br /&gt;
 &amp;lt;Directory /var/www/images&amp;gt;&lt;br /&gt;
 	Options +Indexes&lt;br /&gt;
 &amp;lt;/Directory&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the option should apply to all directories whose path contains the string &amp;quot;images&amp;quot;, a DirectoryMatch is the better choice&lt;br /&gt;
 &amp;lt;DirectoryMatch images&amp;gt;&lt;br /&gt;
        Options +Indexes&lt;br /&gt;
 &amp;lt;/DirectoryMatch&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Security ==&lt;br /&gt;
=== .htaccess and .htpasswd ===&lt;br /&gt;
&lt;br /&gt;
The files mentioned above are well suited for password-protecting individual folders. The .htaccess defines who may log in at all and how; the .htpasswd contains the passwords&lt;br /&gt;
&lt;br /&gt;
Example of an .htaccess:&lt;br /&gt;
 AuthType basic &lt;br /&gt;
 AuthName &amp;quot;Access limited&amp;quot;&lt;br /&gt;
 AuthUserFile /var/www/download/.htpasswd&lt;br /&gt;
 Require valid-user&lt;br /&gt;
&lt;br /&gt;
The .htpasswd is created with the following command&lt;br /&gt;
 htpasswd -c &amp;lt;filename&amp;gt; &amp;lt;username&amp;gt;&lt;br /&gt;
You are then prompted for the password. Existing users in an htpasswd are changed by omitting the -c option&lt;br /&gt;
&lt;br /&gt;
More useful options:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 $ htpasswd -n &amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(only produces output that you can paste somewhere)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 $ htpasswd -B .htpasswd &amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Uses bcrypt instead of MD5, a more secure algorithm.&lt;br /&gt;
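If htpasswd is not at hand, openssl can produce a compatible entry as well (a sketch; this uses the MD5-based APR1 scheme, so htpasswd -B with bcrypt remains preferable; user name and file path are made up):&lt;br /&gt;

```shell
# Generate an htpasswd-style "user:hash" line with the Apache APR1 (MD5) scheme.
USER=testuser                        # hypothetical user name
HASH=$(openssl passwd -apr1 's3cret')
echo "$USER:$HASH" > /tmp/htpasswd.demo
cat /tmp/htpasswd.demo
```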
&lt;br /&gt;
The .htaccess automatically password-protects all subdirectories as well. If that is not what you want, you have to create an exception in your config for each of those directories, e.g.&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;DirectoryMatch &amp;quot;/fotos/fotoclub/example/images/&amp;quot;&amp;gt;&lt;br /&gt;
                Satisfy any&lt;br /&gt;
 &amp;lt;/DirectoryMatch&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== DoS attacks ===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;mod-evasive&#039;&#039;&#039; module helps fend off (D)DoS attacks.&lt;br /&gt;
The corresponding package is called &#039;&#039;&#039;libapache2-mod-evasive&#039;&#039;&#039;.&amp;lt;br&amp;gt;&lt;br /&gt;
Pro tip: do not set the &#039;&#039;DOSPageCount&#039;&#039; and &#039;&#039;DOSSiteCount&#039;&#039; parameters too low, or you may also block normal web requests.&lt;br /&gt;
&lt;br /&gt;
Config example:&amp;lt;br&amp;gt;&lt;br /&gt;
https://github.com/kdx99/apache-role/blob/main/vars/main.yaml&amp;lt;br&amp;gt;&lt;br /&gt;
(section # mod_evasive)&lt;br /&gt;
&lt;br /&gt;
More here: &amp;lt;br&amp;gt;&lt;br /&gt;
* https://phoenixnap.com/kb/apache-mod-evasive&lt;br /&gt;
* https://www.howtogeek.com/devops/how-to-configure-mod_evasive-for-apache-ddos-protection/&lt;br /&gt;
&lt;br /&gt;
=== Fail2ban ===&lt;br /&gt;
To slow down brute-force attacks there is fail2ban, a tool that responds to too many failed login attempts with an iptables rule against the offending source IP.&lt;br /&gt;
&lt;br /&gt;
==== Example ====&lt;br /&gt;
&lt;br /&gt;
First install the &amp;quot;fail2ban&amp;quot; package, then change into the /etc/fail2ban directory.&lt;br /&gt;
&lt;br /&gt;
The options in &#039;&#039;fail2ban.conf&#039;&#039; are normally already set to sensible values. If problems occur, the log level can be raised&lt;br /&gt;
&lt;br /&gt;
 loglevel = 4 # default=3&lt;br /&gt;
Caution: set it back to the default (3) after debugging, otherwise the log files grow very large very quickly!&amp;lt;br&amp;gt;&lt;br /&gt;
Now the jails for the individual services can be set up in &#039;&#039;jail.conf&#039;&#039;. For some there are also subsections, e.g. for Apache:&lt;br /&gt;
&lt;br /&gt;
 [apache-auth]&lt;br /&gt;
 &lt;br /&gt;
 enabled  = true &lt;br /&gt;
 port     = http,https&lt;br /&gt;
 filter   = apache-auth&lt;br /&gt;
 logpath  = %(apache_error_log)s&lt;br /&gt;
 maxretry = 10 &lt;br /&gt;
 bantime = 5m&lt;br /&gt;
&lt;br /&gt;
In the general options you can define excludes for individual IPs or networks&lt;br /&gt;
 ignoreip = 127.0.0.1/8 192.168.99.0/24&lt;br /&gt;
Caution: use a space as separator. Commas and the like are accepted, but the ban action will not work afterwards.&lt;br /&gt;
For &amp;quot;banaction&amp;quot;, the general section should be set to &amp;quot;iptables-multiport&amp;quot; or &amp;quot;nftables-multiport&amp;quot;. Restart the service, done.&lt;br /&gt;
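Local overrides are best placed in a &#039;&#039;jail.local&#039;&#039; instead of editing jail.conf directly, since a .local file survives package updates. A sketch (the values are examples):&lt;br /&gt;
 [DEFAULT]&lt;br /&gt;
 ignoreip  = 127.0.0.1/8 192.168.99.0/24&lt;br /&gt;
 banaction = nftables-multiport&lt;br /&gt;
 bantime   = 10m&lt;br /&gt;
 &lt;br /&gt;
 [apache-auth]&lt;br /&gt;
 enabled   = true&lt;br /&gt;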
&lt;br /&gt;
&amp;lt;u&amp;gt;Verifying success&amp;lt;/u&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Log in incorrectly n times and run &amp;quot;iptables -L -n -v&amp;quot; (for testing, a somewhat shorter bantime may help)&lt;br /&gt;
&lt;br /&gt;
Note: since bullseye, nftables rather than iptables is the default. fail2ban cares little about that, tries to generate iptables rules and throws an error.&amp;lt;br&amp;gt;&lt;br /&gt;
Remedy: in jail.conf, replace everything related to &#039;&#039;iptables-&#039;&#039; with &#039;&#039;nftables-&#039;&#039;, e.g. nftables-multiport, nftables-allports, nftables-multiport-log&lt;br /&gt;
&lt;br /&gt;
After n failed attempts, issue the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 # nft list table inet f2b-table&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Debugging ====&lt;br /&gt;
If problems occur, the service can also be started in the foreground:&lt;br /&gt;
 # fail2ban-server -f&lt;br /&gt;
and, in another shell,&lt;br /&gt;
 # fail2ban-client reload&lt;br /&gt;
&lt;br /&gt;
===Encrypted websites===&lt;br /&gt;
The enemy is listening (rumor has it, at times the friend as well). So it is best not to send anything over the network in plain text. With Apache, encryption is usually done with TLS; the authenticity of the website is verified by a certificate.&lt;br /&gt;
Anyone running a site commercially (a web shop or similar) will most likely have the certificate signed by an official certificate authority. Private users sometimes sign it themselves (not recommended).&lt;br /&gt;
&lt;br /&gt;
==== Alternative: create the certificate yourself ====&lt;br /&gt;
This option is discouraged, because there is no official CA behind the certificate, i.e. clients cannot tell whether the website is trustworthy. Ugly warnings are the result, or in the worst case the page is not delivered at all. Better to use an official CA (two alternatives follow further below).&lt;br /&gt;
&lt;br /&gt;
Steps from certificate creation to an encrypted website:&lt;br /&gt;
&lt;br /&gt;
1. Create a directory for certificates&lt;br /&gt;
 mkdir /etc/apache2/ssl/&lt;br /&gt;
2. Create the request, key and certificate in one step&lt;br /&gt;
 openssl req -x509 -nodes -days 720 -newkey rsa:4096 -keyout /etc/apache2/ssl/www.example.de.key -out /etc/apache2/ssl/www.example.de.crt&lt;br /&gt;
&lt;br /&gt;
Make sure that the FQDN is the server name under which the web server is reachable (usually identical to the ServerName value in Apache)&lt;br /&gt;
&lt;br /&gt;
==== Alternative: create a certificate request for an official CA ====&lt;br /&gt;
1. Create the key&lt;br /&gt;
 openssl genrsa -out www.example.de.key 4096&lt;br /&gt;
2. Create the certificate request&lt;br /&gt;
 openssl req -new -key www.example.de.key -out www.example.de.csr&lt;br /&gt;
3. Send the .csr file to the certificate authority&amp;lt;br&amp;gt;&lt;br /&gt;
Usually a file such as www.example.de.crt comes back&amp;lt;br&amp;gt;&lt;br /&gt;
4. Verify the certificate:&lt;br /&gt;
 openssl x509 -in www.example.de.crt -text -noout&lt;br /&gt;
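To check that a certificate really belongs to a given key, compare the moduli of the two. A self-contained sketch (it generates a throwaway pair under /tmp; with a real pair, point the two comparison commands at your .key and .crt):&lt;br /&gt;

```shell
# Throwaway key + self-signed certificate, non-interactive, for demonstration only.
openssl req -x509 -nodes -days 1 -newkey rsa:2048 -subj "/CN=www.example.de" \
    -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
# If the two digests match, the certificate belongs to the key.
openssl x509 -noout -modulus -in /tmp/demo.crt | openssl md5
openssl rsa  -noout -modulus -in /tmp/demo.key | openssl md5
```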
&lt;br /&gt;
Alternatively, the &#039;&#039;&#039;CA.pl&#039;&#039;&#039; tool included in the openssl package can be used (not covered further here, as it is untested).&lt;br /&gt;
&lt;br /&gt;
==== Configuring the web server ====&lt;br /&gt;
The following steps are identical again, regardless of how the certificate was created.&lt;br /&gt;
&lt;br /&gt;
1. Enable mod_ssl&lt;br /&gt;
 cd /etc/apache2/mods-enabled/&lt;br /&gt;
 ln -s ../mods-available/ssl.conf&lt;br /&gt;
 ln -s ../mods-available/ssl.load&lt;br /&gt;
&lt;br /&gt;
2. Set up a virtual host on port 443&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;VirtualHost *:443&amp;gt;&lt;br /&gt;
        ServerAdmin webmaster@example.de&lt;br /&gt;
        ServerName  www.example.de:443&lt;br /&gt;
        ServerAlias mail.example.de:443&lt;br /&gt;
        SSLEngine on&lt;br /&gt;
        SSLCertificateFile       /etc/apache2/ssl/www.example.de.crt&lt;br /&gt;
        SSLCertificateKeyFile    /etc/apache2/ssl/www.example.de.key&lt;br /&gt;
        SSLProtocol All -SSLv2 -SSLv3 -TLSv1 -TLSv1.1&lt;br /&gt;
        SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH&lt;br /&gt;
 &amp;lt;/VirtualHost&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The latter parameters allow only secure protocols (TLS &amp;gt;= 1.2) and cipher suites with strong encryption.&amp;lt;br&amp;gt;&lt;br /&gt;
An example of suitable cipher suites can be found at https://cipherlist.eu/.&lt;br /&gt;
&lt;br /&gt;
3. Redirects&amp;lt;br&amp;gt;&lt;br /&gt;
If you do not want to leave it to chance whether the user types http:// or https:// into the browser, you can set up an automatic redirect to the encrypted site:&lt;br /&gt;
 a2enmod rewrite&lt;br /&gt;
&lt;br /&gt;
Then add the following to the VirtualHost:80 section (e.g. redirect to https whenever &amp;quot;download&amp;quot; appears in the URL)&lt;br /&gt;
&lt;br /&gt;
        RewriteEngine On&lt;br /&gt;
        RewriteCond %{HTTPS} !=on&lt;br /&gt;
        RewriteRule ^/?download(.*) https://%{SERVER_NAME}/download/$1 [R,L]&lt;br /&gt;
&lt;br /&gt;
=====Error=====&lt;br /&gt;
&lt;br /&gt;
If error messages like the following show up when renewing&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
An unexpected error occurred:&lt;br /&gt;
OpenSSL.SSL.Error: [(&#039;x509 certificate routines&#039;, &#039;X509_load_cert_crl_file&#039;, &#039;no certificate or crl found&#039;)]&lt;br /&gt;
Please see the logfiles in /var/log/letsencrypt for more details.&lt;br /&gt;
&amp;lt;/console&amp;gt;&lt;br /&gt;
there is a problem with the default ca-certificates. In that case, this helps:&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
# update-ca-certificates&lt;br /&gt;
&amp;lt;/console&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative: [https://letsencrypt.org/ letsencrypt]====&lt;br /&gt;
&lt;br /&gt;
No openssl commands are needed here. letsencrypt is used as the root CA. The certificates are fetched by a so-called &amp;quot;certbot&amp;quot;, which is set up once and then keeps running as a cron job.&lt;br /&gt;
&lt;br /&gt;
 # apt install python3-certbot-apache&lt;br /&gt;
&lt;br /&gt;
A kind of man page is available with the command&lt;br /&gt;
 # certbot (--help)&lt;br /&gt;
&lt;br /&gt;
===== Create initial certificates =====&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
 # certbot --apache certonly&lt;br /&gt;
&amp;lt;/console&amp;gt;&lt;br /&gt;
(interactive prompts, e.g. FQDN, possibly a config file)&lt;br /&gt;
&lt;br /&gt;
Afterwards the /etc/letsencrypt directory contains plenty of new files. The actual certificates can be found in the &amp;quot;live&amp;quot; directory.&lt;br /&gt;
* archive: old certificates&lt;br /&gt;
* options-ssl-apache.conf: config snippet&lt;br /&gt;
* accounts: private key used for creating the nonce (don&#039;t lose it!)&lt;br /&gt;
The certificates that are created automatically end up in the &#039;&#039;&#039;renewal&#039;&#039;&#039; directory.&lt;br /&gt;
&lt;br /&gt;
The resulting certificates can either be copied into the corresponding directory (/etc/ssl/...), in which case you have to make sure this happens automatically when they are renewed, or you use the direct path in the config, e.g.:&lt;br /&gt;
&lt;br /&gt;
 SSLCertificateFile       /etc/letsencrypt/live/mail.example.de/fullchain.pem&lt;br /&gt;
 SSLCertificateKeyFile    /etc/letsencrypt/live/mail.example.de/privkey.pem&lt;br /&gt;
&lt;br /&gt;
fullchain.pem contains both the intermediate certificate and the actual certificate.&amp;lt;br&amp;gt;&lt;br /&gt;
Of course, certbot can be applied to all virtual hosts.&lt;br /&gt;
&lt;br /&gt;
The file /etc/cron.d/certbot should contain the cron job that checks every 12 hours whether a certificate is valid for less than 30 days and renews it if necessary.&lt;br /&gt;
&lt;br /&gt;
=====Renewing manually=====&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
 # certbot renew&lt;br /&gt;
&amp;lt;/console&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Dehydrated ====&lt;br /&gt;
An alternative to certbot is dehydrated. It requires an open port 80, which a web server generally has. Simply install the &#039;&#039;&#039;dehydrated-apache2&#039;&#039;&#039; package, then add the required hostname to the file domains.txt.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 # echo $HOSTNAME &amp;gt;&amp;gt; /etc/dehydrated/domains.txt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Multiple hostnames can be listed there as well.&lt;br /&gt;
&lt;br /&gt;
Generate the certificate:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 # dehydrated --register --accept-terms&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 # dehydrated -c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The certificates are then located in the directory /var/lib/dehydrated/&lt;br /&gt;
&lt;br /&gt;
A cron job is created for this by default as well.&lt;br /&gt;
&lt;br /&gt;
=== Always encrypt ===&lt;br /&gt;
&lt;br /&gt;
If you always want to redirect from http to https, you can add the following to your config:&lt;br /&gt;
&lt;br /&gt;
 RewriteCond %{HTTPS} !=on&lt;br /&gt;
 RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}  [L]&lt;br /&gt;
&lt;br /&gt;
Alternatively, the following also works:&lt;br /&gt;
 &amp;lt;IfModule mod_ssl.c&amp;gt;&lt;br /&gt;
        &amp;lt;IfModule mod_rewrite.c&amp;gt;&lt;br /&gt;
                RewriteEngine On&lt;br /&gt;
                RewriteRule (.*) https://%{HTTP_HOST}$1 [R,L]&lt;br /&gt;
        &amp;lt;/IfModule&amp;gt;&lt;br /&gt;
 &amp;lt;/IfModule&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Those were the first steps with Apache. Have fun configuring and experimenting!&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=669</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=669"/>
		<updated>2026-03-10T13:51:04Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
updated for version: 4.19&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Resources =&lt;br /&gt;
(in general)&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)         (core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not included; list them separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
= Nodes =&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (and of a specific unit)&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs limits:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic picture, run:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Draining nodes ==&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before a reboot)&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not strictly necessary: draining, see below, cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Machines =&lt;br /&gt;
&lt;br /&gt;
Show Uptime:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Machinesets ==&lt;br /&gt;
&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
== Delete and re-create machines/nodes ==&lt;br /&gt;
 oc get machines -A | grep worker-&amp;lt;XY&amp;gt; | wc -l&lt;br /&gt;
-&amp;gt; MACHINECOUNT&lt;br /&gt;
 oc annotate machine/&amp;lt;machine-name&amp;gt; -n openshift-machine-api machine.openshift.io/delete-machine=&amp;quot;true&amp;quot;&lt;br /&gt;
 oc scale --replicas=&amp;lt;$MACHINECOUNT+1&amp;gt; machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
 oc scale --replicas=$MACHINECOUNT machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
= Projects/Namespaces =&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
return to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com &lt;br /&gt;
* registry.redhat.io  (with login only)&lt;br /&gt;
* quay.io&lt;br /&gt;
* docker.io&lt;br /&gt;
&lt;br /&gt;
= Images =&lt;br /&gt;
&lt;br /&gt;
Search images with the help of podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look into images:&lt;br /&gt;
 oc image info registry.redhat.io:8443/ubi8/httpd-24:1-209 (-o json | jq -r .digest)&lt;br /&gt;
&lt;br /&gt;
Update the image of a running deployment:&lt;br /&gt;
 oc set image deployment/mydb mariadb-80=docker.io/ubuntu18/mysql-80:1-228&lt;br /&gt;
&lt;br /&gt;
Inspect images directly on a node:&lt;br /&gt;
 crictl images&lt;br /&gt;
 crictl ps --name httpd-24 -o yaml&lt;br /&gt;
 crictl images --digests &amp;lt;shasum&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you have an account for a registry:&lt;br /&gt;
&lt;br /&gt;
  skopeo login &amp;lt;registry&amp;gt;:8443 -u &amp;lt;username&amp;gt;&lt;br /&gt;
  skopeo inspect docker://registry.redhat.io:8443/ubi8/httpd-24:1-209&lt;br /&gt;
  skopeo inspect --config docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Add the &amp;quot;latest&amp;quot; tag to a dedicated image: &lt;br /&gt;
 skopeo copy docker://registry.redhat.io:8443/ubi8/httpd-24:1-215  docker://registry.redhat.io:8443/ubi8/httpd-24:latest&lt;br /&gt;
&lt;br /&gt;
== Create pod from image ==&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
= Apps =&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
= Deployments =&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
== Environment variables ==&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1  MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better to use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
 oc set env deployment/mariadb --from=secret/my-secret (--prefix=MYSQL_)&lt;br /&gt;
&lt;br /&gt;
== Restart deployment after change ==&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;in older versions the deployment resource had no rollout option -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G )&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make deployment available from inside/outside ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
== Add probes ==&lt;br /&gt;
Configure readiness probe for deployment:&lt;br /&gt;
 $ oc set probe deployment/&amp;lt;testdeploy&amp;gt; --readiness --failure-threshold 7 --get-url http://:3000/api/health&lt;br /&gt;
&lt;br /&gt;
== Autoscale Pods ==&lt;br /&gt;
 $ oc autoscale deployment/test --min 2 --max 10 --cpu-percent 80&lt;br /&gt;
&lt;br /&gt;
== Reduce/Upgrade cpu/mem requests ==&lt;br /&gt;
Reduce memory requests:&lt;br /&gt;
 $ oc set resources deployment/huge-mem --requests memory=250Mi&lt;br /&gt;
&lt;br /&gt;
== Security ==&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift because the default security context constraints do not allow apps to run on privileged ports (&amp;lt; 1024).&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or grant the required permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(To revert this setting later, edit the annotations field of the deployment and re-create the pod.)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Pods =&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers:&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only the pods belonging to the DeploymentConfig mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
= Jobs and Cronjobs =&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &#039;if [ $(date +%H) -gt 15 ]; then echo &amp;quot;Hands up, weekend!&amp;quot;; fi&#039;&lt;br /&gt;
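The hour check from that cronjob can be tried locally; a small sketch with the threshold wrapped in a function (names are illustrative):&lt;br /&gt;

```shell
# The check the example cronjob runs: print a message only after 15:00.
# Wrapped in a function so a fixed hour can be passed in for testing.
weekend_msg() {
  if [ "$1" -gt 15 ]; then
    printf '%s\n' 'Hands up, weekend!'
  fi
}
weekend_msg "$(date +%H)"
```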
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Secrets =&lt;br /&gt;
== Create Secret ==&lt;br /&gt;
=== from String ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;test&amp;gt; --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
=== from file ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;sshkeys&amp;gt; --from-file id_rsa=/path-to/id_rsa --from-file id_rsa.pub=/path-to/id_rsa.pub&lt;br /&gt;
&lt;br /&gt;
=== as TLS secret ===&lt;br /&gt;
 $ oc create secret tls &amp;lt;secret-tls&amp;gt; --cert /tmp/mydomain.crt --key /tmp/mydomain.key&lt;br /&gt;
&lt;br /&gt;
=== Update Secret ===&lt;br /&gt;
 $ oc set data secret/&amp;lt;mysecret&amp;gt; --from-file /tmp/root-password&lt;br /&gt;
&lt;br /&gt;
=== Extract secret ===&lt;br /&gt;
 $ oc extract secret/&amp;lt;mysecret&amp;gt; --to /tmp/mysecret (--confirm)&lt;br /&gt;
&lt;br /&gt;
= Configmaps =&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets (OpenShift SDN only)&lt;br /&gt;
 $ oc get egressips&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Changes with &#039;&#039;&#039;patch&#039;&#039;&#039; command =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten !&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
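The payload of this patch can be reproduced step by step: the new config is base64-encoded without line wraps (-w0, GNU coreutils) and embedded into a JSON merge patch. A sketch with an inline sample config:&lt;br /&gt;

```shell
# Encode a sample config the same way the patch command does and
# embed it under data.config.yaml of the secret.
cfg='global:
  resolve_timeout: 5m'
b64=$(printf '%s' "$cfg" | base64 -w0)
patch='{"data": {"config.yaml": "'$b64'"}}'
printf '%s\n' "$patch"
```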
&lt;br /&gt;
== Examples ==&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
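The same filter chain can be illustrated on sample lines (the log content is invented, only the grep pipeline matches the command above):&lt;br /&gt;

```shell
# Keep only probe-related kubelet messages, then narrow down to one minute.
logs='Aug 21 11:22:01 worker kubelet: Liveness probe failed: HTTP 500
Aug 21 11:22:03 worker kubelet: Readiness probe failed: connection refused
Aug 21 11:23:09 worker kubelet: Started container app'
hits=$(printf '%s\n' "$logs" | grep -E 'Liveness|Readiness' | grep 'Aug 21 11:22')
printf '%s\n' "$hits"
```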
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://docs.redhat.com/en/documentation/openshift_container_platform/4.21/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(You may want to delete secrets from the result before sharing it!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
The namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for leftover secrets that are not deletable.&lt;br /&gt;
Set their finalizers to an empty list:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the system.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines under the container entry in the &amp;quot;spec&amp;quot; section:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;mycontainer&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must grant the anyuid SCC to the service account under which the deployment runs. If nothing else is configured, this is normally the default service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
The cluster is installed by the installer; the user only provides some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User-provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already exist&lt;br /&gt;
* manual installation; download the ova file (in case of vSphere)&lt;br /&gt;
* masters created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, uses preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
Changing from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can list more resource short names with:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip || installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=Emailserver_mit_Postfix_und_Dovecot&amp;diff=668</id>
		<title>Emailserver mit Postfix und Dovecot</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=Emailserver_mit_Postfix_und_Dovecot&amp;diff=668"/>
		<updated>2026-02-22T21:28:22Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Greylisting und Antispam */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Postfix =&lt;br /&gt;
&lt;br /&gt;
Postfix is an MTA (Mail Transfer Agent) and a good alternative to other common MTAs (Sendmail, Exim), since its configuration is easy to read. &lt;br /&gt;
In our example the MTA is combined with an IMAP daemon (Dovecot), so that users get a mailbox directly on the system. Mail is fetched via IMAPS.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
First, the package &amp;quot;postfix&amp;quot; must be installed.&lt;br /&gt;
A few questions have to be answered during installation:&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Type of server:&#039;&#039;&#039;	Internet Site&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Root and postmaster mail recipient:&#039;&#039;&#039; enter a mailbox, e.g. postmaster@example.de&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Other destinations to accept mail for (blank for none):&#039;&#039;&#039; e.g. mail.example.de, localhost, $mydomain (the defaults are fine for a start)&lt;br /&gt;
* &#039;&#039;&#039;Force synchronous updates on mail queue:&#039;&#039;&#039; no&lt;br /&gt;
* &#039;&#039;&#039;Local subnets:&#039;&#039;&#039; 127.0.0.0/8, 192.168.63.0/24 (add your own network here)&lt;br /&gt;
* &#039;&#039;&#039;Mailbox size limit:&#039;&#039;&#039; 0 (unlimited)&lt;br /&gt;
* &#039;&#039;&#039;Local address extension character:&#039;&#039;&#039; + (the default is usually sufficient)&lt;br /&gt;
* &#039;&#039;&#039;Internet protocols to use:&#039;&#039;&#039; all (unless you explicitly want only IPv4 or IPv6)&lt;br /&gt;
&lt;br /&gt;
These settings can be changed at any time with&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
 # dpkg-reconfigure postfix&amp;lt;/console&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All relevant files are located in the directory &#039;&#039;/etc/postfix&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Konfiguration ==&lt;br /&gt;
&lt;br /&gt;
Before we get to the Postfix config, we check the content of the file /etc/mailname:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 $ cat /etc/mailname&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It must contain &#039;&#039;&#039;only the domain name&#039;&#039;&#039;, not the hostname (e.g. example.com). Otherwise mail delivery can be affected, especially when main.cf (see below) references this file.&lt;br /&gt;
&lt;br /&gt;
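To check which value belongs into /etc/mailname, the domain part can be cut off the FQDN; a sketch with an example hostname:&lt;br /&gt;

```shell
# /etc/mailname must hold the bare domain; strip the host label
# from an FQDN (mx01.example.de is an example value).
fqdn='mx01.example.de'
maildomain=$(printf '%s' "$fqdn" | cut -d. -f2-)
printf '%s\n' "$maildomain"
```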
The most important file to adapt first is &#039;&#039;&#039;main.cf&#039;&#039;&#039;. Here is an example for the server &amp;quot;mx&amp;quot; in the domain example.de. The following parameters should be configured (exemplary):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 myhostname&lt;br /&gt;
 mydomain&lt;br /&gt;
 myorigin&lt;br /&gt;
 mydestination&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usually there are already some usable defaults. The parameter &#039;&#039;mynetworks&#039;&#039; allows specific networks to submit mail without further restrictions.&lt;br /&gt;
&lt;br /&gt;
Example config:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
myhostname = mx01.example.de &lt;br /&gt;
mydomain = example.de &lt;br /&gt;
myorigin = $mydomain &lt;br /&gt;
mydestination = $myhostname, localhost, localhost.$mydomain&lt;br /&gt;
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.99.0/24 [2001:780:11b::/48] 214.94.24.154 [2004:780:8:0:5ff0:c5ff:fe09:98f9]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Especially the parameter &#039;&#039;&#039;mynetworks&#039;&#039;&#039; should be chosen with care: these are all hosts and networks from which any mail is accepted (including mail from &#039;&#039;and&#039;&#039; to foreign domains). A misconfiguration here quickly turns the server into an [https://practical365.com/what-is-an-open-relay open relay].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; IPv6 addresses must be written in [square brackets], otherwise no mail is delivered!&lt;br /&gt;
Error in the logfile:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
postfix/smtpd[21928]: warning: smtpd_client_event_limit_exceptions: 2a01:40f8:c013:5050::0/64: table lookup problem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After every change, the service must be reloaded with&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
 # postfix reload&lt;br /&gt;
&amp;lt;/console&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whether the reload worked properly can be checked, for example, in the log:&lt;br /&gt;
&amp;lt;console&amp;gt;&lt;br /&gt;
 # tail /var/log/mail.log&lt;br /&gt;
&amp;lt;/console&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== master.cf ===&lt;br /&gt;
The core of the Postfix daemon. The transports are defined here.&lt;br /&gt;
Meaning of the columns:&lt;br /&gt;
* service field: name of the service (smtp, local, procmail, ...) (str)&lt;br /&gt;
* type field: connection type (inet, fifo, unix) (str)&lt;br /&gt;
* private field: service only accessible from within the mail system (default: y) (bool)&lt;br /&gt;
* unpriv field: start as unprivileged user (default: y) or as root (n) (bool)&lt;br /&gt;
* chroot: whether the service is started in a chroot environment (default: y) (bool)&lt;br /&gt;
* wakeup field: seconds between two invocations (default: 0) (int)&lt;br /&gt;
* process limit: maximum number of concurrent processes (default: 50) (int)&lt;br /&gt;
These columns are followed by a command with optional flags and parameters.&lt;br /&gt;
&lt;br /&gt;
== TLS ==&lt;br /&gt;
Optionally, encrypted mail transfer can be configured with certificates. Encryption is only used if the remote mail server supports TLS as well. The certificate can also be used by a mail client (see later).&lt;br /&gt;
Certificates can be generated with [https://letsencrypt.org letsencrypt]. Clients for certificate generation are [[Webserver_mit_Apache#Alternative_letsencrypt | certbot ]] or [[Webserver_mit_Apache#Dehydrated | dehydrated]].&lt;br /&gt;
&lt;br /&gt;
=== Certificate generation in short ===&lt;br /&gt;
 # echo $HOSTNAME &amp;gt; /etc/dehydrated/domains.txt&lt;br /&gt;
 # dehydrated --register --accept-terms&lt;br /&gt;
 # dehydrated -c&lt;br /&gt;
&lt;br /&gt;
This, however, requires a [[Webserver_mit_Apache | web server]] listening on port 80. If there is none, letsencrypt can alternatively be used via DNS (https://letsencrypt.org/docs/challenge-types).&lt;br /&gt;
&lt;br /&gt;
=== Alternative: own CA (not recommended) ===&lt;br /&gt;
&lt;br /&gt;
If you really want to run your own CA, you can do so with the following instructions. Warning: choose this approach only if there is a compelling reason. Many browsers and MUAs have problems with it, produce ugly warnings, or refuse the site altogether.&lt;br /&gt;
&lt;br /&gt;
==== Creating the CA ====&lt;br /&gt;
If no certificate exists yet, you can create one yourself, or create a CSR (Certificate Signing Request) and send it to an official CA. For a commercially used mail server, the latter is the realistic way.&lt;br /&gt;
&lt;br /&gt;
Note: this step is not needed if a CA already exists.&lt;br /&gt;
&lt;br /&gt;
Steps:&amp;lt;br&amp;gt;&lt;br /&gt;
Create a key (and assign a password for it), create a certificate request for the CA, create the CA&lt;br /&gt;
&lt;br /&gt;
 $ openssl genrsa -out ca.key -des3 4096&lt;br /&gt;
  &lt;br /&gt;
 $ openssl req -new -x509 -days 3650 -key ca.key -out ca.crt&lt;br /&gt;
&lt;br /&gt;
 Enter pass phrase for ca.key: &lt;br /&gt;
 You are about to be asked to enter information that will be incorporated &lt;br /&gt;
 into your certificate request. &lt;br /&gt;
 What you are about to enter is what is called a Distinguished Name or a DN. &lt;br /&gt;
 There are quite a few fields but you can leave some blank &lt;br /&gt;
 For some fields there will be a default value, &lt;br /&gt;
 If you enter &#039;.&#039;, the field will be left blank. &lt;br /&gt;
 ----- &lt;br /&gt;
 Country Name (2 letter code) [AU]:DE &lt;br /&gt;
 State or Province Name (full name) [Some-State]:Bavaria &lt;br /&gt;
 Locality Name (eg, city) []:Nuernberg &lt;br /&gt;
 Organization Name (eg, company) [Internet Widgits Pty Ltd]:example.de &lt;br /&gt;
 Organizational Unit Name (eg, section) []:Hostmaster &lt;br /&gt;
 Common Name (e.g. server FQDN or YOUR name) []:*.example.de &lt;br /&gt;
 Email Address []:postmaster@example.de&lt;br /&gt;
&lt;br /&gt;
==== Creating a certificate with the new CA ====&lt;br /&gt;
Steps:&lt;br /&gt;
* create a key&lt;br /&gt;
* create a request&lt;br /&gt;
* create and sign the certificate&lt;br /&gt;
&lt;br /&gt;
 $ openssl genrsa -out mx.example.de.key 4096&lt;br /&gt;
(do not set a password)&lt;br /&gt;
  &lt;br /&gt;
 $ openssl req -new -key mx.example.de.key -out mx.example.de.csr&lt;br /&gt;
(fill in the form again as usual)&lt;br /&gt;
  &lt;br /&gt;
 $ openssl x509 -req -days 365 -in mx.example.de.csr -CA  ca.crt -CAkey ca.key -set_serial 01 -out mx.example.de.crt&lt;br /&gt;
&lt;br /&gt;
When renewing the certificate, the steps &amp;quot;creating the CA&amp;quot; and creating the key are skipped. Furthermore, no serial has to be given anymore.&lt;br /&gt;
The renew command is therefore:&lt;br /&gt;
 $ openssl x509 -req -days 730 -in mx.example.de.csr -CA ca.crt -CAkey ca.key -out mx.example.de.crt&lt;br /&gt;
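The whole own-CA flow can also be run non-interactively with -subj instead of the question form; a sketch in a temporary directory (2048-bit keys and an unencrypted CA key to keep it short, unlike the passphrase-protected 4096-bit keys above):&lt;br /&gt;

```shell
# Create a CA, a server key and CSR, sign the CSR with the CA,
# then verify the resulting certificate against the CA.
dir=$(mktemp -d)
openssl genrsa -out "$dir/ca.key" 2048
openssl req -new -x509 -days 3650 -key "$dir/ca.key" -out "$dir/ca.crt" -subj '/CN=Example CA'
openssl genrsa -out "$dir/mx.example.de.key" 2048
openssl req -new -key "$dir/mx.example.de.key" -out "$dir/mx.example.de.csr" -subj '/CN=mx.example.de'
openssl x509 -req -days 365 -in "$dir/mx.example.de.csr" -CA "$dir/ca.crt" -CAkey "$dir/ca.key" -set_serial 01 -out "$dir/mx.example.de.crt"
openssl verify -CAfile "$dir/ca.crt" "$dir/mx.example.de.crt"
```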
&lt;br /&gt;
=== Adding the certificates to the config file ===&lt;br /&gt;
This step again applies to all certificates, no matter how they were created. Of course the paths have to be adapted accordingly.&lt;br /&gt;
&lt;br /&gt;
Copy the certificates to the appropriate location and reference them in the config:&lt;br /&gt;
&lt;br /&gt;
 smtpd_tls_cert_file=/etc/ssl/certs/mx.example.de.crt &lt;br /&gt;
 smtpd_tls_key_file=/etc/ssl/private/mx.example.de.key &lt;br /&gt;
 smtpd_tls_CAfile=/etc/postfix/ca.crt &lt;br /&gt;
 smtpd_use_tls=yes &lt;br /&gt;
 smtpd_tls_session_cache_database = btree: &lt;br /&gt;
   ${data_directory}/smtpd_cache&lt;br /&gt;
 smtp_tls_session_cache_database = btree:  &lt;br /&gt;
   ${data_directory}/smtp_scache&lt;br /&gt;
&lt;br /&gt;
The parameter smtpd_tls_CAfile can be omitted if an official CA is used (e.g. letsencrypt).&amp;lt;br&amp;gt;&lt;br /&gt;
The paths to the certificates may differ; with letsencrypt they are located e.g. under &#039;&#039;/var/lib/dehydrated/certs/&#039;&#039;.&lt;br /&gt;
Verification:&lt;br /&gt;
 $ openssl s_client -connect mx.example.de:25 -starttls smtp&lt;br /&gt;
or https://www.checktls.com/&lt;br /&gt;
&lt;br /&gt;
Addendum:&lt;br /&gt;
Some MTAs want the certificate in pfx form, which can be created from the cert file as follows:&lt;br /&gt;
 $ openssl pkcs12 -export -out mx.example.de.pfx -inkey mx.example.de.key -in mx.example.de.crt&lt;br /&gt;
Hand the pfx file and its password over to the user.&lt;br /&gt;
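The pfx export can be tried with a throwaway key/certificate pair; the password and names below are examples (-passout avoids the interactive prompt):&lt;br /&gt;

```shell
# Create a self-signed pair, bundle it into PKCS#12, and read the
# bundle back to confirm it contains the certificate.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$dir/mx.key" -out "$dir/mx.crt" -days 1 -subj '/CN=mx.example.de'
openssl pkcs12 -export -out "$dir/mx.pfx" -inkey "$dir/mx.key" -in "$dir/mx.crt" -passout pass:changeit
openssl pkcs12 -in "$dir/mx.pfx" -passin pass:changeit -nokeys -nodes | grep 'BEGIN CERTIFICATE'
```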
&lt;br /&gt;
&#039;&#039;&#039;Spoiler:&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
[[ Emailserver_mit_Postfix_und_Dovecot#Dovecot | Dovecot ]] sometimes &amp;quot;forgets&amp;quot; the new certificate and keeps the old version, i.e. most mail clients then no longer fetch new mail. In that case, restart the Dovecot service.&lt;br /&gt;
&lt;br /&gt;
== SASL ==&lt;br /&gt;
&lt;br /&gt;
To avoid an open relay, it is strongly advised to allow submission with sender @example.de only from certain networks by default. This is done with the parameter &#039;&#039;mynetworks&#039;&#039; (see above).&lt;br /&gt;
Of course it can happen that users want to send mail from a mail client somewhere on the internet; according to this config they would not be allowed to. Since most PCs use dynamic addresses, it also makes no sense to add their IP addresses to the config.&amp;lt;br&amp;gt;&lt;br /&gt;
The problem can be circumvented by allowing mail submission once the user has successfully authenticated against the IMAP server.&lt;br /&gt;
This is what SASL is for. The corresponding entries in main.cf are:&lt;br /&gt;
 smtpd_relay_restrictions = permit_mynetworks  &lt;br /&gt;
 permit_sasl_authenticated defer_unauth_destination&lt;br /&gt;
 smtpd_sasl_auth_enable = yes &lt;br /&gt;
 smtpd_sasl_type = dovecot &lt;br /&gt;
 smtpd_sasl_local_domain = $mydomain &lt;br /&gt;
 smtpd_sasl_security_options = noanonymous &lt;br /&gt;
 smtpd_sasl_path = private/auth&lt;br /&gt;
&lt;br /&gt;
In the mail client this shows up as having to enter the mailbox password when sending a message for the first time.&lt;br /&gt;
Before this feature is activated, an IMAP server must exist (see [[#Dovecot|next chapter]]).&lt;br /&gt;
&lt;br /&gt;
== Maps ==&lt;br /&gt;
&lt;br /&gt;
To better control what happens with which sender/recipient addresses, the configuration is split into so-called &amp;quot;maps&amp;quot;. These can be plain-text files or Berkeley DB files. In the latter case they must be converted with the command &#039;&#039;&#039;postmap&#039;&#039;&#039; after every edit.&amp;lt;br&amp;gt;&lt;br /&gt;
Exception: the file /etc/aliases.db (only relevant for local mail delivery) is generated with the command &#039;&#039;&#039;postalias&#039;&#039;&#039; or &#039;&#039;&#039;newaliases&#039;&#039;&#039;.&lt;br /&gt;
 alias_maps = hash:/etc/aliases &lt;br /&gt;
 alias_database = hash:/etc/aliases &lt;br /&gt;
Aliases that map to another mailbox are set up here. Example:&lt;br /&gt;
 postmaster:    root&lt;br /&gt;
&lt;br /&gt;
=== Access ===&lt;br /&gt;
 smtpd_sender_restrictions = hash:/etc/postfix/access &lt;br /&gt;
Actions for specific sender addresses can be set up here. Example:&lt;br /&gt;
 example.com     DISCARD&lt;br /&gt;
&lt;br /&gt;
=== Relocated ===&lt;br /&gt;
 relocated_maps = hash:/etc/postfix/relocated &lt;br /&gt;
&lt;br /&gt;
Rejects the mail with a notice. Example:&lt;br /&gt;
 testy.test      &amp;quot;Please send mail for this address to ich@hier.de instead&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Result:&lt;br /&gt;
 &amp;lt;testy.test@example.de&amp;gt;: Recipient address rejected: User has &lt;br /&gt;
 moved to &amp;quot;Please send mail for this address to ich@hier.de&lt;br /&gt;
 instead&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Canonical ===&lt;br /&gt;
==== Sender ====&lt;br /&gt;
 sender_canonical_maps = hash:/etc/postfix/sender_canonical &lt;br /&gt;
&lt;br /&gt;
Certain addresses are rewritten to a common standard format:&lt;br /&gt;
&lt;br /&gt;
 sunflower@example.de   petra.sonne@example.de&lt;br /&gt;
 phun@work.de		   peter.hun@example.de&lt;br /&gt;
&lt;br /&gt;
==== Recipient ====&lt;br /&gt;
 recipient_canonical_maps = hash:/etc/postfix/recipient_canonical &lt;br /&gt;
Works exactly like sender_canonical, but for recipient addresses.&lt;br /&gt;
&lt;br /&gt;
=== Virtual Mailbox ===&lt;br /&gt;
 virtual_mailbox_maps = hash:/etc/postfix/virtual&lt;br /&gt;
Locations of the mailboxes of the IMAP server (details under [[ Emailserver_mit_Postfix_und_Dovecot#Dovecot | Dovecot ]])&lt;br /&gt;
&lt;br /&gt;
 sunflower@example.de   	example.de/sunflower/ &lt;br /&gt;
 testy@example.de		example.de/testy &lt;br /&gt;
 test@example.de		example.de/test &lt;br /&gt;
 lmaa@ihr-koennt-mich-alle.de	ihr-koennt-mich-alle.de/lmaa&lt;br /&gt;
&lt;br /&gt;
=== Virtual Aliases ===&lt;br /&gt;
 virtual_alias_maps = hash:/etc/postfix/virtual_maps&lt;br /&gt;
&lt;br /&gt;
Addresses that are redirected to other addresses (similar to the aliases); this can also happen across domains.&lt;br /&gt;
This way, several recipient addresses can end up in the same mailbox.&lt;br /&gt;
&lt;br /&gt;
 anrufbeantworter@example.de      sunflower@example.de,H.Hirsch@gmx.de,harry1999@yahoo.de&lt;br /&gt;
 info@example.de        sunflower@example.de&lt;br /&gt;
 postmaster    root &lt;br /&gt;
 webmaster     root &lt;br /&gt;
 administrator root&lt;br /&gt;
 root    sunflower&lt;br /&gt;
 fortune:  fortune&lt;br /&gt;
The last entry is a pipe; more on that later.&lt;br /&gt;
&lt;br /&gt;
Convert the text file into a DB file and make Postfix reread it:&lt;br /&gt;
 postmap &amp;lt;aliases|access|canonical|...&amp;gt;&lt;br /&gt;
 postfix reload&lt;br /&gt;
&lt;br /&gt;
=== Simple example: forwarding emails from one domain to another ===&lt;br /&gt;
Suppose we have an email server in the domain example.com. It should forward all emails arriving for &amp;lt;userXY&amp;gt;@example.com to &amp;lt;userXY&amp;gt;@example.de. Again, this requires an entry in the &#039;&#039;virtual_maps&#039;&#039; file mentioned above:&lt;br /&gt;
 @example.com	@example.de&lt;br /&gt;
Now all example.com emails are forwarded to the mail server responsible for example.de. The user part remains unchanged.&lt;br /&gt;
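Note: according to the virtual(5) lookup order, a full-address entry is matched before the @domain catch-all, so individual exceptions can be listed alongside it (hypothetical entries):&lt;br /&gt;

```
# in /etc/postfix/virtual_maps: the more specific entry wins
info@example.com    someone-else@example.org
@example.com        @example.de
```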
&lt;br /&gt;
=== Transports ===&lt;br /&gt;
Transports define how an incoming mail is handled, e.g. stored locally in a file, handed over to an IMAP server, or piped into a script.&lt;br /&gt;
&lt;br /&gt;
An example: &amp;lt;br&amp;gt;&lt;br /&gt;
When mail is sent to a specific address, a script should run that sends a random phrase back to the sender &#039;&#039;&#039;and&#039;&#039;&#039; at the same time delivers the mail into a mailbox.&lt;br /&gt;
Steps:&lt;br /&gt;
&lt;br /&gt;
1. Define an alias (virtual_maps):&lt;br /&gt;
&lt;br /&gt;
 fortune@example.de           fortune&lt;br /&gt;
&lt;br /&gt;
2. Map the alias to a transport (transports):&lt;br /&gt;
&lt;br /&gt;
 fortune@example.de     randomphrase:&lt;br /&gt;
&lt;br /&gt;
3. Define the transport (master.cf):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 randomphrase      unix  - n n - - pipe &lt;br /&gt;
  flags=h user=vmail:vmail argv=/usr/local/bin/randomphrase.pl ${sender}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(The user vmail must of course exist in the passwd file, e.g. like this:&lt;br /&gt;
 vmail:x:4000:4000::/home/vmail:/usr/sbin/nologin&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
4. Install the script:&lt;br /&gt;
 /usr/local/bin/randomphrase.pl&lt;br /&gt;
and make it executable for everyone.&lt;br /&gt;
&lt;br /&gt;
The script [[ randomphrase.pl ]] generates a random phrase; this requires the package &#039;&#039;fortune-mod&#039;&#039; to be installed. &lt;br /&gt;
To deliver the email, the script /usr/local/bin/deliver_mail.sh is called. ([[File:Deliver_mail.sh]])&lt;br /&gt;
&amp;lt;br&amp;gt; For this, the user vmail must be authorized in the file &#039;&#039;/etc/sudoers.d/vmail&#039;&#039;:&lt;br /&gt;
 vmail ALL=(root) NOPASSWD: /usr/local/bin/deliver_mail.sh&lt;br /&gt;
&lt;br /&gt;
An email to the address fortune@example.de now triggers a reply to the sender address containing a random phrase.&lt;br /&gt;
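The core of such a pipe script can be sketched as follows (a minimal sketch, not the actual randomphrase.pl; it only assumes that the pipe transport passes ${sender} as the first argument and the raw message on stdin):&lt;br /&gt;

```shell
# Minimal sketch of a pipe-transport reply helper. It reads the message from
# stdin, extracts the first Subject header and builds the reply text; a real
# script would pipe the result into sendmail(1) and fetch the phrase from
# fortune.
reply_body() {
  sender="$1"                                      # ${sender} from master.cf
  subject=$(sed -n 's/^Subject: //p' | sed -n 1p)  # first Subject header
  printf 'To: %s\nSubject: Re: %s\n\nYour random phrase would go here.\n' \
    "$sender" "$subject"
}
```

usage: `printf 'Subject: hi\n\nbody\n' | reply_body someone@example.de | sendmail -t`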
&lt;br /&gt;
== Multidomain ==&lt;br /&gt;
&lt;br /&gt;
Of course Postfix can also accept emails for several domains. This is what the parameter &amp;quot;virtual_mailbox_domains&amp;quot; is for:&lt;br /&gt;
&lt;br /&gt;
 virtual_mailbox_domains = example.de  example.com  ihr-koennt-mich-alle.de&lt;br /&gt;
The variable $mydomain should then be removed from mydestination.&lt;br /&gt;
&lt;br /&gt;
== Special DNS Records ==&lt;br /&gt;
=== SPF (Sender Policy Framework) ===&lt;br /&gt;
With a TXT resource record you can define a list of email servers that are allowed to use the email domain as sender. If someone generates a fake email from another system, it can be rejected.&lt;br /&gt;
&lt;br /&gt;
Example of a DNS TXT record:&lt;br /&gt;
 IN TXT  &amp;quot;v=spf1 mx:example.de a:foo.example.de ip4:8.15.47.11/32 ip6:2008:15:5:47::11/48 ip6:2008:15:5:47::12/48 -all&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Howto: &amp;lt;br&amp;gt;&lt;br /&gt;
https://dmarcian.com/create-spf-record/ &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.open-spf.org/SPF_Record_Syntax/&lt;br /&gt;
&lt;br /&gt;
Integrating SPF into Postfix:&lt;br /&gt;
&lt;br /&gt;
The domain is now protected against abuse by fake emails. But there is also the other side to consider: Postfix should likewise check the SPF records of other email domains and reject mail if necessary.&lt;br /&gt;
https://makeityourway.de/enabling-spf-sender-policy-framework-checking-on-postfix/&lt;br /&gt;
&lt;br /&gt;
In short, here is what to watch out for:&lt;br /&gt;
 # apt install postfix-policyd-spf-python&lt;br /&gt;
The config file &#039;&#039;/etc/postfix-policyd-spf-python/policyd-spf.conf&#039;&#039; already ships with usable defaults; optionally you can add a whitelist, e.g.&lt;br /&gt;
&lt;br /&gt;
 Domain_Whitelist = example.com &lt;br /&gt;
Add to master.cf:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 policyd-spf  unix  -     n       n       -       -       spawn &lt;br /&gt;
  user=policyd-spf argv=/usr/bin/policyd-spf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add to main.cf:&lt;br /&gt;
 smtpd_recipient_restrictions =&lt;br /&gt;
 (...)&lt;br /&gt;
 check_policy_service unix:private/policyd-spf&lt;br /&gt;
 (...)&lt;br /&gt;
&#039;&#039;&#039;Caution:&#039;&#039;&#039; If there already is a check_policy_service entry, do &#039;&#039;&#039;not&#039;&#039;&#039; append to it; put the new service on its own line!&lt;br /&gt;
 policyd-spf_time_limit = 3600s&lt;br /&gt;
&lt;br /&gt;
 # postfix reload&lt;br /&gt;
&lt;br /&gt;
Send in a few test emails and watch mail.log.&lt;br /&gt;
&lt;br /&gt;
=== DMARC (Domain based Message Authentication, Reporting and Conformance) ===&lt;br /&gt;
https://dmarcian.com/dmarc-record/&lt;br /&gt;
&lt;br /&gt;
Example of a DNS TXT record:&lt;br /&gt;
 _dmarc                  IN TXT      &amp;quot;v=DMARC1;p=quarantine;rua=mailto:postmaster@example.de&amp;quot;&lt;br /&gt;
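For a first rollout it is common to start in pure monitoring mode: with p=none nothing is quarantined, only reports are requested (a variant of the record above):&lt;br /&gt;

```
_dmarc                  IN TXT      "v=DMARC1;p=none;rua=mailto:postmaster@example.de"
```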
&lt;br /&gt;
In this case, suspicious emails are moved to a quarantine folder and a report is sent to the postmaster.&lt;br /&gt;
For the integration into Postfix there is the package opendmarc.&lt;br /&gt;
Implementing SPF, DKIM and DMARC in Postfix:&lt;br /&gt;
&lt;br /&gt;
https://www.skelleton.net/2015/03/21/how-to-eliminate-spam-and-protect-your-name-with-dmarc/&lt;br /&gt;
(untested)&lt;br /&gt;
&lt;br /&gt;
== Useful Commands ==&lt;br /&gt;
Create a database file from a text file:&lt;br /&gt;
 postmap &amp;lt;filename&amp;gt;&lt;br /&gt;
Show all config parameters:&lt;br /&gt;
 postconf&lt;br /&gt;
Check the configuration:&lt;br /&gt;
 postfix check&lt;br /&gt;
Show the mail queue:&lt;br /&gt;
 mailq&lt;br /&gt;
Deliver all messages in the queue:&lt;br /&gt;
 postqueue -f&lt;br /&gt;
Deliver only one specific message:&lt;br /&gt;
 postqueue -i &amp;lt;ID&amp;gt;&lt;br /&gt;
Delete a message:&lt;br /&gt;
 postsuper -d &amp;lt;ID&amp;gt;&lt;br /&gt;
Delete all messages (!):&lt;br /&gt;
 postsuper -d ALL&lt;br /&gt;
Show the content of a message:&lt;br /&gt;
 postcat -vq &amp;lt;ID&amp;gt;&lt;br /&gt;
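The queue commands combine nicely, e.g. for deleting every queued message addressed to one recipient. A sketch, assuming the classic sendmail-style mailq listing (queue ID in column 1 of the header line, recipients on the following indented lines):&lt;br /&gt;

```shell
# Print the queue IDs of all messages addressed to a given recipient,
# suitable for feeding into `postsuper -d -`.
queue_ids_for() {
  rcpt="$1"
  awk -v r="$rcpt" '
    /^[0-9A-F]+[*!]?([[:space:]]|$)/ { id = $1; sub(/[*!]$/, "", id) }  # header line: remember queue ID
    $1 == r { if (id) { print id; id = "" } }                           # indented recipient line
  '
}
# usage: mailq | queue_ids_for spam@example.com | postsuper -d -
```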
&lt;br /&gt;
== Logfile ==&lt;br /&gt;
&lt;br /&gt;
Everything is logged to &#039;&#039;/var/log/mail.log&#039;&#039;; errors additionally go to &#039;&#039;/var/log/mail.err&#039;&#039; and warnings to &#039;&#039;/var/log/mail.warn&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;Pro tip: create an alias:&lt;br /&gt;
 alias maillog=&#039;tail -f /var/log/mail.log&#039;&lt;br /&gt;
&lt;br /&gt;
== Greylisting and Antispam ==&lt;br /&gt;
&lt;br /&gt;
To fight the flood of spam there is the handy program &#039;&#039;&#039;&amp;quot;Postgrey&amp;quot;&#039;&#039;&#039;. On Debian it can be installed as a package. It is hooked into main.cf in the smtpd_recipient_restrictions section.&lt;br /&gt;
 smtpd_recipient_restrictions = &lt;br /&gt;
  permit_mynetworks&lt;br /&gt;
  permit_sasl_authenticated&lt;br /&gt;
  permit_tls_clientcerts&lt;br /&gt;
  reject_unauth_destination&lt;br /&gt;
  &#039;&#039;&#039;reject_non_fqdn_sender&#039;&#039;&#039;&lt;br /&gt;
  &#039;&#039;&#039;reject_non_fqdn_recipient&#039;&#039;&#039;&lt;br /&gt;
  &#039;&#039;&#039;reject_rbl_client bl.spamcop.net&#039;&#039;&#039;&lt;br /&gt;
  &#039;&#039;&#039;check_policy_service inet:127.0.0.1:10023&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
(The blacklist &#039;&#039;dnsbl.sorbs.net&#039;&#039; was left out here, since it blocks pretty much everything, e.g. all yahoo or gmx addresses.)&lt;br /&gt;
For this to work, Postgrey itself must of course be brought up.&lt;br /&gt;
To do so, edit the file &#039;&#039;/etc/default/postgrey&#039;&#039;. An example:&lt;br /&gt;
 POSTGREY_OPTS=&amp;quot;--inet=10023 --auto-whitelist-clients=8&amp;quot; &lt;br /&gt;
 POSTGREY_TEXT=&amp;quot;Busy. Come back in 5 minutes.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So the service listens on port 10023. In the example above, a sender is automatically whitelisted on the 8th successful delivery attempt (optional parameter &#039;&#039;--auto-whitelist-clients&#039;&#039;; possibly raise the number or omit the parameter entirely).&lt;br /&gt;
&lt;br /&gt;
Then change into the directory &#039;&#039;/etc/postgrey&#039;&#039;. There are two whitelists there. Senders go into &#039;&#039;&#039;whitelist_clients&#039;&#039;&#039;, which already contains IPs and domains of various providers; you can add your own entries (e.g. example.ch).&lt;br /&gt;
&lt;br /&gt;
In the file &#039;&#039;&#039;whitelist_recipients&#039;&#039;&#039; you can list all recipients of your own domain that should always receive email in any case, e.g. postmaster@, abuse@.&amp;lt;br&amp;gt;&lt;br /&gt;
Note: &#039;&#039;&#039;the files must be referenced explicitly&#039;&#039;&#039;; this does not happen automatically. This is done via the POSTGREY_OPTS:&lt;br /&gt;
 POSTGREY_OPTS=&amp;quot;$POSTGREY_OPTS --whitelist-clients=/etc/postgrey/whitelist_clients --whitelist-recipients=/etc/postgrey/whitelist_recipients&amp;quot;&lt;br /&gt;
&lt;br /&gt;
After making the changes, (re)start the postgrey service:&lt;br /&gt;
 # service postgrey restart&lt;br /&gt;
Check that the service is running, e.g. with:&lt;br /&gt;
 # lsof -i :10023&lt;br /&gt;
Then reload Postfix&lt;br /&gt;
 # postfix reload&lt;br /&gt;
and watch the mailbox(es) for the amount of spam.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;(Source: article &amp;quot;Postzusteller&amp;quot;, Admin-Magazin, issue 03-2013)&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Dovecot =&lt;br /&gt;
&lt;br /&gt;
Open source IMAP server for delivering emails into mailboxes via POP3, IMAP, or IMAPS. Only IMAPS is covered below.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
It is advisable to install Dovecot on the same system as Postfix. Other setups are not covered here.&lt;br /&gt;
&lt;br /&gt;
Install the imapd with&lt;br /&gt;
 # apt install dovecot-imapd&lt;br /&gt;
&lt;br /&gt;
This is sufficient for all basic email delivery functions. For advanced options such as filtering, further dovecot packages like &#039;&#039;&#039;dovecot-antispam, dovecot-sieve&#039;&#039;&#039; can be installed.&lt;br /&gt;
&lt;br /&gt;
Create a user (here: &#039;&#039;vmail&#039;&#039;) as owner of the mailboxes:&lt;br /&gt;
&lt;br /&gt;
 useradd -u 4000 -m -d /home/vmail -s /usr/sbin/nologin vmail&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Adjust the config files in &#039;&#039;/etc/dovecot/conf.d&#039;&#039;.&lt;br /&gt;
By default, the file &#039;&#039;/etc/dovecot/dovecot.conf&#039;&#039; includes all files under conf.d/*.conf.&lt;br /&gt;
&lt;br /&gt;
=== User Management ===&lt;br /&gt;
&lt;br /&gt;
Here is an example where the users are kept in a separate file.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;10-auth.conf:&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
disable_plaintext_auth = no&lt;br /&gt;
auth_username_format = %n&lt;br /&gt;
auth_master_user_separator = *&lt;br /&gt;
auth_mechanisms = plain login &lt;br /&gt;
!include auth-master.conf.ext&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If auth is not done via PAM:&lt;br /&gt;
 #!include auth-system.conf.ext&lt;br /&gt;
&lt;br /&gt;
Plaintext auth can be allowed here: the passwords are stored only as hashes, the communication between Postfix and Dovecot stays on the same server anyway, and access from a MUA will be wrapped in TLS/SSL (see below).&lt;br /&gt;
&lt;br /&gt;
Create a master user (optional):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
doveadm pw -p supergeheim -s SHA256-CRYPT -u administrator@example.de &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Paste the output together with the username into the file master-users.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat  ../master-users     &lt;br /&gt;
administrator@example.de:{SHA256-CRYPT}$5$9zrt7/e2CDkPmSuA$SNEkm/L4XZcYFAbYkJp5ESl9u35fVBSd4ukO0dm5yp3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create the other users:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
doveadm pw -p strenggeheim -s SHA256-CRYPT -u sunflower@example.de&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
→ /etc/dovecot/users:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sunflower:{SHA256-CRYPT}$5$D3PhhtqUhRXT7cmZ$E5244BpvNafb.9FtbhF9AUfbvw8XpnOJhPyM/q/rRN2:::Sun Flo,,,:/var/mail/example.de/sunflower:/bin/false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Do not use shorthands like &#039;&#039;%d&#039;&#039; or &#039;&#039;%n&#039;&#039; here, because they are not interpreted at all (by sieve, see below) or only partially (by dovecot).&lt;br /&gt;
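Since the users file is passwd-like (colon-separated, field 1 the login, field 6 the mail home), a quick overview can be pulled out with awk; a small convenience sketch, not part of dovecot:&lt;br /&gt;

```shell
# List login and mail home for every account in a dovecot users file.
list_mail_users() {
  awk -F: '{ print $1 "\t" $6 }' "$1"
}
# usage: list_mail_users /etc/dovecot/users
```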
&lt;br /&gt;
So that the account can actually receive email, add it to the virtual table in the Postfix directory:&lt;br /&gt;
 echo &amp;quot;sunflower@example.de  example.de/spambucket&amp;quot; &amp;gt;&amp;gt; /etc/postfix/virtual&lt;br /&gt;
Activate with&lt;br /&gt;
 postmap virtual&lt;br /&gt;
 postfix reload&lt;br /&gt;
&lt;br /&gt;
=== File Permissions ===&lt;br /&gt;
&lt;br /&gt;
The files master-users and users should be readable by dovecot only!&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# chgrp dovecot /etc/dovecot/*users&lt;br /&gt;
# chmod o-r /etc/dovecot/*users&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Create the mailbox and give the user ownership:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# maildirmake.dovecot /var/mail/&amp;lt;username&amp;gt;&lt;br /&gt;
# chown -R vmail:vmail /var/mail/&amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Caution: the parameter %d is not read as the domain here but stays empty!&amp;lt;br&amp;gt;&lt;br /&gt;
=&amp;gt; mail_location = maildir:/var/mail/%d/%n &amp;lt;br&amp;gt;&lt;br /&gt;
therefore means that the emails end up in /var/mail/&amp;lt;username&amp;gt;, i.e. the mailboxes must be created directly under /var/mail.&lt;br /&gt;
&lt;br /&gt;
Create the user in Postfix, in the virtual maps, see above.&lt;br /&gt;
&lt;br /&gt;
Check:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# doveadm user &amp;lt;username&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Configuring IMAP ===&lt;br /&gt;
Pro tip: back up conf.d first:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rsync -av /etc/dovecot/conf.d/ /etc/dovecot/conf.d.orig&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Adjust the following configuration files in conf.d accordingly:&lt;br /&gt;
* &#039;&#039;&#039;10-auth.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
disable_plaintext_auth = no&lt;br /&gt;
auth_username_format = %n&lt;br /&gt;
auth_master_user_separator = *&lt;br /&gt;
auth_mechanisms = plain login&lt;br /&gt;
&lt;br /&gt;
!include auth-master.conf.ext&lt;br /&gt;
!include auth-system.conf.ext&lt;br /&gt;
!include auth-passwdfile.conf.ext&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;10-mail.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mail_location = maildir:/var/mail/%d/%n &lt;br /&gt;
namespace inbox { &lt;br /&gt;
 inbox = yes &lt;br /&gt;
} &lt;br /&gt;
mail_uid = 4000 &lt;br /&gt;
mail_gid = 4000&lt;br /&gt;
mail_privileged_group = mail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;10-master.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
service imap-login { &lt;br /&gt;
  inet_listener imaps { &lt;br /&gt;
   port = 993 &lt;br /&gt;
   ssl = yes &lt;br /&gt;
 } &lt;br /&gt;
}&lt;br /&gt;
service auth { &lt;br /&gt;
 unix_listener auth-userdb { &lt;br /&gt;
   user = vmail  &lt;br /&gt;
   group = vmail &lt;br /&gt;
 } &lt;br /&gt;
 unix_listener /var/spool/postfix/private/auth { &lt;br /&gt;
   mode = 0666 &lt;br /&gt;
   user = postfix &lt;br /&gt;
   group = postfix &lt;br /&gt;
 } &lt;br /&gt;
}&lt;br /&gt;
service stats { &lt;br /&gt;
 unix_listener stats-reader { &lt;br /&gt;
   user = vmail &lt;br /&gt;
   group = vmail &lt;br /&gt;
   mode = 0660 &lt;br /&gt;
 } &lt;br /&gt;
&lt;br /&gt;
 unix_listener stats-writer { &lt;br /&gt;
   user = vmail &lt;br /&gt;
   group = vmail &lt;br /&gt;
   mode = 0660 &lt;br /&gt;
 } &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;10-ssl.conf&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# (e.g. reuse the Postfix certs)&lt;br /&gt;
ssl = yes &lt;br /&gt;
ssl_cert = &amp;lt;/etc/ssl/certs/mx.example.de.crt &lt;br /&gt;
ssl_key = &amp;lt;/etc/ssl/private/mx.example.de.key &lt;br /&gt;
ssl_client_ca_dir = /etc/ssl/certs &lt;br /&gt;
ssl_dh = &amp;lt;/usr/share/dovecot/dh.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Generating certificates: see https://wiki.nomorebluescreen.de/index.php?title=Webserver_mit_Apache#Alternative_letsencrypt&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Spoiler:&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
Every time the certificate is replaced, the dovecot service must be restarted so that the new certificate is actually read.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Check which files were touched:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
diff -quw conf.d.orig conf.d&lt;br /&gt;
Files conf.d.orig/10-ssl.conf and conf.d/10-ssl.conf differ&lt;br /&gt;
Files conf.d.orig/15-lda.conf and conf.d/15-lda.conf differ&lt;br /&gt;
Files conf.d.orig/20-imap.conf and conf.d/20-imap.conf differ&lt;br /&gt;
Files conf.d.orig/20-managesieve.conf and conf.d/20-managesieve.conf differ&lt;br /&gt;
Files conf.d.orig/90-sieve.conf and conf.d/90-sieve.conf differ&lt;br /&gt;
Files conf.d.orig/auth-passwdfile.conf.ext and conf.d/auth-passwdfile.conf.ext differ&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Print the entire config&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 # doveconf -n&lt;br /&gt;
&lt;br /&gt;
==== Sieve ====&lt;br /&gt;
Engine for filtering emails&lt;br /&gt;
&lt;br /&gt;
Install dovecot-sieve and dovecot-managesieved&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;15-lda.conf:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lda_mailbox_autocreate = yes&lt;br /&gt;
lda_mailbox_autosubscribe = yes&lt;br /&gt;
protocol lda { &lt;br /&gt;
 mail_plugins = $mail_plugins sieve &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;20-managesieve.conf:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
protocols = $protocols sieve&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;90-sieve.conf:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
plugin { &lt;br /&gt;
 sieve = file:~/sieve;active=~/.dovecot.sieve &lt;br /&gt;
 sieve_default = /var/lib/dovecot/sieve/default.sieve &lt;br /&gt;
 sieve_global_dir = /var/lib/dovecot/sieve &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
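The sieve_default script referenced above could hold a minimal rule set like the following (a hedged example; the [SPAM] subject tag and the Junk folder name are assumptions):&lt;br /&gt;

```
# /var/lib/dovecot/sieve/default.sieve -- minimal example
require ["fileinto"];

if header :contains "subject" "[SPAM]" {
    fileinto "Junk";
}
```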
Check that the sieve service is running and listening on port 4190:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# service dovecot restart&lt;br /&gt;
# ss -plnt | grep 4190&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Since users generally cannot edit their /home with the sieve rules directly on the email server, further configuration happens in the email client (see below).&lt;br /&gt;
&lt;br /&gt;
Caution, bug:&amp;lt;br&amp;gt;&lt;br /&gt;
Since sieve/dovecot does not interpret the variable %n in the users file, do not use it there. Otherwise roundcube may create a literal directory &#039;&#039;%n&#039;&#039; containing one shared sieve config for &#039;&#039;&#039;all&#039;&#039;&#039; users.&lt;br /&gt;
&lt;br /&gt;
=== Transport from Postfix to Dovecot ===&lt;br /&gt;
&lt;br /&gt;
Postfix still has to be told that the emails go to Dovecot.&amp;lt;br&amp;gt;&lt;br /&gt;
Adjust &#039;&#039;&#039;master.cf&#039;&#039;&#039; in Postfix (the entries in {} are meant literally; do not substitute them!):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dovecot unix - n n - - pipe flags=DRhu user=vmail:vmail&lt;br /&gt;
 argv=/usr/lib/dovecot/deliver -a ${recipient} -f ${sender}&lt;br /&gt;
 -d ${user}@${nexthop} -m ${extension}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and add a mailbox_command entry in main.cf:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mailbox_command = /usr/lib/dovecot/dovecot-lda -f &amp;quot;$SENDER&amp;quot; -a &amp;quot;$RECIPIENT&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(https://doc.dovecot.org/configuration_manual/howto/dovecot_lda_postfix/#howto-dovecot-lda-postfix)&lt;br /&gt;
&lt;br /&gt;
Afterwards, restart the postfix and dovecot services.&lt;br /&gt;
&lt;br /&gt;
== Logging ==&lt;br /&gt;
&lt;br /&gt;
Log entries go to /var/log/mail.log as well (like postfix). &amp;lt;br&amp;gt;&lt;br /&gt;
Useful alias:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
alias maillog=&#039;tail -f /var/log/mail.log&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enable debugging:&lt;br /&gt;
 mail_debug = yes&lt;br /&gt;
in the file &lt;br /&gt;
&#039;&#039;10-logging.conf&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Pro tip:&#039;&#039;&#039;&lt;br /&gt;
If the following error appears in the log:&lt;br /&gt;
&lt;br /&gt;
 Mar 27 08:03:56 aphantopus postfix/pipe[2317]: 521066005D: to=&amp;lt;sunflower@example.de&amp;gt;, relay=dovecot, delay=0.3, delays=0.19/0.04/0/0.07, dsn=2.0.0, status=sent (delivered via dovecot service (lda(sunflower@example.de,)Error: net_connect_unix(/var/run/dovecot/stats-writer) failed: Permission denied))&lt;br /&gt;
&lt;br /&gt;
=&amp;gt; Allow &#039;&#039;&#039;stats&#039;&#039;&#039; for user vmail in the &#039;&#039;10-master.conf&#039;&#039; (see above)&lt;br /&gt;
&lt;br /&gt;
= Roundcube =&lt;br /&gt;
&lt;br /&gt;
A handy web frontend for fetching and sending emails&lt;br /&gt;
&lt;br /&gt;
Install the mysql server first, otherwise the installation aborts with an error:&lt;br /&gt;
 # apt install mariadb-server roundcube&lt;br /&gt;
Answer the dbconfig-common question with &amp;quot;yes&amp;quot; and set the mysql password.&lt;br /&gt;
Adjust the config file (&#039;&#039;/etc/roundcube/config.inc.php&#039;&#039;):&lt;br /&gt;
 $config[&#039;smtp_server&#039;] = &#039;localhost&#039;;&lt;br /&gt;
 $config[&#039;smtp_port&#039;] = 25;&lt;br /&gt;
&lt;br /&gt;
== Plugins ==&lt;br /&gt;
&lt;br /&gt;
Install the standard plugins&lt;br /&gt;
 # apt install roundcube-plugins&lt;br /&gt;
&lt;br /&gt;
Install further plugins:&lt;br /&gt;
 # apt install roundcube-plugins-extra git curl composer&lt;br /&gt;
(composer is needed for installing plugins, git because most of them come from github)&amp;lt;br&amp;gt;&lt;br /&gt;
Enable the ones you want in the file &#039;&#039;/etc/roundcube/config.inc.php&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
 $config[&#039;plugins&#039;] = array( &lt;br /&gt;
        &#039;compose-addressbook&#039;, &lt;br /&gt;
        &#039;markasjunk2&#039;, &lt;br /&gt;
        &#039;fail2ban&#039; &lt;br /&gt;
 );&lt;br /&gt;
&lt;br /&gt;
Overview of the official plugins:&lt;br /&gt;
&lt;br /&gt;
https://plugins.roundcube.net/&lt;br /&gt;
&lt;br /&gt;
Plugins that are not available as a package:&amp;lt;br&amp;gt;&lt;br /&gt;
Steps:&amp;lt;br&amp;gt;&lt;br /&gt;
* Read the README&lt;br /&gt;
* Download the plugin as a zip and unpack it to &#039;&#039;/usr/share/roundcube/plugins&#039;&#039;&lt;br /&gt;
* (rename it if necessary)&lt;br /&gt;
* Edit &#039;&#039;/etc/roundcube/config.inc.php&#039;&#039;:&lt;br /&gt;
find the section&lt;br /&gt;
 $config[&#039;plugins&#039;] = array( &lt;br /&gt;
and add the missing plugin&lt;br /&gt;
&lt;br /&gt;
== Filter Plugin for Sieve ==&lt;br /&gt;
&lt;br /&gt;
Caution: do not use the plugin &amp;quot;filter&amp;quot; but &#039;&#039;&#039;managesieve&#039;&#039;&#039; (part of the roundcube-plugins package)&lt;br /&gt;
&lt;br /&gt;
A guide is available here:&amp;lt;br&amp;gt;&lt;br /&gt;
https://www.pair.com/support/kb/how-to-add-sieve-filtering-code-in-roundcube/ &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
https://www.pair.com/support/kb/how-to-add-sieve-filtering-in-roundcube/&lt;br /&gt;
&lt;br /&gt;
Note: do not edit the protocols parameter in dovecot.conf, but in &lt;br /&gt;
&#039;&#039;20-managesieve.conf&#039;&#039; (see above):&lt;br /&gt;
&lt;br /&gt;
 protocols = $protocols sieve&lt;br /&gt;
&lt;br /&gt;
Now sieve filter rules can be generated via the web frontend.&lt;br /&gt;
&lt;br /&gt;
Caution, bug:&amp;lt;br&amp;gt;&lt;br /&gt;
Sieve creates a sieve directory below the directory defined in mail_location. If you want to store the users&#039; emails under &#039;&#039;/var/mail/&amp;lt;domain&amp;gt;/&amp;lt;username&amp;gt;&#039;&#039;, you would configure:&lt;br /&gt;
&lt;br /&gt;
 mail_location = maildir:/var/mail/%d/%n&lt;br /&gt;
&lt;br /&gt;
But since dovecot does not interpret %d (see above), the user maildir ends up under /var/mail/&amp;lt;username&amp;gt;. Sieve in turn does not interpret %n and creates a directory /var/mail/&amp;lt;domain&amp;gt;/%n/sieve containing the roundcube.sieve file. As a result, all users access the same file, which is technically possible but fatal security-wise. Unfortunately, no good workaround is known.&lt;br /&gt;
&lt;br /&gt;
== Changing the Password ==&lt;br /&gt;
To let users change their password themselves, enable the plugin in &#039;&#039;config.inc.php&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$config[&#039;plugins&#039;] = array(&lt;br /&gt;
 (...)&lt;br /&gt;
 &#039;password&#039;&lt;br /&gt;
);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Further settings if the users are maintained in a password file as described in the &#039;&#039;&#039;Dovecot&#039;&#039;&#039; chapter:&amp;lt;br&amp;gt;&lt;br /&gt;
(we assume the user passwords are hashed with sha512, see above)&lt;br /&gt;
&lt;br /&gt;
 # https://stackoverflow.com/questions/62655236/how-to-enable-password-plugin-on-roundcube&lt;br /&gt;
 $config[&#039;password_algorithm&#039;] = &#039;ssha512&#039;;&lt;br /&gt;
 $config[&#039;password_algorithm_prefix&#039;] = &#039;{SSHA512}&#039;;&lt;br /&gt;
 $config[&#039;password_driver&#039;] = &#039;dovecot_passwdfile&#039;;&lt;br /&gt;
 $config[&#039;password_dovecot_passwdfile_path&#039;] = &#039;/etc/dovecot/users&#039;;&lt;br /&gt;
&lt;br /&gt;
The dovecot users file must then be readable and writable for www-data accordingly:&lt;br /&gt;
 -rw-rw---- 1 dovecot www-data 1240 Dec  2 23:20 /etc/dovecot/users&lt;br /&gt;
&lt;br /&gt;
(Caution: risky if the web server is ever compromised! As an alternative, consider moving the dovecot passwords into a [mysql] DB.)&lt;br /&gt;
&lt;br /&gt;
== Changing Identities ==&lt;br /&gt;
&lt;br /&gt;
Normally a user can only send with their own sender address. That is a sensible default, but to disable this feature for testing purposes, set:&lt;br /&gt;
 $config[&#039;identities_level&#039;] = 0;&lt;br /&gt;
Now the user can add further sender addresses via &amp;quot;Settings&amp;quot; (https://www.servercake.blog/multiple-identities-roundcube/)&lt;br /&gt;
&lt;br /&gt;
(Unfortunately, no way has been found yet to restrict this to (a) specific user(s).)&lt;br /&gt;
&lt;br /&gt;
== Apache Integration ==&lt;br /&gt;
&lt;br /&gt;
Here is an example configuration for a virtual host to make the Roundcube page reachable at https://mail.example.de.&lt;br /&gt;
More in the chapter [[Webserver mit Apache|apache]]&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;VirtualHost *:443&amp;gt; &lt;br /&gt;
    ServerName mail.example.de &lt;br /&gt;
    ServerAdmin postmaster@example.de &lt;br /&gt;
&lt;br /&gt;
    SSLEngine on &lt;br /&gt;
    SSLCertificateFile         /var/lib/dehydrated/certs/mail.example.de/fullchain.pem &lt;br /&gt;
    SSLCertificateKeyFile /var/lib/dehydrated/certs/mail.example.de/privkey.pem &lt;br /&gt;
&lt;br /&gt;
    DocumentRoot /usr/share/roundcube &lt;br /&gt;
&lt;br /&gt;
    # Includes &lt;br /&gt;
    Include /etc/apache2/conf-available/ssl-encryption.conf &lt;br /&gt;
      &lt;br /&gt;
    LogFormat &amp;quot;%h %l %u %t \&amp;quot;%r\&amp;quot; %&amp;gt;s %b \&amp;quot;%{Referer}i\&amp;quot; \&amp;quot;%{User-agent}i\&amp;quot;&amp;quot; vhost_combined &lt;br /&gt;
    CustomLog /var/log/apache2/mail-ssl.log combined &lt;br /&gt;
    ErrorLog /var/log/apache2/mail-ssl-error.log &lt;br /&gt;
&amp;lt;/VirtualHost&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The roundcube package also ships a roundcube.conf, which is installed to /etc/apache2/conf-available/roundcube.conf and enabled.&lt;br /&gt;
&lt;br /&gt;
=== PHP ===&lt;br /&gt;
For Roundcube to run at all, the PHP module must be installed and enabled. On Debian this is done with:&lt;br /&gt;
 # apt install libapache2-mod-php&lt;br /&gt;
&lt;br /&gt;
Using &#039;&#039;&#039;php-fpm&#039;&#039;&#039; performs somewhat better (https://www.zend.com/blog/apache-phpfpm-modphp).&lt;br /&gt;
 # apt install php-fpm&lt;br /&gt;
&lt;br /&gt;
The PHP default upload limit is a meager 2 MB. To raise it to e.g. 50 MB, the following file must be edited:&lt;br /&gt;
&amp;lt;pre&amp;gt;VERSION=$(php -v| head -n 1 | awk &#039;{ print $2 }&#039; | sed -e &#039;s|.[[:digit:]]*$||&#039;)&amp;lt;/pre&amp;gt;&lt;br /&gt;
* modphp:&lt;br /&gt;
 /etc/php/${VERSION}/apache2/php.ini&lt;br /&gt;
  upload_max_filesize = 50M&lt;br /&gt;
* php-fpm:&lt;br /&gt;
 /etc/php/${VERSION}/fpm/php.ini&lt;br /&gt;
  upload_max_filesize = 50M&lt;br /&gt;
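The edit itself can be scripted; a small sketch (GNU sed as shipped on Debian; pass the php.ini path resolved via $VERSION above):&lt;br /&gt;

```shell
# Raise upload_max_filesize in a php.ini-style file in place.
set_upload_limit() {
  file="$1"; limit="$2"
  sed -i "s/^upload_max_filesize = .*/upload_max_filesize = ${limit}/" "$file"
}
# usage: set_upload_limit /etc/php/${VERSION}/fpm/php.ini 50M
# (then restart php-fpm or apache)
```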
&lt;br /&gt;
= Integration into a MUA =&lt;br /&gt;
If you do not want to go through the (slow) webmailer, you can of course use a MUA of your choice. Here is an example.&lt;br /&gt;
&lt;br /&gt;
== Thunderbird ==&lt;br /&gt;
&lt;br /&gt;
Settings for the outgoing server (SMTP)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Server Name: FQDN of the email server&lt;br /&gt;
Port: 25&lt;br /&gt;
Connection Security: STARTTLS&lt;br /&gt;
Authentication Method: Normal Password&lt;br /&gt;
Username: name of the mailbox user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first time, you will be asked for your mailbox password. Enter and save it.&lt;br /&gt;
&lt;br /&gt;
Server Settings (IMAP)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Server Type: IMAP Mail Server&lt;br /&gt;
Server Name: FQDN of the mail server&lt;br /&gt;
Port: 993&lt;br /&gt;
Username: name of the mailbox user&lt;br /&gt;
Connection Security: SSL/TLS&lt;br /&gt;
Authentication Method: Normal Password&lt;br /&gt;
(Leave the remaining defaults as they are, or adjust them as needed)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot thunderbird1.png|900px]]&lt;br /&gt;
[[File:Screenshot thunderbird2.png|900px]]&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=667</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=667"/>
		<updated>2026-02-16T17:12:39Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Deployments */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
updated for version: 4.19&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
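&lt;br /&gt;
You can also read the token of your current session once logged in:&lt;br /&gt;
 $ TOKEN=$(oc whoami -t)&lt;br /&gt;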
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 # oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Resources =&lt;br /&gt;
(in general)&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                   (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and PVCs are not included and must be queried separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
= Nodes =&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (and a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs. capacity:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic picture use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Draining nodes ==&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not strictly necessary; draining, see below, cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Machines =&lt;br /&gt;
&lt;br /&gt;
Show machines and their age:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Machinesets ==&lt;br /&gt;
&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
== Delete and re-create machines/nodes ==&lt;br /&gt;
 oc get machines -A | grep worker-&amp;lt;XY&amp;gt; | wc -l&lt;br /&gt;
-&amp;gt; yields MACHINECOUNT&lt;br /&gt;
 oc annotate machine/&amp;lt;machine-name&amp;gt; -n openshift-machine-api machine.openshift.io/delete-machine=&amp;quot;true&amp;quot;&lt;br /&gt;
 oc scale --replicas=&amp;lt;$MACHINECOUNT+1&amp;gt; machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
 oc scale --replicas=$MACHINECOUNT machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
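&lt;br /&gt;
The steps above as a small script (a sketch; the machineset and worker names are placeholders, and you should wait for the new machine to become Running before scaling back down):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
MACHINESET=&amp;lt;machineset&amp;gt;&lt;br /&gt;
MACHINECOUNT=$(oc get machines -A | grep worker-&amp;lt;XY&amp;gt; | wc -l)&lt;br /&gt;
oc annotate machine/&amp;lt;machine-name&amp;gt; -n openshift-machine-api machine.openshift.io/delete-machine=&amp;quot;true&amp;quot;&lt;br /&gt;
oc scale --replicas=$((MACHINECOUNT+1)) machineset $MACHINESET -n openshift-machine-api&lt;br /&gt;
# wait until the new machine is Running&lt;br /&gt;
oc scale --replicas=$MACHINECOUNT machineset $MACHINESET -n openshift-machine-api&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;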
&lt;br /&gt;
= Projects/Namespaces =&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com &lt;br /&gt;
* registry.redhat.io  (with login only)&lt;br /&gt;
* quay.io&lt;br /&gt;
* docker.io&lt;br /&gt;
&lt;br /&gt;
= Images =&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look into images:&lt;br /&gt;
 oc image info registry.redhat.io:8443/ubi8/httpd-24:1-209 (-o json | jq -r .digest)&lt;br /&gt;
&lt;br /&gt;
Update the image of a running deployment:&lt;br /&gt;
 oc set image deployment/mydb mariadb-80=docker.io/ubuntu18/mysql-80:1-228&lt;br /&gt;
&lt;br /&gt;
Inspect images directly on a node:&lt;br /&gt;
 crictl images&lt;br /&gt;
 crictl ps --name httpd-24 -o yaml&lt;br /&gt;
 crictl images --digests &amp;lt;shasum&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you have an account for a registry:&lt;br /&gt;
&lt;br /&gt;
  skopeo login &amp;lt;registry&amp;gt;:8443 -u &amp;lt;username&amp;gt;&lt;br /&gt;
  skopeo inspect docker://registry.redhat.io:8443/ubi8/httpd-24:1-209&lt;br /&gt;
  skopeo inspect --config docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Tag a specific image as &amp;quot;latest&amp;quot;:&lt;br /&gt;
 skopeo copy docker://registry.redhat.io:8443/ubi8/httpd-24:1-215  docker://registry.redhat.io:8443/ubi8/httpd-24:latest&lt;br /&gt;
&lt;br /&gt;
== Create pod from image ==&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
= Apps =&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
= Deployments =&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
== Environment variables ==&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1  MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
 oc set env deployment/mariadb --from=secret/my-secret (--prefix=MYSQL_)&lt;br /&gt;
&lt;br /&gt;
== Restart deployment after change ==&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;older versions had no rollout option for deployments -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G )&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make deployment available from inside/outside ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, use an ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
== Add probes ==&lt;br /&gt;
Configure readiness probe for deployment:&lt;br /&gt;
 $ oc set probe deployment/&amp;lt;testdeploy&amp;gt; --readiness --failure-threshold 7 --get-url http://:3000/api/health&lt;br /&gt;
&lt;br /&gt;
== Autoscale Pods ==&lt;br /&gt;
 $ oc autoscale deployment/test --min 2 --max 10 --cpu-percent 80&lt;br /&gt;
&lt;br /&gt;
== Reduce/Upgrade cpu/mem requests ==&lt;br /&gt;
Reduce memory requests:&lt;br /&gt;
 $ oc set resources deployment/huge-mem --requests memory=250Mi&lt;br /&gt;
&lt;br /&gt;
== Security ==&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift, as security context constraints do not allow apps to run on privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image where port &amp;gt;= 1024 is used&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(to remove this setting again, edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Pods =&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers:&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
= Jobs and Cronjobs =&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &amp;quot;if [ \$(date +%H) -gt 15 ]; then echo &#039;Hands up, weekend!&#039;; fi&amp;quot;&lt;br /&gt;
(note the escaped \$ so that the hour is evaluated in the container, not by your local shell when creating the job)&lt;br /&gt;
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Secrets =&lt;br /&gt;
== Create Secret ==&lt;br /&gt;
=== from String ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;test&amp;gt; --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
=== from file ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;sshkeys&amp;gt; --from-file id_rsa=/path-to/id_rsa --from-file id_rsa.pub=/path-to/id_rsa.pub&lt;br /&gt;
&lt;br /&gt;
=== as TLS secret ===&lt;br /&gt;
 $ oc create secret tls &amp;lt;secret-tls&amp;gt; --cert /tmp/mydomain.crt --key /tmp/mydomain.key&lt;br /&gt;
&lt;br /&gt;
=== Update Secret ===&lt;br /&gt;
 $ oc set data secret/&amp;lt;mysecret&amp;gt; --from-file /tmp/root-password&lt;br /&gt;
&lt;br /&gt;
=== Extract secret ===&lt;br /&gt;
 $ oc extract secret/&amp;lt;mysecret&amp;gt; --to /tmp/mysecret (--confirm)&lt;br /&gt;
&lt;br /&gt;
= Configmaps =&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets (REVIEW!)&lt;br /&gt;
 $ oc get egressips&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Changes with &#039;&#039;&#039;patch&#039;&#039;&#039; command =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create a YAML file with this content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
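&lt;br /&gt;
To sanity-check the patched secret afterwards, decode the data back (keys containing dots must be escaped in jsonpath):&lt;br /&gt;
 $ oc get secret alertmanager-main -n openshift-monitoring -o jsonpath=&#039;{.data.config\.yaml}&#039; | base64 -d | head&lt;br /&gt;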
&lt;br /&gt;
== Examples ==&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same for all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://docs.redhat.com/en/documentation/openshift_container_platform/4.21/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(delete contained secrets first if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
A namespace gets stuck in status &amp;quot;Terminating&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Watch out for secrets that are left over and not deletable.&lt;br /&gt;
Clear the finalizers:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
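&lt;br /&gt;
To list all secrets in the stuck namespace that still carry finalizers, a sketch using jq (the namespace name is a placeholder):&lt;br /&gt;
 $ oc get secrets -n &amp;lt;mynamespace&amp;gt; -o json | jq -r &#039;.items[] | select(.metadata.finalizers != null) | .metadata.name&#039;&lt;br /&gt;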
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the node.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines under the &amp;quot;spec&amp;quot; section:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;mycontainer&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must also grant the required privileges to the serviceaccount under which the deployment runs. If nothing else is configured, this is normally the &amp;quot;default&amp;quot; serviceaccount:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the install command; the user only has to provide some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User provisioned infrastructure cluster&lt;br /&gt;
* DNS and Loadbalancing must already be there&lt;br /&gt;
* Manual installation; download the OVA file (in the case of vSphere)&lt;br /&gt;
* masters are created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, uses preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no loadbalancer outage during an update&lt;br /&gt;
&lt;br /&gt;
A change from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can get more shortcuts by typing:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip || installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=666</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=666"/>
		<updated>2026-02-14T09:20:06Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
updated for version: 4.19&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Resources =&lt;br /&gt;
(in common)&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                   (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: control resulting files for secrets, passwords, privatekeys etc. before sending somewhere)&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs will be shown outside resources)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
= Nodes =&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get Logs of a node (and special unit)&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs limits:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful !&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic presentation do:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Draining nodes ==&lt;br /&gt;
Empty node and put it into maintenance mode (e.g. before booting)&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it s. below - will be emptied anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Machines =&lt;br /&gt;
&lt;br /&gt;
Show Uptime:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Machinesets ==&lt;br /&gt;
&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
== Delete and re-create machines/nodes ==&lt;br /&gt;
 oc get machines -A | grep worker-&amp;lt;XY&amp;gt; | wc -l&lt;br /&gt;
-&amp;gt; MACHINECOUNT&lt;br /&gt;
 oc annotate machine/&amp;lt;machine-name&amp;gt; -n openshift-machine-api machine.openshift.io/delete-machine=&amp;quot;true&amp;quot;&lt;br /&gt;
 oc scale --replicas=&amp;lt;$MACHINECOUNT+1&amp;gt; machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
 oc scale --replicas=$MACHINECOUNT machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
= Projects/Namespaces =&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com &lt;br /&gt;
* registry.redhat.io  (with login only)&lt;br /&gt;
* quay.io&lt;br /&gt;
* docker.io&lt;br /&gt;
&lt;br /&gt;
= Images =&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look into images:&lt;br /&gt;
 oc image info registry.redhat.io:8443/ubi8/httpd-24:1-209 (-o json | jq -r .digest)&lt;br /&gt;
&lt;br /&gt;
Update the image of a running deployment:&lt;br /&gt;
 oc set image deployment/mydb mariadb-80=docker.io/ubuntu18/mysql-80:1-228&lt;br /&gt;
&lt;br /&gt;
Inspect images directly on a node:&lt;br /&gt;
 crictl images&lt;br /&gt;
 crictl ps --name httpd-24 -o yaml&lt;br /&gt;
 crictl images --digests &amp;lt;shasum&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you have an account for a registry:&lt;br /&gt;
&lt;br /&gt;
  skopeo login &amp;lt;registry&amp;gt;:8443 -u &amp;lt;username&amp;gt;&lt;br /&gt;
  skopeo inspect docker://registry.redhat.io:8443/ubi8/httpd-24:1-209&lt;br /&gt;
  skopeo inspect --config docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Add the &amp;quot;latest&amp;quot; tag to a dedicated image: &lt;br /&gt;
 skopeo copy docker://registry.redhat.io:8443/ubi8/httpd-24:1-215  docker://registry.redhat.io:8443/ubi8/httpd-24:latest&lt;br /&gt;
&lt;br /&gt;
== Create pod from image ==&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
= Apps =&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.com/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
= Deployments =&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
== Environment variables ==&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1  MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
 oc set env deployment/mariadb --from=secret/my-secret (--prefix=MYSQL_)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;older oc versions had no rollout option for deployments -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
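&lt;br /&gt;
Quoting such inline patches is error-prone; the backslash escaping can be avoided by building the JSON with printf first (a local sketch, no cluster needed):&lt;br /&gt;

```shell
# Build the restart-annotation patch as a plain string; printf
# substitutes the timestamp, no nested quote escaping required.
ts=$(date +%s)
patch=$(printf '{"spec":{"template":{"metadata":{"annotations":{"last-restart":"%s"}}}}}' "$ts")
echo "$patch"
# would then be applied with: oc patch deployment testdeploy --patch "$patch"
```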
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G )&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make deployment available from inside/outside ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
== Add probes ==&lt;br /&gt;
Configure readiness probe for deployment:&lt;br /&gt;
 $ oc set probe deployment/&amp;lt;testdeploy&amp;gt; --readiness --failure-threshold 7 --get-url http://:3000/api/health&lt;br /&gt;
&lt;br /&gt;
== Autoscale Pods ==&lt;br /&gt;
 $ oc autoscale deployment/test --min 2 --max 10 --cpu-percent 80&lt;br /&gt;
&lt;br /&gt;
== Reduce/Upgrade cpu/mem requests ==&lt;br /&gt;
Reduce memory requests:&lt;br /&gt;
 $ oc set resources deployment/huge-mem --requests memory=250Mi&lt;br /&gt;
&lt;br /&gt;
== Security ==&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift because the security context constraints do not allow apps to bind to privileged ports (&amp;lt; 1024).&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or grant additional permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(to revert this setting, edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Pods =&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
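&lt;br /&gt;
Since &amp;quot;oc get pods&amp;quot; returns a List object, the path has to start at the items array. The same selection can be tried locally with jq on a stub (the JSON below is invented):&lt;br /&gt;

```shell
# Stub shaped like 'oc get pods -o json' output, piped through jq with
# the equivalent of the jsonpath expression.
printf '%s' '{"items":[{"spec":{"containers":[{"readinessProbe":{"failureThreshold":7}}]}}]}' \
  | jq '.items[0].spec.containers[0].readinessProbe'
```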
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
= Jobs and Cronjobs =&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &#039;if [ $(date +%H) -gt 15 ]; then echo &amp;quot;Hands up, weekend!&amp;quot;; fi&#039;&lt;br /&gt;
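&lt;br /&gt;
Careful with the quoting: inside double quotes $(date +%H) would already be expanded by the local shell when the cronjob is created; single quotes defer it to the container. The hour check itself, runnable locally with a pinned value:&lt;br /&gt;

```shell
# Same test the cronjob runs, with the hour fixed so the result is
# deterministic.
hour=16    # stand-in for $(date +%H)
if [ "$hour" -gt 15 ]; then echo 'Hands up, weekend!'; fi
```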
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Secrets =&lt;br /&gt;
== Create Secret ==&lt;br /&gt;
=== from String ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;test&amp;gt; --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
=== from file ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;sshkeys&amp;gt; --from-file id_rsa=/path-to/id_rsa --from-file id_rsa.pub=/path-to/id_rsa.pub&lt;br /&gt;
&lt;br /&gt;
=== as TLS secret ===&lt;br /&gt;
 $ oc create secret tls &amp;lt;secret-tls&amp;gt; --cert /tmp/mydomain.crt --key /tmp/mydomain.key&lt;br /&gt;
&lt;br /&gt;
=== Update Secret ===&lt;br /&gt;
 $ oc set data secret/&amp;lt;mysecret&amp;gt; --from-file /tmp/root-password&lt;br /&gt;
&lt;br /&gt;
=== Extract secret ===&lt;br /&gt;
 $ oc extract secret/&amp;lt;mysecret&amp;gt; --to /tmp/mysecret (--confirm)&lt;br /&gt;
&lt;br /&gt;
= Configmaps =&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets (OpenShift SDN)&lt;br /&gt;
 $ oc get egressips (OVN-Kubernetes)&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
Check user rights on resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Changes with &#039;&#039;&#039;patch&#039;&#039;&#039; command =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch a resource using a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create a yaml file with this content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
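&lt;br /&gt;
base64 -w0 (GNU coreutils; BSD base64 has no -w option) must produce a single line, otherwise the JSON payload breaks. A local round-trip check with a throwaway file:&lt;br /&gt;

```shell
# Encode a sample config, embed it in the patch payload, decode it back
# to verify nothing was mangled.
printf 'resolve_timeout: 5m\n' > /tmp/am-demo.yaml
b64=$(base64 -w0 /tmp/am-demo.yaml)
patch=$(printf '{"data":{"config.yaml":"%s"}}' "$b64")
echo "$b64" | base64 -d    # prints: resolve_timeout: 5m
```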
&lt;br /&gt;
== Examples ==&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
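&lt;br /&gt;
The {master,worker} part relies on shell brace expansion, which turns the single token into two resource arguments before oc sees them (bash; plain sh does not expand braces):&lt;br /&gt;

```shell
# Brace expansion produces two arguments from one token:
bash -c 'echo machineconfigpool/{master,worker}'
# prints: machineconfigpool/master machineconfigpool/worker
```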
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. when it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same for all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | egrep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
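&lt;br /&gt;
The select filters can be tried on a hand-made audit line first (the record below is invented, field names follow the audit log format):&lt;br /&gt;

```shell
# One fake audit event; the filter passes it through because the verb is
# delete and the resource is not one of the excluded kinds.
printf '%s\n' '{"verb":"delete","objectRef":{"resource":"pods"}}' \
  | jq 'select(.verb == "delete" and .objectRef.resource != "routes")'
```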
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://docs.redhat.com/en/documentation/openshift_container_platform/4.21/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(check the output for secrets and delete them if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
A namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for secrets that are left over and cannot be deleted.&lt;br /&gt;
Clear their finalizers:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the host.&lt;br /&gt;
&lt;br /&gt;
In the deployment add the following lines under the &amp;quot;spec&amp;quot; statement (the securityContext belongs to a container entry):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;container-name&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must grant the &#039;&#039;&#039;anyuid&#039;&#039;&#039; SCC to the serviceaccount under which the deployment runs. If nothing else is configured this is the &#039;&#039;default&#039;&#039; service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the install command; the user only has to provide some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User-provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already be in place&lt;br /&gt;
* manual installation; download the OVA file (in case of vSphere)&lt;br /&gt;
* masters are created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
A change from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can get more shortcuts by typing:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip ||installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=665</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=665"/>
		<updated>2026-02-06T21:17:54Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Set environment variables afterwards */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automatization.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com&lt;br /&gt;
* registry.redhat.io (with login only)&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Image handling =&lt;br /&gt;
&lt;br /&gt;
Look into images:&lt;br /&gt;
 oc image info registry.redhat.io:8443/ubi8/httpd-24:1-209 (-o json | jq -r .digest)&lt;br /&gt;
&lt;br /&gt;
Update images on running deployment&lt;br /&gt;
 oc set image deployment/mydb mariadb-80=docker.io/ubuntu18/mysql-80:1-228&lt;br /&gt;
&lt;br /&gt;
Watching images directly on a node&lt;br /&gt;
 crictl images&lt;br /&gt;
 crictl ps --name httpd-24 -o yaml&lt;br /&gt;
 crictl images --digests &amp;lt;shasum&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you have an account for a registry:&lt;br /&gt;
&lt;br /&gt;
 skopeo login &amp;lt;registry&amp;gt;:8443 -u &amp;lt;username&amp;gt;&lt;br /&gt;
 skopeo inspect docker://registry.redhat.io:8443/ubi8/httpd-24:1-209&lt;br /&gt;
Add the &amp;quot;latest&amp;quot; tag to a dedicated image: &lt;br /&gt;
 skopeo copy docker://registry.redhat.io:8443/ubi8/httpd-24:1-215 docker://registry.redhat.io:8443/ubi8/httpd-24:latest&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.com/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
 oc set env deployment/mariadb --from=secret/my-secret (--prefix=MYSQL_)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift because the security context constraints do not allow apps to bind to privileged ports (&amp;lt; 1024).&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or grant additional permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(to revert this setting, edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &#039;if [ $(date +%H) -gt 15 ]; then echo &amp;quot;Hands up, weekend!&amp;quot;; fi&#039;&lt;br /&gt;
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret ==&lt;br /&gt;
=== from String ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;test&amp;gt; --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
=== from file ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;sshkeys&amp;gt; --from-file id_rsa=/path-to/id_rsa --from-file id_rsa.pub=/path-to/id_rsa.pub&lt;br /&gt;
&lt;br /&gt;
=== as TLS secret ===&lt;br /&gt;
 $ oc create secret tls &amp;lt;secret-tls&amp;gt; --cert /tmp/mydomain.crt --key /tmp/mydomain.key&lt;br /&gt;
&lt;br /&gt;
=== Update Secret ===&lt;br /&gt;
 $ oc set data secret/&amp;lt;mysecret&amp;gt; --from-file /tmp/root-password&lt;br /&gt;
&lt;br /&gt;
=== Extract secret ===&lt;br /&gt;
 $ oc extract secret/&amp;lt;mysecret&amp;gt; --to /tmp/mysecret (--confirm)&lt;br /&gt;
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G )&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not included in &amp;quot;oc get all&amp;quot; and must be listed separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (and a specific systemd unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs limits:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic view use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show machines and their age:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Reduce/Upgrade cpu/mem requests ===&lt;br /&gt;
Reduce memory requests:&lt;br /&gt;
 $ oc set resources deployment/huge-mem --requests memory=250Mi&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch a resource using a patch file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten !&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
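Note the quoting: the single-quoted patch string is closed just before &#039;&#039;$(base64 ...)&#039;&#039; so the local shell can expand it, and reopened right after. A sketch of how the patch JSON gets assembled, against a throwaway file (no cluster needed; /tmp/am-demo.yaml is a stand-in for the real config):&lt;br /&gt;

```shell
# Build the same JSON patch string that oc receives, locally.
printf 'global:\n  resolve_timeout: 5m\n' > /tmp/am-demo.yaml
encoded=$(base64 -w0 /tmp/am-demo.yaml)
patch='{"data": {"config.yaml": "'$encoded'"}}'
echo "$patch"
# Round trip: decoding the value must give back the original file content.
echo "$encoded" | base64 -d
```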
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;in older versions the deployment resource had no rollout option -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Add probes ===&lt;br /&gt;
Configure readiness probe for deployment:&lt;br /&gt;
 $ oc set probe deployment/&amp;lt;testdeploy&amp;gt; --readiness --failure-threshold 7 --get-url http://:3000/api/health&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Autoscale Pods&lt;br /&gt;
 $ oc autoscale deployment/test --min 2 --max 10 --cpu-percent 80&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not strictly necessary when you drain it - draining cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. when it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of systemunit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(delete secrets from the gathered data if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
The namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for secrets that are left over and not deletable.&lt;br /&gt;
Set the finalizers to an empty list:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the node.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines under the pod template&#039;s &amp;quot;spec&amp;quot; section:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;mycontainer&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must also grant the anyuid SCC to the service account under which the deployment runs. If nothing else is configured, this is the default service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the installer command; the user only provides some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User provisioned infrastructure cluster&lt;br /&gt;
* DNS and Loadbalancing must already be there&lt;br /&gt;
* Manual installation; download the OVA file (in case of vSphere)&lt;br /&gt;
* masters created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, uses preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no loadbalancer outage during update&lt;br /&gt;
&lt;br /&gt;
Changing from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can get more shortcuts by typing:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip || installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=664</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=664"/>
		<updated>2026-02-06T20:31:12Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Set environment variables afterwards */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Image handling =&lt;br /&gt;
&lt;br /&gt;
Look into images:&lt;br /&gt;
 oc image info registry.redhat.io:8443/ubi8/httpd-24:1-209 (-o json | jq -r .digest)&lt;br /&gt;
&lt;br /&gt;
Update images on running deployment&lt;br /&gt;
 oc set image deployment/mydb mariadb-80=docker.io/ubuntu18/mysql-80:1-228&lt;br /&gt;
&lt;br /&gt;
Watching images directly on a node&lt;br /&gt;
 crictl images&lt;br /&gt;
 crictl ps --name httpd-24 -o yaml&lt;br /&gt;
 crictl images --digests &amp;lt;shasum&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you have an account for a registry:&lt;br /&gt;
&lt;br /&gt;
 skopeo login &amp;lt;registry&amp;gt;:8443 -u &amp;lt;username&amp;gt;&lt;br /&gt;
 skopeo inspect docker://registry.redhat.io:8443/ubi8/httpd-24:1-209&lt;br /&gt;
Add the &amp;quot;latest&amp;quot; tag to a dedicated image: &lt;br /&gt;
 skopeo copy docker://registry.redhat.io:8443/ubi8/httpd-24:1-215 docker://registry.redhat.io:8443/ubi8/httpd-24:latest&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.com/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
 oc set env deployment/mariadb --from=secret/my-secret&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift because the security context constraints do not allow apps to bind to privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image where port &amp;gt;= 1024 is used&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(to get rid of this setting again, edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &#039;if [ $(date +%H) -gt 15 ]; then echo &amp;quot;Hands up, weekend!&amp;quot;; fi&#039;&lt;br /&gt;
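Quoting the &#039;&#039;/bin/bash -c&#039;&#039; payload matters here: inside double quotes the local shell expands &#039;&#039;$(date +%H)&#039;&#039; once, at creation time; inside single quotes the literal text reaches the container and is evaluated on every run. A quick local demonstration (the payload strings are just examples, no cluster needed):&lt;br /&gt;

```shell
# Double quotes: $(date +%H) is expanded by the local shell right now,
# so the stored command contains a fixed hour.
double="if [ $(date +%H) -gt 15 ]; then echo weekend; fi"
# Single quotes: the command substitution survives as literal text
# and is evaluated inside the container each run.
single='if [ $(date +%H) -gt 15 ]; then echo weekend; fi'
echo "$double"
echo "$single"
```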
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret ==&lt;br /&gt;
=== from String ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;test&amp;gt; --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
=== from file ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;sshkeys&amp;gt; --from-file id_rsa=/path-to/id_rsa --from-file id_rsa.pub=/path-to/id_rsa.pub&lt;br /&gt;
&lt;br /&gt;
=== as TLS secret ===&lt;br /&gt;
 $ oc create secret tls &amp;lt;secret-tls&amp;gt; --cert /tmp/mydomain.crt --key /tmp/mydomain.key&lt;br /&gt;
&lt;br /&gt;
=== Update Secret ===&lt;br /&gt;
 $ oc set data secret/&amp;lt;mysecret&amp;gt; --from-file /tmp/root-password&lt;br /&gt;
&lt;br /&gt;
=== Extract secret ===&lt;br /&gt;
 $ oc extract secret/&amp;lt;mysecret&amp;gt; --to /tmp/mysecret (--confirm)&lt;br /&gt;
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G )&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: control resulting files for secrets, passwords, privatekeys etc. before sending somewhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not shown by this command)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (optionally of a specific systemd unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs capacity:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic picture run:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show Uptime:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Reduce/Upgrade cpu/mem requests ===&lt;br /&gt;
Reduce memory requests:&lt;br /&gt;
 $ oc set resources deployment/huge-mem --requests memory=250Mi&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch a resource using a patch file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten !&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;in older versions the deployment resource had no rollout option -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Add probes ===&lt;br /&gt;
Configure readiness probe for deployment:&lt;br /&gt;
 $ oc set probe deployment/&amp;lt;testdeploy&amp;gt; --readiness --failure-threshold 7 --get-url http://:3000/api/health&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Autoscale Pods&lt;br /&gt;
 $ oc autoscale deployment/test --min 2 --max 10 --cpu-percent 80&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not strictly necessary when you drain it - draining cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
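The jq filters above can be tried without cluster access; the audit events below are invented for illustration, real ones come from oc adm node-logs:

```shell
# Hypothetical audit events (one JSON object per line), invented for
# illustration; real events come from `oc adm node-logs --path=...`.
events='{"verb":"delete","objectRef":{"resource":"pods","name":"web-1"}}
{"verb":"get","objectRef":{"resource":"pods","name":"web-1"}}
{"verb":"delete","objectRef":{"resource":"routes","name":"r1"}}'

# Keep only delete events that did not target routes:
printf '%s\n' "$events" | jq -c 'select(.verb == "delete" and .objectRef.resource != "routes")'
```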
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(if necessary, delete secrets from the gathered data before sharing!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
The namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for leftover secrets that cannot be deleted.&lt;br /&gt;
Set their finalizers to an empty list:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
Should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the host.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines to the container entry under &amp;quot;spec&amp;quot; (the container name is just a placeholder):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: mycontainer&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must also grant the anyuid SCC to the service account under which the deployment runs. If nothing is configured, this is normally the default service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
The cluster is installed by the install command; the user only has to provide some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User-provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already be in place&lt;br /&gt;
* Manual installation; download the OVA file (in case of vSphere)&lt;br /&gt;
* masters created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
Changing from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can list more resource shortcuts with:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip || installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=663</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=663"/>
		<updated>2026-02-06T11:39:23Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* configmap */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Image handling =&lt;br /&gt;
&lt;br /&gt;
Look into images:&lt;br /&gt;
 oc image info registry.redhat.io:8443/ubi8/httpd-24:1-209 (-o json | jq -r .digest)&lt;br /&gt;
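The `jq -r .digest` step in parentheses just extracts one field from the JSON output; sketched here on an invented minimal image-info document:

```shell
# Invented minimal stand-in for the JSON that `oc image info -o json`
# returns; only the field the jq filter reads is included.
printf '%s' '{"name":"ubi8/httpd-24","digest":"sha256:1234abcd"}' | jq -r .digest
```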
&lt;br /&gt;
Update images on running deployment&lt;br /&gt;
 oc set image deployment/mydb mariadb-80=docker.io/ubuntu18/mysql-80:1-228&lt;br /&gt;
&lt;br /&gt;
Watching images directly on a node&lt;br /&gt;
 crictl images&lt;br /&gt;
 crictl ps --name httpd-24 -o yaml&lt;br /&gt;
 crictl images --digests &amp;lt;shasum&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you have an account for a registry:&lt;br /&gt;
&lt;br /&gt;
 skopeo login &amp;lt;registry&amp;gt;:8443 -u &amp;lt;username&amp;gt;&lt;br /&gt;
 skopeo inspect docker://registry.redhat.io:8443/ubi8/httpd-24:1-209&lt;br /&gt;
Add the &amp;quot;latest&amp;quot; tag to a dedicated image: &lt;br /&gt;
 skopeo copy docker://registry.redhat.io:8443/ubi8/httpd-24:1-215 docker://registry.redhat.io:8443/ubi8/httpd-24:latest&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.com/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift, as security context constraints do not allow apps to bind to privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image where port &amp;gt;= 1024 is used&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &#039;if [ $(date +%H) -gt 15 ]; then echo &amp;quot;Hands up, weekend!&amp;quot;; fi&#039;&lt;br /&gt;
(single quotes keep $(date +%H) from being expanded on the client at creation time)&lt;br /&gt;
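The shell logic inside the cronjob can be tested locally by extracting it into a function, with the hour injected as a parameter instead of coming from date +%H (the function name is made up):

```shell
# The cronjob's hour check, extracted for local testing.
weekend_greeting() {
  hour=$1
  # Greet only after 15:00 (the cronjob's condition):
  if [ "$hour" -gt 15 ]; then
    echo 'Hands up, weekend!'
  fi
}

weekend_greeting 16   # prints the greeting
weekend_greeting 9    # prints nothing
```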
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret ==&lt;br /&gt;
=== from String ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;test&amp;gt; --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
=== from file ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;sshkeys&amp;gt; --from-file id_rsa=/path-to/id_rsa --from-file id_rsa.pub=/path-to/id_rsa.pub&lt;br /&gt;
&lt;br /&gt;
=== as TLS secret ===&lt;br /&gt;
 $ oc create secret tls &amp;lt;secret-tls&amp;gt; --cert /tmp/mydomain.crt --key /tmp/mydomain.key&lt;br /&gt;
&lt;br /&gt;
=== Update Secret ===&lt;br /&gt;
 $ oc set data secret/&amp;lt;mysecret&amp;gt; --from-file /tmp/root-password&lt;br /&gt;
&lt;br /&gt;
=== Extract secret ===&lt;br /&gt;
 $ oc extract secret/&amp;lt;mysecret&amp;gt; --to /tmp/mysecret (--confirm)&lt;br /&gt;
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G )&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                   (in|without namespace)(openshift specific)(core api group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: control resulting files for secrets, passwords, privatekeys etc. before sending somewhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and PVCs are not included and must be queried separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
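The xargs pattern above runs one oc get per resource type; its mechanics can be seen anywhere with echo standing in for oc get:

```shell
# echo stands in for `oc get` so the xargs pattern is runnable
# without a cluster: each input word becomes one separate invocation.
printf '%s\n' pods services configmaps | xargs -n 1 echo listing
```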
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get Logs of a node (and special unit)&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs limits:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic picture use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show Uptime:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
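Such path expressions can be prototyped with jq on a saved pod list; the document below is an invented minimal example of what a pod list looks like:

```shell
# Invented minimal pod-list document (shape of `oc get pods -o json`):
podlist='{"items":[{"spec":{"containers":[{"name":"web","readinessProbe":{"httpGet":{"path":"/healthz","port":8080}}}]}}]}'

# jq equivalent of jsonpath {.items[0].spec.containers[0].readinessProbe}:
printf '%s' "$podlist" | jq -c '.items[0].spec.containers[0].readinessProbe'
```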
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Return to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Reduce/Upgrade cpu/mem requests ===&lt;br /&gt;
Reduce memory requests:&lt;br /&gt;
 $ oc set resources deployment/huge-mem --requests memory=250Mi&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
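The inline $(base64 -w0 ...) substitution can be checked locally by round-tripping a small payload (the config snippet is invented; base64 -w0 is GNU coreutils, on macOS use base64 without -w):

```shell
# Encode an invented config snippet exactly as the patch command does:
payload=$(printf 'resolve_timeout: 5m\n' | base64 -w0)
patch=$(printf '{"data": {"config.yaml": "%s"}}' "$payload")

# Decode it back out of the patch to confirm the round trip:
echo "$patch" | jq -r '.data."config.yaml"' | base64 -d
```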
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;in older versions the deployment resource had no rollout option -&amp;gt; you had to patch something to force a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Add probes ===&lt;br /&gt;
Configure readiness probe for deployment:&lt;br /&gt;
 $ oc set probe deployment/&amp;lt;testdeploy&amp;gt; --readiness --failure-threshold 7 --get-url http://:3000/api/health&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Autoscale Pods&lt;br /&gt;
 $ oc autoscale deployment/test --min 2 --max 10 --cpu-percent 80&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting)&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - it will be emptied anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
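The two-stage grep can be tried on hypothetical kubelet log lines (timestamps and messages invented for illustration):

```shell
# Invented kubelet log lines; real ones come from oc adm node-logs.
printf '%s\n' \
  'Aug 21 11:22:01 node kubelet: Readiness probe failed: HTTP 503' \
  'Aug 21 11:22:03 node kubelet: Liveness probe succeeded' \
  'Aug 21 11:23:00 node kubelet: Volume mounted' |
  grep -E 'Liveness|Readiness' | grep 'Aug 21 11:22'
```

Only the first two lines survive: the probe filter drops the volume message, and the timestamp filter drops anything outside the 11:22 minute.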
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(if necessary, delete secrets from the gathered data before sharing!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
The namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for leftover secrets that cannot be deleted.&lt;br /&gt;
Set their finalizers to an empty list:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
Should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the host.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines to the container entry under &amp;quot;spec&amp;quot; (the container name is just a placeholder):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: mycontainer&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must also grant the anyuid SCC to the service account under which the deployment runs. If nothing is configured, this is normally the default service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
The cluster is installed by the install command; the user only has to provide some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User-provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already be in place&lt;br /&gt;
* Manual installation; download the OVA file (in case of vSphere)&lt;br /&gt;
* masters created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
Changing from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can list more resource shortcuts with:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip || installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=662</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=662"/>
		<updated>2026-01-25T09:50:17Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Create Secret */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Image handling =&lt;br /&gt;
&lt;br /&gt;
Look into images:&lt;br /&gt;
 oc image info registry.redhat.io:8443/ubi8/httpd-24:1-209 (-o json | jq -r .digest)&lt;br /&gt;
&lt;br /&gt;
Update images on running deployment&lt;br /&gt;
 oc set image deployment/mydb mariadb-80=docker.io/ubuntu18/mysql-80:1-228&lt;br /&gt;
&lt;br /&gt;
Watching images directly on a node&lt;br /&gt;
 crictl images&lt;br /&gt;
 crictl ps --name httpd-24 -o yaml&lt;br /&gt;
 crictl images --digests &amp;lt;shasum&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you have an account for a registry:&lt;br /&gt;
&lt;br /&gt;
 skopeo login &amp;lt;registry&amp;gt;:8443 -u &amp;lt;username&amp;gt;&lt;br /&gt;
 skopeo inspect docker://registry.redhat.io:8443/ubi8/httpd-24:1-209&lt;br /&gt;
Add the &amp;quot;latest&amp;quot; tag to a dedicated image: &lt;br /&gt;
 skopeo copy docker://registry.redhat.io:8443/ubi8/httpd-24:1-215 docker://registry.redhat.io:8443/ubi8/httpd-24:latest&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.com/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images, web servers run on port 80, which leads to permission problems in OpenShift because security context constraints do not allow apps to bind to privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image where port &amp;gt;= 1024 is used&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(to remove this setting again, edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &amp;quot;if [ $(date +%H) -gt 15 ]; then echo &#039;Hands up, weekend!&#039;; fi&amp;quot;&lt;br /&gt;
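Note that $(date +%H) above sits inside double quotes, so the local shell expands it once at creation time; single quotes defer expansion to the container. A minimal local sketch of the difference (plain bash, no cluster needed):

```shell
# Double quotes: the command substitution runs now, in the local shell.
now="hour=$(date +%H)"

# Single quotes: the text stays literal and is only expanded later,
# e.g. by the shell inside the pod.
later='hour=$(date +%H)'

echo "$later"   # prints the literal string hour=$(date +%H)
```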
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret ==&lt;br /&gt;
=== from String ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;test&amp;gt; --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
=== from file ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;sshkeys&amp;gt; --from-file id_rsa=/path-to/id_rsa --from-file id_rsa.pub=/path-to/id_rsa.pub&lt;br /&gt;
&lt;br /&gt;
=== as TLS secret ===&lt;br /&gt;
 $ oc create secret tls &amp;lt;secret-tls&amp;gt; --cert /tmp/mydomain.crt --key /tmp/mydomain.key&lt;br /&gt;
&lt;br /&gt;
=== Update Secret ===&lt;br /&gt;
 $ oc set data secret/&amp;lt;mysecret&amp;gt; --from-file /tmp/root-password&lt;br /&gt;
&lt;br /&gt;
=== Extract secret ===&lt;br /&gt;
 $ oc extract secret/&amp;lt;mysecret&amp;gt; --to /tmp/mysecret (--confirm)&lt;br /&gt;
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G )&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and PVCs are not included and must be listed separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
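The xargs fan-out above runs one separate oc get per resource kind. Locally, with echo standing in for the real oc binary (resource names here are just illustrative), the mechanics look like this:

```shell
# Each input line becomes exactly one command invocation (-n 1);
# xargs appends the resource name as the final argument.
printf 'pods\nservices\nsecrets\n' \
  | xargs -n 1 echo oc get --show-kind --ignore-not-found -n mynamespace
```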
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (and a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs. capacity:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful! Only the free memory is shown, not the allocatable memory. For a more realistic presentation do:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show Uptime:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Reduce/increase CPU/memory requests ===&lt;br /&gt;
Reduce memory requests:&lt;br /&gt;
 $ oc set resources deployment/huge-mem --requests memory=250Mi&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
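The inner $(base64 -w0 ...) builds the secret payload before oc patch runs. A local sketch of how that JSON is assembled and verified (uses a temporary stand-in file instead of the real /tmp/alertmanager.yaml; -w0 is the GNU coreutils flag for unwrapped output; no cluster required):

```shell
# Write a tiny stand-in config (the real file is /tmp/alertmanager.yaml).
tmp=$(mktemp)
printf 'global:\n  resolve_timeout: 5m\n' > "$tmp"

# base64 -w0 produces a single unwrapped line, as required inside JSON.
payload=$(base64 -w0 "$tmp")
patch="{\"data\": {\"config.yaml\": \"$payload\"}}"
echo "$patch"

# Round-trip check: decoding the payload restores the original file.
echo "$payload" | base64 -d
rm -f "$tmp"
```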
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;older oc versions had no rollout restart for deployments -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
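The patch-based workaround works because the JSON payload is assembled by the local shell: the embedded date command inserts the current epoch timestamp into an annotation, which changes the pod template and forces a rollout. The payload construction, sketched locally:

```shell
# Build the same JSON payload the oc patch workaround sends;
# $(date +%s) is resolved by the local shell before oc ever sees it.
patch="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"last-restart\":\"$(date +%s)\"}}}}}"
echo "$patch"
```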
&lt;br /&gt;
=== Add probes ===&lt;br /&gt;
Configure readiness probe for deployment:&lt;br /&gt;
 $ oc set probe deployment/&amp;lt;testdeploy&amp;gt; --readiness --failure-threshold 7 --get-url http://:3000/api/health&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Autoscale Pods&lt;br /&gt;
 $ oc autoscale deployment/test --min 2 --max 10 --cpu-percent 80&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - drain cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of systemunit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(delete secrets beforehand if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
Namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for leftover secrets that cannot be deleted.&lt;br /&gt;
Remove their finalizers:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the node.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines to the container entry under the pod template spec (&amp;quot;containers&amp;quot; is a list, so securityContext belongs inside a list item):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;container&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must also grant the anyuid SCC to the service account under which the deployment runs. If nothing is configured, this is normally the default service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the installer; the user only provides basic information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User provisioned infrastructure cluster&lt;br /&gt;
* DNS and Loadbalancing must already be there&lt;br /&gt;
* Manual installation; download the OVA file (in the case of vSphere)&lt;br /&gt;
* masters created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, using preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no loadbalancer outage during update&lt;br /&gt;
&lt;br /&gt;
Changing from IPI -&amp;gt; UPI is not possible&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can get more shortcuts by typing:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip || installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=661</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=661"/>
		<updated>2026-01-25T09:32:41Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* from file */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Image handling =&lt;br /&gt;
&lt;br /&gt;
Look into images:&lt;br /&gt;
 oc image info registry.redhat.io:8443/ubi8/httpd-24:1-209 (-o json | jq -r .digest)&lt;br /&gt;
&lt;br /&gt;
Update images on running deployment&lt;br /&gt;
 oc set image deployment/mydb mariadb-80=docker.io/ubuntu18/mysql-80:1-228&lt;br /&gt;
&lt;br /&gt;
Watching images directly on a node&lt;br /&gt;
 crictl images&lt;br /&gt;
 crictl ps --name httpd-24 -o yaml&lt;br /&gt;
 crictl images --digests &amp;lt;shasum&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you have an account for a registry:&lt;br /&gt;
&lt;br /&gt;
 skopeo login &amp;lt;registry&amp;gt;:8443 -u &amp;lt;username&amp;gt;&lt;br /&gt;
 skopeo inspect docker://registry.redhat.io:8443/ubi8/httpd-24:1-209&lt;br /&gt;
Add the &amp;quot;latest&amp;quot; tag to a dedicated image: &lt;br /&gt;
 skopeo copy docker://registry.redhat.io:8443/ubi8/httpd-24:1-215 docker://registry.redhat.io:8443/ubi8/httpd-24:latest&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.com/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images, web servers run on port 80, which leads to permission problems in OpenShift because security context constraints do not allow apps to bind to privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image where port &amp;gt;= 1024 is used&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(to remove this setting again, edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &amp;quot;if [ $(date +%H) -gt 15 ]; then echo &#039;Hands up, weekend!&#039;; fi&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret ==&lt;br /&gt;
=== from String ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;test&amp;gt; --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
=== from file ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;sshkeys&amp;gt; --from-file id_rsa=/path-to/id_rsa --from-file id_rsa.pub=/path-to/id_rsa.pub&lt;br /&gt;
&lt;br /&gt;
=== as TLS secret ===&lt;br /&gt;
 $ oc create secret tls &amp;lt;secret-tls&amp;gt; --cert /tmp/mydomain.crt --key /tmp/mydomain.key&lt;br /&gt;
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G )&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs will be shown outside resources)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (optionally of a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs. capacity:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful! Only the free memory is shown, not the allocatable memory. For a more realistic view use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show Uptime:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers:&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Reduce/Upgrade cpu/mem requests ===&lt;br /&gt;
Reduce memory requests:&lt;br /&gt;
 $ oc set resources deployment/huge-mem --requests memory=250Mi&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten !&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;the deployment resource has no rollout option -&amp;gt; You must patch something before it restarts e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Add probes ===&lt;br /&gt;
Configure readiness probe for deployment:&lt;br /&gt;
 $ oc set probe deployment/&amp;lt;testdeploy&amp;gt; --readiness --failure-threshold 7 --get-url http://:3000/api/health&lt;br /&gt;
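&lt;br /&gt;
The command writes roughly this probe into the container spec (a sketch derived from the options above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
readinessProbe:&lt;br /&gt;
  failureThreshold: 7&lt;br /&gt;
  httpGet:&lt;br /&gt;
    path: /api/health&lt;br /&gt;
    port: 3000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;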
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Autoscale Pods&lt;br /&gt;
 $ oc autoscale deployment/test --min 2 --max 10 --cpu-percent 80&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Drain a node and put it into maintenance mode (e.g. before a reboot):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it; draining cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of systemunit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/Readiness Probes of all pods in certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(possibly delete secrets first!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
The namespace gets stuck in the Terminating status.&lt;br /&gt;
&lt;br /&gt;
Watch out for leftover secrets that cannot be deleted.&lt;br /&gt;
Clear their finalizers:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the system.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines under the &amp;quot;spec&amp;quot; statement:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;containername&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must grant the anyuid SCC to the serviceaccount under which the deployment runs. If nothing is configured, this is normally the default serviceaccount:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the install command; the user only provides some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User provisioned infrastructure cluster&lt;br /&gt;
* DNS and loadbalancing must already be in place&lt;br /&gt;
* Manual installation; download the ova file (in case of vSphere)&lt;br /&gt;
* Masters are created manually&lt;br /&gt;
* Workers are recommended&lt;br /&gt;
* &#039;&#039;no&#039;&#039; keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, uses preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no loadbalancer outage during updates&lt;br /&gt;
&lt;br /&gt;
Changing from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can get more shortcuts by typing:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip ||installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=660</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=660"/>
		<updated>2026-01-25T09:32:01Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Create Secret */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 $ oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Image handling =&lt;br /&gt;
&lt;br /&gt;
Look into images:&lt;br /&gt;
 oc image info registry.redhat.io:8443/ubi8/httpd-24:1-209 (-o json | jq -r .digest)&lt;br /&gt;
&lt;br /&gt;
Update images on a running deployment:&lt;br /&gt;
 oc set image deployment/mydb mariadb-80=docker.io/ubuntu18/mysql-80:1-228&lt;br /&gt;
&lt;br /&gt;
Watch images directly on a node:&lt;br /&gt;
 crictl images&lt;br /&gt;
 crictl ps --name httpd-24 -o yaml&lt;br /&gt;
 crictl images --digests &amp;lt;shasum&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you have an account for a registry:&lt;br /&gt;
&lt;br /&gt;
 skopeo login &amp;lt;registry&amp;gt;:8443 -u &amp;lt;username&amp;gt;&lt;br /&gt;
 skopeo inspect docker://registry.redhat.io:8443/ubi8/httpd-24:1-209&lt;br /&gt;
Add the &amp;quot;latest&amp;quot; tag to a specific image:&lt;br /&gt;
 skopeo copy docker://registry.redhat.io:8443/ubi8/httpd-24:1-215 docker://registry.redhat.io:8443/ubi8/httpd-24:latest&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with labels and parameters&lt;br /&gt;
&lt;br /&gt;
from a template:&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from an image:&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
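&lt;br /&gt;
The command is roughly equivalent to this manifest (a sketch; oc derives the container name from the image name):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: demo-pod&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 1&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: demo-pod&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: demo-pod&lt;br /&gt;
    spec:&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: mysql-80&lt;br /&gt;
        image: registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
        ports:&lt;br /&gt;
        - containerPort: 3306&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;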
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift, as the security context constraints do not allow apps to run on privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image where port &amp;gt;= 1024 is used&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &amp;quot;if [ $(date +%H) -gt 15 ]; then echo &#039;Hands up, weekend!&#039;; fi&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
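&lt;br /&gt;
The job above is roughly equivalent to this manifest (a sketch; restartPolicy is required for jobs):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: batch/v1&lt;br /&gt;
kind: Job&lt;br /&gt;
metadata:&lt;br /&gt;
  name: testjob&lt;br /&gt;
spec:&lt;br /&gt;
  template:&lt;br /&gt;
    spec:&lt;br /&gt;
      restartPolicy: Never&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: testjob&lt;br /&gt;
        image: registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
        command: [&amp;quot;/bin/bash&amp;quot;, &amp;quot;-c&amp;quot;, &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;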
&lt;br /&gt;
== Create Secret ==&lt;br /&gt;
=== from String ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;test&amp;gt; --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
=== from file ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;mysecret&amp;gt; --from-file id_rsa=/path-to/id_rsa --from-file id_rsa.pub=/path-to/id_rsa.pub&lt;br /&gt;
&lt;br /&gt;
=== as TLS secret ===&lt;br /&gt;
 $ oc create secret tls &amp;lt;secret-tls&amp;gt; --cert /tmp/mydomain.crt --key /tmp/mydomain.key&lt;br /&gt;
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
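&lt;br /&gt;
The resulting configmap looks roughly like this (a sketch; the file content is abbreviated):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: ConfigMap&lt;br /&gt;
metadata:&lt;br /&gt;
  name: mymap&lt;br /&gt;
data:&lt;br /&gt;
  dump.sql: |&lt;br /&gt;
    (content of /tmp/dump.sql)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;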
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G)&lt;br /&gt;
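&lt;br /&gt;
This roughly corresponds to the following fragment in the deployment (a sketch; the names come from the placeholders above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  template:&lt;br /&gt;
    spec:&lt;br /&gt;
      containers:&lt;br /&gt;
      - volumeMounts:&lt;br /&gt;
        - name: mypvc-vol&lt;br /&gt;
          mountPath: /var/lib/mysql&lt;br /&gt;
      volumes:&lt;br /&gt;
      - name: mypvc-vol&lt;br /&gt;
        persistentVolumeClaim:&lt;br /&gt;
          claimName: mypvc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;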
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs will be shown outside resources)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (optionally of a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs. capacity:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful! Only the free memory is shown, not the allocatable memory. For a more realistic view use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show Uptime:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers:&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Reduce/Upgrade cpu/mem requests ===&lt;br /&gt;
Reduce memory requests:&lt;br /&gt;
 $ oc set resources deployment/huge-mem --requests memory=250Mi&lt;br /&gt;
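&lt;br /&gt;
The command sets roughly this fragment on every container of the deployment (a sketch):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  template:&lt;br /&gt;
    spec:&lt;br /&gt;
      containers:&lt;br /&gt;
      - resources:&lt;br /&gt;
          requests:&lt;br /&gt;
            memory: 250Mi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;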
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten !&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
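A quick local sanity check of the base64 round trip used in the patch above (file path and content are placeholders, not the real Alertmanager config):

```shell
# Encode a file the same way the patch command does, then decode it back.
# /tmp/alertmanager-check.yaml is a made-up stand-in for the real config file.
printf 'resolve_timeout: 5m\n' > /tmp/alertmanager-check.yaml
encoded=$(base64 -w0 /tmp/alertmanager-check.yaml)
decoded=$(echo "$encoded" | base64 -d)
echo "$decoded"
```

If the decoded output matches the file, the same "$(base64 -w0 ...)" expansion can safely be embedded in the oc patch command.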
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete workaround:&lt;br /&gt;
&amp;lt;br&amp;gt;before the rollout restart option existed, you had to patch something to force a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
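The backtick expression in the obsolete workaround expands to the current Unix timestamp, so every invocation writes a new annotation value and thereby forces a new rollout; a minimal sketch:

```shell
# date +'%s' prints seconds since the Unix epoch - a value that changes on
# every call, which is what makes the patched annotation unique each time.
ts=$(date +'%s')
echo "$ts"
```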
&lt;br /&gt;
=== Add probes ===&lt;br /&gt;
Configure readiness probe for deployment:&lt;br /&gt;
 $ oc set probe deployment/&amp;lt;testdeploy&amp;gt; --readiness --failure-threshold 7 --get-url http://:3000/api/health&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Autoscale Pods&lt;br /&gt;
 $ oc autoscale deployment/test --min 2 --max 10 --cpu-percent 80&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - drain cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
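The alternation pattern can be checked locally against some made-up kubelet lines (a sketch; the sample log lines are invented, not real kubelet output):

```shell
# grep -E enables extended regexes, so 'Liveness|Readiness' matches either word.
printf 'kubelet: Liveness probe failed\nkubelet: Readiness probe succeeded\nkubelet: other event\n' \
  | grep -E 'Liveness|Readiness'
```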
&lt;br /&gt;
Disk space allocation of the logging stack (Elasticsearch):&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(possibly remove contained secrets before sharing!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information for a specific resource and time period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
A namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for leftover secrets that cannot be deleted.&lt;br /&gt;
Clear their finalizers:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the node.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add a securityContext to the container entry under the pod template&#039;s &amp;quot;spec&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: mycontainer&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must grant the anyuid SCC to the service account the deployment runs under. If nothing is configured, this is normally the default service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the installer; the user only provides some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already exist&lt;br /&gt;
* Manual installation; download the OVA file (in the case of vSphere)&lt;br /&gt;
* Masters created manually&lt;br /&gt;
* Workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, uses preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
A change from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can list more resource shortcuts with:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip ||installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=659</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=659"/>
		<updated>2026-01-25T09:31:07Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* from String */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (no login required)&lt;br /&gt;
* registry.redhat.io (login required)&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Image handling =&lt;br /&gt;
&lt;br /&gt;
Inspect images:&lt;br /&gt;
 oc image info registry.redhat.io:8443/ubi8/httpd-24:1-209 (-o json | jq -r .digest)&lt;br /&gt;
&lt;br /&gt;
Update the image of a running deployment:&lt;br /&gt;
 oc set image deployment/mydb mariadb-80=docker.io/ubuntu18/mysql-80:1-228&lt;br /&gt;
&lt;br /&gt;
Inspect images directly on a node:&lt;br /&gt;
 crictl images&lt;br /&gt;
 crictl ps --name httpd-24 -o yaml&lt;br /&gt;
 crictl images --digests &amp;lt;shasum&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you have an account for a registry:&lt;br /&gt;
&lt;br /&gt;
 skopeo login &amp;lt;registry&amp;gt;:8443 -u &amp;lt;username&amp;gt;&lt;br /&gt;
 skopeo inspect docker://registry.redhat.io:8443/ubi8/httpd-24:1-209&lt;br /&gt;
Add the &amp;quot;latest&amp;quot; tag to a specific image:&lt;br /&gt;
 skopeo copy docker://registry.redhat.io:8443/ubi8/httpd-24:1-215 docker://registry.redhat.io:8443/ubi8/httpd-24:latest&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.com/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
Web servers in some images run on port 80, which leads to permission problems in OpenShift, as security context constraints do not allow apps to run on privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again, edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &amp;quot;if [ \$(date +%H) -gt 15 ]; then echo &#039;Hands up, weekend!&#039;; fi&amp;quot;&lt;br /&gt;
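The shell logic inside the cronjob can be tried out locally (a sketch; note that the date command substitution must be escaped in the oc command so it is evaluated inside the job, not by your local shell at creation time):

```shell
# date +%H prints the hour zero-padded, e.g. "08"; strip a leading zero so
# the test builtin does not choke on it, then apply the cronjob's check.
hour=$(date +%H)
hour=${hour#0}
if [ "$hour" -gt 15 ]; then echo 'Hands up, weekend!'; fi
```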
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret ==&lt;br /&gt;
=== from String ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;test&amp;gt; --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
=== from file ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;my-keys&amp;gt; --from-file id_rsa=/path-to/id_rsa --from-file id_rsa.pub=/path-to/id_rsa.pub&lt;br /&gt;
&lt;br /&gt;
=== as TLS secret ===&lt;br /&gt;
 $ oc create secret tls secret-tls --cert /tmp/mydomain.crt --key /tmp/mydomain.key&lt;br /&gt;
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt; /var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G )&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not included; list them separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (and a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs limits:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic view run:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show Uptime:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Reduce/Upgrade cpu/mem requests ===&lt;br /&gt;
Reduce memory requests:&lt;br /&gt;
 $ oc set resources deployment/huge-mem --requests memory=250Mi&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the previous content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create a YAML file with the following content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete workaround:&lt;br /&gt;
&amp;lt;br&amp;gt;before the rollout restart option existed, you had to patch something to force a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Add probes ===&lt;br /&gt;
Configure readiness probe for deployment:&lt;br /&gt;
 $ oc set probe deployment/&amp;lt;testdeploy&amp;gt; --readiness --failure-threshold 7 --get-url http://:3000/api/health&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Autoscale Pods&lt;br /&gt;
 $ oc autoscale deployment/test --min 2 --max 10 --cpu-percent 80&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - drain cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Disk space allocation of the logging stack (Elasticsearch):&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(possibly remove contained secrets before sharing!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information for a specific resource and time period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
A namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for leftover secrets that cannot be deleted.&lt;br /&gt;
Clear their finalizers:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the node.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add a securityContext to the container entry under the pod template&#039;s &amp;quot;spec&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: mycontainer&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must grant the anyuid SCC to the service account the deployment runs under. If nothing is configured, this is normally the default service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the installer; the user only provides some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User provisioned infrastructure cluster&lt;br /&gt;
* DNS and Loadbalancing must already be there&lt;br /&gt;
* Installation manually, download ova file (in case of vSphere)&lt;br /&gt;
* master created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: installation more simple, using preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no loadbalancer outage during update&lt;br /&gt;
&lt;br /&gt;
Change from IPI -&amp;gt; UPI not possible&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can list these and more resource short names with:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip || installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=658</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=658"/>
		<updated>2026-01-25T09:29:36Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Create Secret from String */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 # oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (no login required)&lt;br /&gt;
* registry.redhat.io (login required)&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Image handling =&lt;br /&gt;
&lt;br /&gt;
Inspect an image (optionally just its digest):&lt;br /&gt;
 oc image info registry.redhat.io:8443/ubi8/httpd-24:1-209 (-o json | jq -r .digest)&lt;br /&gt;
&lt;br /&gt;
Update the image of a running deployment:&lt;br /&gt;
 oc set image deployment/mydb mariadb-80=docker.io/ubuntu18/mysql-80:1-228&lt;br /&gt;
&lt;br /&gt;
Inspect images directly on a node:&lt;br /&gt;
 crictl images&lt;br /&gt;
 crictl ps --name httpd-24 -o yaml&lt;br /&gt;
 crictl images --digests &amp;lt;shasum&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you have an account for a registry:&lt;br /&gt;
&lt;br /&gt;
 skopeo login &amp;lt;registry&amp;gt;:8443 -u &amp;lt;username&amp;gt;&lt;br /&gt;
 skopeo inspect docker://registry.redhat.io:8443/ubi8/httpd-24:1-209&lt;br /&gt;
Add the &amp;quot;latest&amp;quot; tag to a specific image:&lt;br /&gt;
 skopeo copy docker://registry.redhat.io:8443/ubi8/httpd-24:1-215 docker://registry.redhat.io:8443/ubi8/httpd-24:latest&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with the help of podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and config maps, see below)&lt;br /&gt;
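Instead of a literal value, the container env can reference a secret key, roughly like this (a sketch; the secret name &#039;&#039;mariadb-credentials&#039;&#039; and its key are assumptions):&lt;br /&gt;

```yaml
# Deployment spec fragment (sketch): pull MARIADB_PASSWORD from a secret
# instead of storing it as a plain environment variable.
env:
- name: MARIADB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mariadb-credentials   # assumed secret name
      key: password               # assumed key inside the secret
```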
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem: web server on a privileged port ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift, as the security context constraints do not allow apps to bind to privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* either choose an image where a port &amp;gt;= 1024 is used&lt;br /&gt;
* or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(to get rid of this setting again you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &#039;if [ $(date +%H) -gt 15 ]; then echo &amp;quot;Hands up, weekend!&amp;quot;; fi&#039;&lt;br /&gt;
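The shell logic inside that cronjob can be rehearsed locally; &#039;&#039;check_hour&#039;&#039; is a made-up helper for the sketch, not part of the image:&lt;br /&gt;

```shell
# Local sketch of the cronjob's hour check; check_hour is a hypothetical helper.
check_hour() {
  # $1: hour of day (0-23); prints the greeting after 15:00
  if [ "$1" -gt 15 ]; then echo 'Hands up, weekend!'; fi
}
check_hour "$(date +%H)"
```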
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret ==&lt;br /&gt;
=== from String ===&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
=== from file ===&lt;br /&gt;
 $ oc create secret generic &amp;lt;mysecret&amp;gt; --from-file id_rsa=/path-to/id_rsa --from-file id_rsa.pub=/path-to/id_rsa.pub&lt;br /&gt;
&lt;br /&gt;
=== as TLS secret ===&lt;br /&gt;
 $ oc create secret tls secret-tls --cert /tmp/mydomain.crt --key /tmp/mydomain.key&lt;br /&gt;
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G)&lt;br /&gt;
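For orientation, the command above results in roughly this fragment in the deployment (a sketch; the names are taken from the placeholders):&lt;br /&gt;

```yaml
# Deployment spec fragment (sketch) produced by 'oc set volume --add --type pvc'
spec:
  template:
    spec:
      containers:
      - name: mydeployment          # existing container
        volumeMounts:
        - name: mypvc-vol           # --name
          mountPath: /var/lib/mysql # --mount-path
      volumes:
      - name: mypvc-vol
        persistentVolumeClaim:
          claimName: mypvc          # --claim-name
```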
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not included and must be queried separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (and a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources with capacity:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic view use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show all machines with their phase and age:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Check whether a machineconfigpool is paused:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and their containers:&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get a pod&#039;s readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
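The same extraction can be tried offline with jq on a saved &#039;&#039;-o json&#039;&#039; dump; the sample document below is made up:&lt;br /&gt;

```shell
# Offline sketch: extract the readinessProbe with jq from a saved pod list.
# SAMPLE is a made-up stand-in for 'oc get pods -o json' output.
SAMPLE='{"items":[{"spec":{"containers":[{"readinessProbe":{"httpGet":{"path":"/api/health","port":3000}}}]}}]}'
PROBE=$(printf '%s' "$SAMPLE" | jq -c '.items[0].spec.containers[0].readinessProbe')
echo "$PROBE"
```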
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit the initial installation configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (or &#039;&#039;oc edit&#039;&#039; instead of &#039;&#039;oc get&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
Check user rights on resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Reduce/increase CPU/memory requests ===&lt;br /&gt;
Reduce memory requests:&lt;br /&gt;
 $ oc set resources deployment/huge-mem --requests memory=250Mi&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch a resource with the help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create a yaml file with this content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
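The base64 round trip can be verified locally before patching. A sketch with a shortened config file; the final &#039;&#039;oc patch&#039;&#039; against the cluster is not part of the sketch:&lt;br /&gt;

```shell
# Sketch: build the patch payload and check that the embedded base64
# decodes back to the original file before sending it with 'oc patch'.
printf 'global:\n  resolve_timeout: 5m\n' > /tmp/alertmanager.yaml
B64=$(base64 -w0 /tmp/alertmanager.yaml)
PAYLOAD='{"data": {"config.yaml": "'$B64'"}}'
# Round trip: decoding the embedded value must give back the original file.
DECODED=$(printf '%s' "$B64" | base64 -d)
echo "$PAYLOAD"
```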
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;older versions of the deployment resource had no rollout option -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Add probes ===&lt;br /&gt;
Configure readiness probe for deployment:&lt;br /&gt;
 $ oc set probe deployment/&amp;lt;testdeploy&amp;gt; --readiness --failure-threshold 7 --get-url http://:3000/api/health&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Autoscale Pods&lt;br /&gt;
 $ oc autoscale deployment/test --min 2 --max 10 --cpu-percent 80&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - draining cordons it anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container):&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. one in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same for all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search for specific entries:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
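The jq filters can be tried offline on sample audit records; the two records below are made up:&lt;br /&gt;

```shell
# Offline sketch: run the same jq 'select' on two made-up audit records.
REC1='{"verb":"delete","objectRef":{"resource":"secrets"}}'
REC2='{"verb":"get","objectRef":{"resource":"pods"}}'
DELETES=$(printf '%s\n' "$REC1" "$REC2" | jq -c 'select(.verb == "delete")')
echo "$DELETES"
```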
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(delete secrets from it first, if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
A namespace gets stuck in status &#039;&#039;Terminating&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Watch out for secrets that are left over and not deletable.&lt;br /&gt;
Set their finalizers to an empty list:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the system.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines to the container entry under the &amp;quot;spec&amp;quot; section:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  template:&lt;br /&gt;
    spec:&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: mycontainer&lt;br /&gt;
        securityContext:&lt;br /&gt;
          runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must also grant the &#039;&#039;anyuid&#039;&#039; SCC to the service account under which the deployment runs. If nothing is configured, this is normally the &#039;&#039;default&#039;&#039; service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the install command; the user only has to provide some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User-provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already be in place&lt;br /&gt;
* Manual installation; download the OVA file (in case of vSphere)&lt;br /&gt;
* Masters created manually&lt;br /&gt;
* Workers recommended&lt;br /&gt;
* &#039;&#039;no&#039;&#039; keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, using preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
Changing from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can list these and more resource short names with:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip || installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=Misc_(multimedia)&amp;diff=657</id>
		<title>Misc (multimedia)</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=Misc_(multimedia)&amp;diff=657"/>
		<updated>2025-12-19T21:31:39Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Musik-CD brennen */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Transferring photos from a smartphone ==&lt;br /&gt;
* Quick and stress-free with &#039;&#039;&#039;jmtpfs&#039;&#039;&#039;, which can be installed as a package.&amp;lt;br&amp;gt;&lt;br /&gt;
* Connect the smartphone to the micro-USB port and choose the type of data transfer:&lt;br /&gt;
 &amp;quot;Transfer data (mtp)&amp;quot;&lt;br /&gt;
* Mount the data, e.g. to /mnt:&lt;br /&gt;
 jmtpfs /mnt&lt;br /&gt;
and copy from there.&lt;br /&gt;
&lt;br /&gt;
Note: If the following error occurs under KDE&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Listing raw device(s)&lt;br /&gt;
Device 0 (VID=...) is a Samsung Galaxy models (MTP).&lt;br /&gt;
Found 1 device(s):&lt;br /&gt;
Samsung: Galaxy models (MTP) (...) @ bus 1, dev 10&lt;br /&gt;
Attempting to connect device(s)&lt;br /&gt;
libusb_claim_interface() reports device is busy, likely in use by GVFS or KDE MTP device handling already&lt;br /&gt;
LIBMTP PANIC: Unable to initialize device&lt;br /&gt;
Unable to open raw device 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
you can try the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 killall kiod5&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
killall gvfs-udisk2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
under xfce.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, under xfce it apparently is no longer possible to mount a smartphone with the tools mentioned above. Another option is the Android debugger &#039;&#039;&#039;adb&#039;&#039;&#039;.&amp;lt;br&amp;gt;&lt;br /&gt;
Install it and take a look at the contents of the storage card:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo apt install adb&lt;br /&gt;
adb devices&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(At least one device should be listed)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
adb shell&lt;br /&gt;
ls /sdcard/DCIM&lt;br /&gt;
ctrl+d&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In addition, a few settings have to be changed on the Android device:&lt;br /&gt;
* Enable developer mode (tap the serial number 7x)&lt;br /&gt;
* Enable USB debugging (under the developer options)&lt;br /&gt;
* After plugging in the USB cable, choose data transfer (instead of charging only)&lt;br /&gt;
&lt;br /&gt;
A detailed guide for these steps is available here:&amp;lt;br&amp;gt;&lt;br /&gt;
https://droidwin.com/fix-adb-device-not-found-error/&lt;br /&gt;
&lt;br /&gt;
With&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
adb pull &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
the files can now be pulled down.&lt;br /&gt;
&lt;br /&gt;
You can also pass arguments to the adb shell, e.g.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
adb shell ls /sdcard/DCIM/Camera&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Annotations ===&lt;br /&gt;
* Under xfce4, photos are best downloaded with jmtpfs. Beforehand, competing processes with access to the camera have to be killed in an endless loop&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
while sleep 0.1; do pkill kiod5; pkill gvfs-udisks2; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and the device mounted in another shell:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir /tmp/cam&lt;br /&gt;
jmtpfs /tmp/cam&lt;br /&gt;
Device 0 (VID=18d1 and PID=4ee2) is a Google Inc Nexus/Pixel (MTP+ADB).&lt;br /&gt;
Android device detected, assigning default bug flags&lt;br /&gt;
ls /tmp/cam/Interner\ gemeinsamer\ Speicher/DCIM/&lt;br /&gt;
Camera  PeakLens&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Under KDE you can help yourself with adb (see above) by killing kiod+gvfs in an endless loop (see above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 while sleep 0.1; do pkill kiod5; pkill gvfs-udisks2; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Downloading too many photos at once aborts at some point. Here are a few commands to get a selection downloaded (e.g. Jan. 2024):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 adb shell ls /sdcard/DCIM/Camera|grep IMG_2024 &amp;gt;/tmp/img.txt &lt;br /&gt;
 for IMG in $(grep 202401 /tmp/img.txt) ; do adb pull /sdcard/DCIM/Camera/$IMG CAM/; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Creating a bootable USB stick ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Prerequisite:&#039;&#039;&#039; a folder named &#039;&#039;ISO&#039;&#039; containing bootable files (e.g. for a Windows image)&lt;br /&gt;
&lt;br /&gt;
An ISO image can be created very easily with the tool &#039;&#039;&#039;xorrisofs&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;First it has to be installed:&lt;br /&gt;
 $ sudo apt install xorriso&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Creating the ISO then works as follows:&lt;br /&gt;
 $ xorrisofs -r -J -o ./windows10.iso ./ISO&lt;br /&gt;
&lt;br /&gt;
The ISO image can then be copied to the USB stick with &amp;quot;dd&amp;quot;. Depending on the device in question, this can look like:&lt;br /&gt;
 # dd if=windows.iso of=/dev/sde&lt;br /&gt;
Attention: do &#039;&#039;&#039;not&#039;&#039;&#039; create/specify a partition!&lt;br /&gt;
&lt;br /&gt;
==Extracting tracks from an audio CD==&lt;br /&gt;
&lt;br /&gt;
 $  icedax -D &amp;lt;devicename&amp;gt; -t 1 song1.wav&lt;br /&gt;
e.g.&lt;br /&gt;
 $ icedax -D /dev/cdrom -t 2 song2.wav&lt;br /&gt;
 $ icedax -D /dev/cdrom0 -t 3+4 mixed.wav&lt;br /&gt;
(turns tracks 3 and 4 into one big file)&lt;br /&gt;
&lt;br /&gt;
If you prefer mp3s, you can use &#039;&#039;&#039;cdda2mp3&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==Burning an audio CD==&lt;br /&gt;
For an audio CD, NO ISO image is created!&amp;lt;br&amp;gt;&lt;br /&gt;
Instead, save all wav files in one folder, change into it, and&lt;br /&gt;
run:&lt;br /&gt;
 # cdrecord -dao *.wav&lt;br /&gt;
&lt;br /&gt;
In newer Debian releases, cdrecord is replaced by &#039;&#039;wodim&#039;&#039;. Examples:&lt;br /&gt;
&lt;br /&gt;
Data:&lt;br /&gt;
 $ /usr/bin/wodim -v gracetime=2 dev=/dev/sr0 speed=4 -sao driveropts=burnfree -data &amp;lt;image.iso&amp;gt;&lt;br /&gt;
Music:&lt;br /&gt;
 $ wodim -v gracetime=2 dev=/dev/cdrom speed=8 -dao -pad driveropts=burnfree *&lt;br /&gt;
&lt;br /&gt;
Unfortunately, wodim is somewhat picky about sample rates. If a wav file is not at 44100 Hz, it will not be burned. An example of how to convert it:&lt;br /&gt;
 $  ffmpeg -i &amp;quot;INFILE&amp;quot; -vn -ac 2 -ar 44100 &amp;lt;OUTFILE.wav&amp;gt;&lt;br /&gt;
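To fix a whole folder at once, the ffmpeg call above can be wrapped in a loop. A minimal sketch (dry run: the commands are only printed; remove the echo to actually convert):&lt;br /&gt;

```shell
# Print an ffmpeg resample command for every .wav in the current directory.
# Converted files would land in converted/.
mkdir -p converted
touch track1.wav track2.wav           # placeholder files for the demo
for f in *.wav; do
  echo ffmpeg -i "$f" -ac 2 -ar 44100 "converted/$f"
done
```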
&lt;br /&gt;
==Burning a DVD==&lt;br /&gt;
 # growisofs -R -J -Z /dev/dvd /path/to/file-or-directory&lt;br /&gt;
&lt;br /&gt;
where -R=Rock Ridge, -J=Joliet and -Z=device (may vary from system to system). If you are not sure, you can first simulate the burn with -dry-run.&amp;lt;br&amp;gt;&lt;br /&gt;
A very nice overview of DVDs in general and under Linux is available here:&amp;lt;br&amp;gt;&lt;br /&gt;
http://www.rakekniven.de/linux/k-base/dvd-brennen.php&lt;br /&gt;
&lt;br /&gt;
== Extracting from a DVD ==&lt;br /&gt;
In testing, handbrake proved useful for creating an mp4 from a DVD. It can be installed as a package. Only the sound comes out a bit quiet; it can be boosted with [[#ffmpeg|ffmpeg]] (see below).&lt;br /&gt;
&lt;br /&gt;
== Calibrating a monitor ==&lt;br /&gt;
&lt;br /&gt;
Highly recommended is the tool DisplayCAL [https://displaycal.net/], which supports common calibration devices such as the Spyder.&amp;lt;br&amp;gt;&lt;br /&gt;
Hardware used:&amp;lt;br&amp;gt;&lt;br /&gt;
[https://www.dpreview.com/articles/3856869836/spyder3 Spyder 3]&lt;br /&gt;
&lt;br /&gt;
A comprehensive guide is available here:&lt;br /&gt;
&lt;br /&gt;
https://www.reallinuxuser.com/how-to-color-calibrate-your-monitor-in-linux/&lt;br /&gt;
&lt;br /&gt;
=== Note ===&lt;br /&gt;
&lt;br /&gt;
The original version uses Python 2 and is therefore no longer supported on common Linux distributions. There is, however, a successor project: https://github.com/eoyilmaz/displaycal-py3&lt;br /&gt;
and, based on it, a Debian package:&lt;br /&gt;
 $ sudo apt install displaycal&lt;br /&gt;
&lt;br /&gt;
The notes below were tested with the old version and Python 2.&lt;br /&gt;
&lt;br /&gt;
=== Pro tip ===&lt;br /&gt;
&lt;br /&gt;
* If you get poor or color-cast results, try 5300K as the white point.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot 2021-02-23 19-55-44.png|800px]]&lt;br /&gt;
&lt;br /&gt;
* At the end, a &#039;&#039;&#039;profile&#039;&#039;&#039; (.icc file) is generated. It can be loaded with xcalib:&lt;br /&gt;
&lt;br /&gt;
 $ xcalib &amp;lt;profilename&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(and a bit of theory on the configurable parameters:&lt;br /&gt;
&lt;br /&gt;
https://displaycal.net/#colorimeter-corrections )&lt;br /&gt;
&lt;br /&gt;
== ffmpeg ==&lt;br /&gt;
&#039;&#039;&#039;The&#039;&#039;&#039; tool for editing movies (and sound). A few examples:&lt;br /&gt;
&lt;br /&gt;
* Rotate a movie by 180°:&lt;br /&gt;
 $ ffmpeg -i input.mp4 -vf &amp;quot;transpose=2,transpose=2&amp;quot; output.mp4&lt;br /&gt;
&lt;br /&gt;
* Remove the sound:&lt;br /&gt;
 $ ffmpeg -i MVI_1747.MOV -vcodec copy -an MVI_1747_mute.MOV&lt;br /&gt;
* Extract the sound (save as mp3):&lt;br /&gt;
 # ffmpeg -i MVI_1751.MOV MVI_1751.mp3&lt;br /&gt;
* Change the volume (e.g. 1.5x):&lt;br /&gt;
 # ffmpeg -i input.wav -af &amp;quot;volume=1.5&amp;quot; output.wav&lt;br /&gt;
&lt;br /&gt;
* Cut off everything after position n:&lt;br /&gt;
 #  ffmpeg -i MVI_1752.mp3 -to 00:00:27  MVI_1752_short.mp3&lt;br /&gt;
&lt;br /&gt;
* Cut out a section and save it as mp3 at the same time:&lt;br /&gt;
 # ffmpeg -i Buena_Vista_Social_Club.mp4 -ss 00:04:16 -to 00:09:16 bvsc2.mp3&lt;br /&gt;
&lt;br /&gt;
* Add audio to a video:&lt;br /&gt;
 # ffmpeg -i MVI_1747_mute.MOV -i MVI_1752a.mp3 -codec copy -shortest MVI_1747_sound.MOV&lt;br /&gt;
&lt;br /&gt;
* Merge 2 sound files:&lt;br /&gt;
 #  ffmpeg -i &amp;quot;concat:MVI_1751.mp3|MVI_1753.mp3&amp;quot; -c copy MVI_1752.mp3&lt;br /&gt;
(For larger numbers of files, sox is a good alternative:&amp;lt;br&amp;gt;&lt;br /&gt;
 # sox file1.wav file2.wav ... filen.wav newfile.wav&lt;br /&gt;
)&lt;br /&gt;
* Merge 2 video files&lt;br /&gt;
If the example above cannot be carried over to videos because of differing encodings:&lt;br /&gt;
 $ ffmpeg -i file1.mp4 -i file2.mp4 -filter_complex &amp;quot;[0:v][0:a][1:v][1:a] concat=n=2:v=1:a=1 [outv] [outa]&amp;quot; -map &amp;quot;[outv]&amp;quot; -map &amp;quot;[outa]&amp;quot; out.mp4&lt;br /&gt;
&lt;br /&gt;
Example of merging 3 videos including sound:&lt;br /&gt;
 $ ffmpeg -i VID_20241225_part1.mp4 -i VID_20241225_part2.mp4 -i VID_20241225_142603~2.mp4 -filter_complex &amp;quot;[0:v:0][0:a:0][1:v:0][1:a:0][2:v:0][2:a:0] concat=n=3:v=1:a=1[outv][outa]&amp;quot; -map &amp;quot;[outv]&amp;quot; -map &amp;quot;[outa]&amp;quot; output.mp4&lt;br /&gt;
&lt;br /&gt;
Example of merging 4 videos into 1 big one:&lt;br /&gt;
 $ ffmpeg -i part1.mp4 -i part2.mp4 -i MVI_0309.MP4 -i MVI_0310.MP4 -filter_complex &amp;quot;[0:v:0][0:a:0][1:v:0][1:a:0][2:v:0][2:a:0][3:v:0][3:a:0]concat=n=4:v=1:a=1[outv][outa]&amp;quot; -map &amp;quot;[outv]&amp;quot; -map &amp;quot;[outa]&amp;quot; bigmovie.mp4&lt;br /&gt;
&lt;br /&gt;
Alternative:&amp;lt;br&amp;gt;&lt;br /&gt;
Create a file listing the video files:&lt;br /&gt;
 cat &amp;gt; myfile.txt&amp;lt;&amp;lt;EOF&lt;br /&gt;
 file &#039;MVI_1547.mp4&#039;&lt;br /&gt;
 file &#039;MVI_1610.mp4&#039;&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
 $ ffmpeg -f concat -i myfile.txt -c copy output.mp4&lt;br /&gt;
&lt;br /&gt;
If the video does not play through cleanly, open it with avidemux and cut out the seam.&lt;br /&gt;
&lt;br /&gt;
(for those who still have not had enough: https://trac.ffmpeg.org/wiki/Concatenate )&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Example of merging 2 video files:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 F1=MVI_1869.MOV &lt;br /&gt;
 F2=MVI_1870.MOV&lt;br /&gt;
 RESULT=MVI_1871.mp4&lt;br /&gt;
 &lt;br /&gt;
 # sound&lt;br /&gt;
 #ffmpeg -i $F1 intermediate.mp3&lt;br /&gt;
 #ffmpeg -i $F2 intermediate.mp3&lt;br /&gt;
 &lt;br /&gt;
 # movie without sound&lt;br /&gt;
 # ffmpeg -i $F1 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate1.ts&lt;br /&gt;
 # ffmpeg -i $F2 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate2.ts&lt;br /&gt;
 # ffmpeg -f mpegts -i &amp;quot;concat:intermediate1.ts|intermediate2.ts&amp;quot; -c copy -bsf:a aac_adtstoasc $RESULT &lt;br /&gt;
 # rm intermediate{1,2}.ts&lt;br /&gt;
 &lt;br /&gt;
 # movie with sound&lt;br /&gt;
 ffmpeg -i $F1 -i $F2 \&lt;br /&gt;
  -filter_complex &amp;quot;[0:v:0] [0:a:0] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [v] [a]&amp;quot; \&lt;br /&gt;
  -map &amp;quot;[v]&amp;quot; -map &amp;quot;[a]&amp;quot; $RESULT&lt;br /&gt;
&lt;br /&gt;
* Time-lapse/slow motion&lt;br /&gt;
&lt;br /&gt;
Example, double speed:&lt;br /&gt;
 ffmpeg -i input.mp4 -filter:v &amp;quot;setpts=0.5*PTS&amp;quot; output.mp4&lt;br /&gt;
(The sound should then be cut out, because otherwise it no longer matches the movie length, see above.)&lt;br /&gt;
&lt;br /&gt;
https://medium.com/@sekhar.rahul/creating-a-time-lapse-video-on-the-command-line-with-ffmpeg-1a7566caf877&lt;br /&gt;
&lt;br /&gt;
* Adding subtitles&lt;br /&gt;
&lt;br /&gt;
If the subtitles come as an srt file, they first have to be converted to the ass format:&lt;br /&gt;
&lt;br /&gt;
 $ ffmpeg -i subtitles.srt subtitles.ass&lt;br /&gt;
&lt;br /&gt;
[[Subtitles_example.srt | Here]] is an example of an .srt file in the format&lt;br /&gt;
 &amp;lt;sequence no.&amp;gt;&lt;br /&gt;
 &amp;lt;time range HH:MM:SS,mmm --&amp;gt; HH:MM:SS,mmm&amp;gt;&lt;br /&gt;
 &amp;lt;text&amp;gt;&lt;br /&gt;
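A minimal fragment in that format (hypothetical content) could look like this:&lt;br /&gt;

```
1
00:00:01,000 --> 00:00:04,000
First subtitle line

2
00:00:05,500 --> 00:00:07,000
Second subtitle line
```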
&lt;br /&gt;
Create the video with subtitles:&lt;br /&gt;
&lt;br /&gt;
 $ ffmpeg -i input.mp4 -vf ass=subtitles.ass output.mp4&lt;br /&gt;
&lt;br /&gt;
Still not had enough?&lt;br /&gt;
&lt;br /&gt;
https://img.ly/blog/ultimate-guide-to-ffmpeg/&lt;br /&gt;
&lt;br /&gt;
==Sound test with arecord ==&lt;br /&gt;
Sometimes it is helpful to test headset and speakers. Here is how:&lt;br /&gt;
&lt;br /&gt;
 $ arecord -f S16_LE -r 3000 | aplay -vvv&lt;br /&gt;
&lt;br /&gt;
== avidemux ==&lt;br /&gt;
A nice video editing tool. Unfortunately no longer available as a Debian package, but it can be downloaded here:&amp;lt;br&amp;gt;&lt;br /&gt;
https://www.fosshub.com/Avidemux.html&lt;br /&gt;
&lt;br /&gt;
Best download the AppImage and symlink it, e.g.&lt;br /&gt;
 # ln -s /usr/local/bin/3rdparty/avidemux_2.8.1.appImage /usr/local/bin/avidemux&lt;br /&gt;
&lt;br /&gt;
== Image viewers ==&lt;br /&gt;
&lt;br /&gt;
=== irfanview ===&lt;br /&gt;
https://www.irfanview.net/faq.htm&lt;br /&gt;
&lt;br /&gt;
Natively it exists only for Windows, but it can also be installed on Linux in (at least) 2 ways:&lt;br /&gt;
&lt;br /&gt;
==== 1. Snap ====&lt;br /&gt;
https://snapcraft.io/install/irfanview/debian&lt;br /&gt;
 $ sudo apt install snapd&lt;br /&gt;
 $ sudo snap install irfanview&lt;br /&gt;
 $ type -a irfanview&lt;br /&gt;
&amp;lt;i&amp;gt;irfanview is /snap/bin/irfanview&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== 2. Wine ====&lt;br /&gt;
Make sure to download the 64-bit version!&lt;br /&gt;
 $ wine iview460_x64_setup.exe&lt;br /&gt;
After that, a Windows installer window appears.&lt;br /&gt;
 $ type -a irfanview&lt;br /&gt;
&amp;lt;i&amp;gt;irfanview is aliased to `wine /home/kathrin/.wine/drive_c/Program\ Files/IrfanView/i_view64.exe&#039;&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== gwenview ===&lt;br /&gt;
Has a few nice features, e.g. displaying several images side by side for comparison. Comes with KDE or as a package of the same name.&lt;br /&gt;
&lt;br /&gt;
== Playing .wav files with xine ==&lt;br /&gt;
If a &amp;quot;missing plugin&amp;quot; error message appears...&lt;br /&gt;
&lt;br /&gt;
* install libxine2-misc-plugins&lt;br /&gt;
&lt;br /&gt;
==Image::Magick==&lt;br /&gt;
A Perl module for batch processing of images (shrinking, enlarging,&lt;br /&gt;
...).&amp;lt;br&amp;gt;&lt;br /&gt;
Example script  [[resizeXpercent.pl]]: &amp;lt;br&amp;gt;&lt;br /&gt;
All images in a given folder are shrunk by X percent and saved&lt;br /&gt;
into another folder.&lt;br /&gt;
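The same batch resize can also be sketched with the ImageMagick command-line tools instead of the Perl module (a dry run with placeholder names; remove the echo to actually resize):&lt;br /&gt;

```shell
# Shrink every jpg in SRC by PERCENT percent into DST (commands printed only).
PERCENT=50
SRC=in
DST=out
mkdir -p "$SRC" "$DST"
touch "$SRC/a.jpg" "$SRC/b.jpg"       # placeholder files for the demo
for img in "$SRC"/*.jpg; do
  echo convert "$img" -resize "$PERCENT%" "$DST/$(basename "$img")"
done
```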
&lt;br /&gt;
==Using Image Magick with jpeg support==&lt;br /&gt;
install libjpeg-dev&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=Main_Page&amp;diff=656</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=Main_Page&amp;diff=656"/>
		<updated>2025-11-23T09:25:38Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font color=&amp;quot;#ffa500&amp;quot; face=&amp;quot;comic sans ms,arial,verdana&amp;quot; size=&amp;quot;6px&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Collection of Linux topics&#039;&#039;&#039; &amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Pingo.png|120px|right|Tux]]&lt;br /&gt;
This is an (admittedly rather unsorted) collection of&lt;br /&gt;
practical tools and HowTos for Linux, meant as an&lt;br /&gt;
online memory aid and as a starting point for beginners.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are a bit more advanced, have a look at the [[Advanced Topics]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Unless stated otherwise, these mini-HowTos were created and tested&lt;br /&gt;
under Debian [http://www.debian.org], Ubuntu [http://www.ubuntu.com] and&lt;br /&gt;
Knoppix [http://www.knoppix.org].&lt;br /&gt;
&lt;br /&gt;
* [[Dateien verschlüsseln mit gpg]]&lt;br /&gt;
* [[Debianpakete installieren]]&lt;br /&gt;
* [[Digicam einbinden]]&lt;br /&gt;
* [[Drucker einrichten mit CUPS]]&lt;br /&gt;
* [[Festplatte hinzufügen]]&lt;br /&gt;
* [[Filesystem verschlüsseln]]&lt;br /&gt;
* [[GPM]]&lt;br /&gt;
* [[Grub]]&lt;br /&gt;
* [[Interfacenamen umbiegen mit udev]]&lt;br /&gt;
* [[iptables-Beispiel]]&lt;br /&gt;
* [[KDE-Shortcuts]]&lt;br /&gt;
* [[Misc (multimedia)]]&lt;br /&gt;
* [[PDF Bearbeitung]]&lt;br /&gt;
* [[Rechnen auf der Shell]]&lt;br /&gt;
* [[rpm]]&lt;br /&gt;
* [[tar/gzip]]&lt;br /&gt;
* [[VirtualBox installieren]]&lt;br /&gt;
* [[VMware-Tools installieren]]&lt;br /&gt;
* [[weitere Links]]&lt;br /&gt;
&lt;br /&gt;
[[Advanced Topics]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Legacy Topics&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Archive of topics that are no longer quite current:&lt;br /&gt;
&lt;br /&gt;
* [[32 to 64 bit Migration]]&lt;br /&gt;
* [[ Browser-Anpassungen ]]&lt;br /&gt;
* [[Kernel kompilieren]]&lt;br /&gt;
* [[MAC OS - CD unter Linux kopieren]]&lt;br /&gt;
* [[Mandriva - Software installieren]]&lt;br /&gt;
* [[mtools]]&lt;br /&gt;
* [[WLAN einrichten]]&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=Misc_(multimedia)&amp;diff=655</id>
		<title>Misc (multimedia)</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=Misc_(multimedia)&amp;diff=655"/>
		<updated>2025-11-01T19:17:06Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* ffmpeg */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Transferring photos from a smartphone ==&lt;br /&gt;
* Quick and painless with &#039;&#039;&#039;jmtpfs&#039;&#039;&#039;, which can be installed as a package.&amp;lt;br&amp;gt;&lt;br /&gt;
* Connect the smartphone to the micro-USB port and choose the type of data transfer:&lt;br /&gt;
 &amp;quot;Transfer data (mtp)&amp;quot;&lt;br /&gt;
* Mount the data, e.g. to /mnt:&lt;br /&gt;
 jmtpfs /mnt&lt;br /&gt;
and copy from there&lt;br /&gt;
&lt;br /&gt;
Note: if the following error occurs under KDE&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Listing raw device(s)&lt;br /&gt;
Device 0 (VID=...) is a Samsung Galaxy models (MTP).&lt;br /&gt;
Found 1 device(s):&lt;br /&gt;
Samsung: Galaxy models (MTP) (...) @ bus 1, dev 10&lt;br /&gt;
Attempting to connect device(s)&lt;br /&gt;
libusb_claim_interface() reports device is busy, likely in use by GVFS or KDE MTP device handling already&lt;br /&gt;
LIBMTP PANIC: Unable to initialize device&lt;br /&gt;
Unable to open raw device 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
you can try the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 killall kiod5&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
killall gvfs-udisks2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
under xfce.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, under xfce it is apparently no longer possible to mount a smartphone with the tools above. Another option is the Android debugger &#039;&#039;&#039;adb&#039;&#039;&#039;.&amp;lt;br&amp;gt;&lt;br /&gt;
Install it and have a look at the contents of the memory card:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo apt install adb&lt;br /&gt;
adb devices&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(At least one device should be listed)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
adb shell&lt;br /&gt;
ls /sdcard/DCIM&lt;br /&gt;
ctrl+d&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You also have to change a few settings on the Android device:&lt;br /&gt;
* Enable developer mode (tap the serial number 7 times)&lt;br /&gt;
* Enable USB debugging (under the Developer Options)&lt;br /&gt;
* After plugging in the USB cable, choose Data Transfer (instead of charging only)&lt;br /&gt;
&lt;br /&gt;
A detailed guide for these steps is available here:&amp;lt;br&amp;gt;&lt;br /&gt;
https://droidwin.com/fix-adb-device-not-found-error/&lt;br /&gt;
&lt;br /&gt;
With&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
adb pull &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
the files can now be pulled down.&lt;br /&gt;
&lt;br /&gt;
You can also pass arguments to the adb shell, e.g.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
adb shell ls /sdcard/DCIM/Camera&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Annotations ===&lt;br /&gt;
* Under xfce4 the best way to download photos is jmtpfs. Before that, competing processes with access to the camera have to be killed in an endless loop:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
while sleep 0.1; do pkill kiod5; pkill gvfs-udisks2; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and mount the device in another shell:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir /tmp/cam&lt;br /&gt;
jmtpfs /tmp/cam&lt;br /&gt;
Device 0 (VID=18d1 and PID=4ee2) is a Google Inc Nexus/Pixel (MTP+ADB).&lt;br /&gt;
Android device detected, assigning default bug flags&lt;br /&gt;
ls /tmp/cam/Interner\ gemeinsamer\ Speicher/DCIM/&lt;br /&gt;
Camera  PeakLens&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Under KDE you can fall back on adb (see above) by killing kiod+gvfs in an endless loop (see above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 while sleep 0.1; do pkill kiod5; pkill gvfs-udisks2; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Downloading too many photos at once aborts at some point. Here are a few commands for fetching just a selection (e.g. Jan. 2024):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 adb shell ls /sdcard/DCIM/Camera|grep IMG_2024 &amp;gt;/tmp/img.txt &lt;br /&gt;
 for IMG in $(grep 202401 /tmp/img.txt) ; do adb pull /sdcard/DCIM/Camera/$IMG CAM/; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Creating a bootable USB stick ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Prerequisite:&#039;&#039;&#039; a folder named &#039;&#039;ISO&#039;&#039; containing bootable files (e.g. for a Windows image)&lt;br /&gt;
&lt;br /&gt;
An ISO image can be created very easily with the tool &#039;&#039;&#039;xorrisofs&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;First it has to be installed:&lt;br /&gt;
 $ sudo apt install xorriso&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Creating the ISO then works as follows:&lt;br /&gt;
 $ xorrisofs -r -J -o ./windows10.iso ./ISO&lt;br /&gt;
&lt;br /&gt;
The ISO image can then be copied onto the USB stick with &amp;quot;dd&amp;quot;. Depending on the device, that can look like this:&lt;br /&gt;
 # dd if=windows10.iso of=/dev/sde&lt;br /&gt;
Caution: do &#039;&#039;&#039;not&#039;&#039;&#039; create/specify a partition!&lt;br /&gt;
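Whether the image really arrived intact on the stick can be checked by comparing the ISO against the first bytes of the target. A minimal sketch, demonstrated on plain files (with a real stick, demo-stick.img would be the /dev/sdX device):&lt;br /&gt;

```shell
# Demo: verify a byte-for-byte copy of an "ISO".
# With a real stick, demo-stick.img would be /dev/sdX.
printf 'demo-iso-bytes' > demo.iso
cp demo.iso demo-stick.img            # stands in for: dd if=demo.iso of=/dev/sdX
SIZE=$(stat -c %s demo.iso)           # compare only as many bytes as the ISO has
if cmp -s -n "$SIZE" demo.iso demo-stick.img; then
  echo "copy verified"
else
  echo "MISMATCH"
fi
```

The -n limit matters because a real device is larger than the ISO, so the trailing bytes must be ignored.&lt;br /&gt;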
&lt;br /&gt;
==Extracting tracks from an audio CD==&lt;br /&gt;
&lt;br /&gt;
 $  icedax -D &amp;lt;devicename&amp;gt; -t 1 song1.wav&lt;br /&gt;
for example:&lt;br /&gt;
 $ icedax -D /dev/cdrom -t 2 song2.wav&lt;br /&gt;
 $ icedax -D /dev/cdrom0 -t 3+4 mixed.wav&lt;br /&gt;
(merges tracks 3 and 4 into one big file)&lt;br /&gt;
&lt;br /&gt;
If you prefer mp3s, you can use &#039;&#039;&#039;cdda2mp3&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==Burning an audio CD==&lt;br /&gt;
For an audio CD, NO ISO image is created!&amp;lt;br&amp;gt;&lt;br /&gt;
Instead, save all wav files in one folder, change into it, and&lt;br /&gt;
run:&lt;br /&gt;
 # cdrecord -dao *.wav&lt;br /&gt;
&lt;br /&gt;
In newer Debian releases, cdrecord is replaced by &#039;&#039;wodim&#039;&#039;. Example:&lt;br /&gt;
&lt;br /&gt;
 $ /usr/bin/wodim -v gracetime=2 dev=/dev/sr0 speed=4 -sao driveropts=burnfree -data &amp;lt;image.iso&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Burning a DVD==&lt;br /&gt;
 # growisofs -R -J -Z /dev/dvd /path/to/file-or-directory&lt;br /&gt;
&lt;br /&gt;
where -R=Rock Ridge, -J=Joliet and -Z=device (may vary from system to system). If you are not sure, you can first simulate the burn with -dry-run.&amp;lt;br&amp;gt;&lt;br /&gt;
A very nice overview of DVDs in general and under Linux is available here:&amp;lt;br&amp;gt;&lt;br /&gt;
http://www.rakekniven.de/linux/k-base/dvd-brennen.php&lt;br /&gt;
&lt;br /&gt;
== Extracting from a DVD ==&lt;br /&gt;
In testing, handbrake proved useful for creating an mp4 from a DVD. It can be installed as a package. Only the sound comes out a bit quiet; it can be boosted with [[#ffmpeg|ffmpeg]] (see below).&lt;br /&gt;
&lt;br /&gt;
== Calibrating a monitor ==&lt;br /&gt;
&lt;br /&gt;
Highly recommended is the tool DisplayCAL [https://displaycal.net/], which supports common calibration devices such as the Spyder.&amp;lt;br&amp;gt;&lt;br /&gt;
Hardware used:&amp;lt;br&amp;gt;&lt;br /&gt;
[https://www.dpreview.com/articles/3856869836/spyder3 Spyder 3]&lt;br /&gt;
&lt;br /&gt;
A comprehensive guide is available here:&lt;br /&gt;
&lt;br /&gt;
https://www.reallinuxuser.com/how-to-color-calibrate-your-monitor-in-linux/&lt;br /&gt;
&lt;br /&gt;
=== Note ===&lt;br /&gt;
&lt;br /&gt;
The original version uses Python 2 and is therefore no longer supported on common Linux distributions. There is, however, a successor project: https://github.com/eoyilmaz/displaycal-py3&lt;br /&gt;
and, based on it, a Debian package:&lt;br /&gt;
 $ sudo apt install displaycal&lt;br /&gt;
&lt;br /&gt;
The notes below were tested with the old version and Python 2.&lt;br /&gt;
&lt;br /&gt;
=== Pro tip ===&lt;br /&gt;
&lt;br /&gt;
* If you get poor or color-cast results, try 5300K as the white point.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot 2021-02-23 19-55-44.png|800px]]&lt;br /&gt;
&lt;br /&gt;
* At the end, a &#039;&#039;&#039;profile&#039;&#039;&#039; (.icc file) is generated. It can be loaded with xcalib:&lt;br /&gt;
&lt;br /&gt;
 $ xcalib &amp;lt;profilename&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(and a bit of theory on the configurable parameters:&lt;br /&gt;
&lt;br /&gt;
https://displaycal.net/#colorimeter-corrections )&lt;br /&gt;
&lt;br /&gt;
== ffmpeg ==&lt;br /&gt;
&#039;&#039;&#039;The&#039;&#039;&#039; tool for editing movies (and sound). A few examples:&lt;br /&gt;
&lt;br /&gt;
* Rotate a movie by 180°:&lt;br /&gt;
 $ ffmpeg -i input.mp4 -vf &amp;quot;transpose=2,transpose=2&amp;quot; output.mp4&lt;br /&gt;
&lt;br /&gt;
* Remove the sound:&lt;br /&gt;
 $ ffmpeg -i MVI_1747.MOV -vcodec copy -an MVI_1747_mute.MOV&lt;br /&gt;
* Extract the sound (save as mp3):&lt;br /&gt;
 # ffmpeg -i MVI_1751.MOV MVI_1751.mp3&lt;br /&gt;
* Change the volume (e.g. 1.5x):&lt;br /&gt;
 # ffmpeg -i input.wav -af &amp;quot;volume=1.5&amp;quot; output.wav&lt;br /&gt;
&lt;br /&gt;
* Cut off everything after position n:&lt;br /&gt;
 #  ffmpeg -i MVI_1752.mp3 -to 00:00:27  MVI_1752_short.mp3&lt;br /&gt;
&lt;br /&gt;
* Cut out a section and save it as mp3 at the same time:&lt;br /&gt;
 # ffmpeg -i Buena_Vista_Social_Club.mp4 -ss 00:04:16 -to 00:09:16 bvsc2.mp3&lt;br /&gt;
&lt;br /&gt;
* Add audio to a video:&lt;br /&gt;
 # ffmpeg -i MVI_1747_mute.MOV -i MVI_1752a.mp3 -codec copy -shortest MVI_1747_sound.MOV&lt;br /&gt;
&lt;br /&gt;
* Merge 2 sound files:&lt;br /&gt;
 #  ffmpeg -i &amp;quot;concat:MVI_1751.mp3|MVI_1753.mp3&amp;quot; -c copy MVI_1752.mp3&lt;br /&gt;
(For larger numbers of files, sox is a good alternative:&amp;lt;br&amp;gt;&lt;br /&gt;
 # sox file1.wav file2.wav ... filen.wav newfile.wav&lt;br /&gt;
)&lt;br /&gt;
* Merge 2 video files&lt;br /&gt;
If the example above cannot be carried over to videos because of differing encodings:&lt;br /&gt;
 $ ffmpeg -i file1.mp4 -i file2.mp4 -filter_complex &amp;quot;[0:v][0:a][1:v][1:a] concat=n=2:v=1:a=1 [outv] [outa]&amp;quot; -map &amp;quot;[outv]&amp;quot; -map &amp;quot;[outa]&amp;quot; out.mp4&lt;br /&gt;
&lt;br /&gt;
Example of merging 3 videos including sound:&lt;br /&gt;
 $ ffmpeg -i VID_20241225_part1.mp4 -i VID_20241225_part2.mp4 -i VID_20241225_142603~2.mp4 -filter_complex &amp;quot;[0:v:0][0:a:0][1:v:0][1:a:0][2:v:0][2:a:0] concat=n=3:v=1:a=1[outv][outa]&amp;quot; -map &amp;quot;[outv]&amp;quot; -map &amp;quot;[outa]&amp;quot; output.mp4&lt;br /&gt;
&lt;br /&gt;
Example of merging 4 videos into 1 big one:&lt;br /&gt;
 $ ffmpeg -i part1.mp4 -i part2.mp4 -i MVI_0309.MP4 -i MVI_0310.MP4 -filter_complex &amp;quot;[0:v:0][0:a:0][1:v:0][1:a:0][2:v:0][2:a:0][3:v:0][3:a:0]concat=n=4:v=1:a=1[outv][outa]&amp;quot; -map &amp;quot;[outv]&amp;quot; -map &amp;quot;[outa]&amp;quot; bigmovie.mp4&lt;br /&gt;
&lt;br /&gt;
Alternative:&amp;lt;br&amp;gt;&lt;br /&gt;
Create a file listing the video files:&lt;br /&gt;
 cat &amp;gt; myfile.txt&amp;lt;&amp;lt;EOF&lt;br /&gt;
 file &#039;MVI_1547.mp4&#039;&lt;br /&gt;
 file &#039;MVI_1610.mp4&#039;&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
 $ ffmpeg -f concat -i myfile.txt -c copy output.mp4&lt;br /&gt;
&lt;br /&gt;
If the video does not play through cleanly, open it with avidemux and cut out the seam.&lt;br /&gt;
&lt;br /&gt;
(for those who still have not had enough: https://trac.ffmpeg.org/wiki/Concatenate )&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Example of merging 2 video files:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 F1=MVI_1869.MOV &lt;br /&gt;
 F2=MVI_1870.MOV&lt;br /&gt;
 RESULT=MVI_1871.mp4&lt;br /&gt;
 &lt;br /&gt;
 # sound&lt;br /&gt;
 #ffmpeg -i $F1 intermediate.mp3&lt;br /&gt;
 #ffmpeg -i $F2 intermediate.mp3&lt;br /&gt;
 &lt;br /&gt;
 # movie without sound&lt;br /&gt;
 # ffmpeg -i $F1 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate1.ts&lt;br /&gt;
 # ffmpeg -i $F2 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate2.ts&lt;br /&gt;
 # ffmpeg -f mpegts -i &amp;quot;concat:intermediate1.ts|intermediate2.ts&amp;quot; -c copy -bsf:a aac_adtstoasc $RESULT &lt;br /&gt;
 # rm intermediate{1,2}.ts&lt;br /&gt;
 &lt;br /&gt;
 # movie with sound&lt;br /&gt;
 ffmpeg -i $F1 -i $F2 \&lt;br /&gt;
  -filter_complex &amp;quot;[0:v:0] [0:a:0] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [v] [a]&amp;quot; \&lt;br /&gt;
  -map &amp;quot;[v]&amp;quot; -map &amp;quot;[a]&amp;quot; $RESULT&lt;br /&gt;
&lt;br /&gt;
* Time-lapse/slow motion&lt;br /&gt;
&lt;br /&gt;
Example, double speed:&lt;br /&gt;
 ffmpeg -i input.mp4 -filter:v &amp;quot;setpts=0.5*PTS&amp;quot; output.mp4&lt;br /&gt;
(The sound should then be cut out, because otherwise it no longer matches the movie length, see above.)&lt;br /&gt;
&lt;br /&gt;
https://medium.com/@sekhar.rahul/creating-a-time-lapse-video-on-the-command-line-with-ffmpeg-1a7566caf877&lt;br /&gt;
&lt;br /&gt;
* Adding subtitles&lt;br /&gt;
&lt;br /&gt;
If the subtitles come as an srt file, they first have to be converted to the ass format:&lt;br /&gt;
&lt;br /&gt;
 $ ffmpeg -i subtitles.srt subtitles.ass&lt;br /&gt;
&lt;br /&gt;
[[Subtitles_example.srt | Here]] is an example of an .srt file in the format&lt;br /&gt;
 &amp;lt;sequence no.&amp;gt;&lt;br /&gt;
 &amp;lt;time range HH:MM:SS,mmm --&amp;gt; HH:MM:SS,mmm&amp;gt;&lt;br /&gt;
 &amp;lt;text&amp;gt;&lt;br /&gt;
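A minimal fragment in that format (hypothetical content) could look like this:&lt;br /&gt;

```
1
00:00:01,000 --> 00:00:04,000
First subtitle line

2
00:00:05,500 --> 00:00:07,000
Second subtitle line
```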
&lt;br /&gt;
Create the video with subtitles:&lt;br /&gt;
&lt;br /&gt;
 $ ffmpeg -i input.mp4 -vf ass=subtitles.ass output.mp4&lt;br /&gt;
&lt;br /&gt;
Still not had enough?&lt;br /&gt;
&lt;br /&gt;
https://img.ly/blog/ultimate-guide-to-ffmpeg/&lt;br /&gt;
&lt;br /&gt;
==Sound test with arecord ==&lt;br /&gt;
Sometimes it is helpful to test headset and speakers. Here is how:&lt;br /&gt;
&lt;br /&gt;
 $ arecord -f S16_LE -r 3000 | aplay -vvv&lt;br /&gt;
&lt;br /&gt;
== avidemux ==&lt;br /&gt;
A nice video editing tool. Unfortunately no longer available as a Debian package, but it can be downloaded here:&amp;lt;br&amp;gt;&lt;br /&gt;
https://www.fosshub.com/Avidemux.html&lt;br /&gt;
&lt;br /&gt;
Best download the AppImage and symlink it, e.g.&lt;br /&gt;
 # ln -s /usr/local/bin/3rdparty/avidemux_2.8.1.appImage /usr/local/bin/avidemux&lt;br /&gt;
&lt;br /&gt;
== Image viewers ==&lt;br /&gt;
&lt;br /&gt;
=== irfanview ===&lt;br /&gt;
https://www.irfanview.net/faq.htm&lt;br /&gt;
&lt;br /&gt;
Natively it exists only for Windows, but it can also be installed on Linux in (at least) 2 ways:&lt;br /&gt;
&lt;br /&gt;
==== 1. Snap ====&lt;br /&gt;
https://snapcraft.io/install/irfanview/debian&lt;br /&gt;
 $ sudo apt install snapd&lt;br /&gt;
 $ sudo snap install irfanview&lt;br /&gt;
 $ type -a irfanview&lt;br /&gt;
&amp;lt;i&amp;gt;irfanview is /snap/bin/irfanview&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== 2. Wine ====&lt;br /&gt;
Make sure to download the 64-bit version!&lt;br /&gt;
 $ wine iview460_x64_setup.exe&lt;br /&gt;
After that, a Windows installer window appears.&lt;br /&gt;
 $ type -a irfanview&lt;br /&gt;
&amp;lt;i&amp;gt;irfanview is aliased to `wine /home/kathrin/.wine/drive_c/Program\ Files/IrfanView/i_view64.exe&#039;&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== gwenview ===&lt;br /&gt;
Has a few nice features, e.g. displaying several images side by side for comparison. Comes with KDE or as a package of the same name.&lt;br /&gt;
&lt;br /&gt;
== Playing .wav files with xine ==&lt;br /&gt;
If a &amp;quot;missing plugin&amp;quot; error message appears...&lt;br /&gt;
&lt;br /&gt;
* install libxine2-misc-plugins&lt;br /&gt;
&lt;br /&gt;
==Image::Magick==&lt;br /&gt;
A Perl module for batch processing of images (shrinking, enlarging,&lt;br /&gt;
...).&amp;lt;br&amp;gt;&lt;br /&gt;
Example script  [[resizeXpercent.pl]]: &amp;lt;br&amp;gt;&lt;br /&gt;
All images in a given folder are shrunk by X percent and saved&lt;br /&gt;
into another folder.&lt;br /&gt;
&lt;br /&gt;
==Using Image Magick with jpeg support==&lt;br /&gt;
install libjpeg-dev&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=654</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=654"/>
		<updated>2025-09-26T15:07:28Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Image handling */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Image handling =&lt;br /&gt;
&lt;br /&gt;
Look into images:&lt;br /&gt;
 oc image info registry.redhat.io:8443/ubi8/httpd-24:1-209 (-o json | jq -r .digest)&lt;br /&gt;
&lt;br /&gt;
Update the image of a running deployment:&lt;br /&gt;
 oc set image deployment/mydb mariadb-80=docker.io/ubuntu18/mysql-80:1-228&lt;br /&gt;
&lt;br /&gt;
Inspect images directly on a node:&lt;br /&gt;
 crictl images&lt;br /&gt;
 crictl ps --name httpd-24 -o yaml&lt;br /&gt;
 crictl images --digests &amp;lt;shasum&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you have an account for a registry:&lt;br /&gt;
&lt;br /&gt;
 skopeo login &amp;lt;registry&amp;gt;:8443 -u &amp;lt;username&amp;gt;&lt;br /&gt;
 skopeo inspect docker://registry.redhat.io:8443/ubi8/httpd-24:1-209&lt;br /&gt;
Add the &amp;quot;latest&amp;quot; tag to a specific image:&lt;br /&gt;
 skopeo copy docker://registry.redhat.io:8443/ubi8/httpd-24:1-215 docker://registry.redhat.io:8443/ubi8/httpd-24:latest&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker://docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.com/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift because security context constraints do not allow apps to bind to privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &amp;quot;if [ \$(date +%H) -gt 15 ]; then echo &#039;Hands up, weekend!&#039;; fi&amp;quot;&lt;br /&gt;
&lt;br /&gt;
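The hour check inside the cronjob&#039;s shell command can be sketched locally; the threshold 15 and the greeting are taken from the example above, and a fixed value stands in for the clock:&lt;br /&gt;

```shell
# Local sketch of the cronjob's hour check; "hour" stands in for $(date +%H).
hour=16
if [ "$hour" -gt 15 ]; then
  echo 'Hands up, weekend!'
fi
```
&lt;br /&gt;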
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
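The value of such a secret is stored base64-encoded; a quick local check (sketch, no cluster needed) of what the value &amp;quot;bar&amp;quot; becomes:&lt;br /&gt;

```shell
# Secrets store their values base64-encoded; this shows the encoding of "bar".
encoded=$(printf '%s' bar | base64)
echo "$encoded"                       # YmFy
printf '%s' "$encoded" | base64 -d    # decodes back to "bar"
```
&lt;br /&gt;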
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G)&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, use an ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not shown by this command)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (and a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs limits:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful: only the free memory is shown, not the allocatable memory.&lt;br /&gt;
For a more realistic presentation, run:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show Uptime:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only the pods of the mysql deployment config:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Reduce/Upgrade cpu/mem requests ===&lt;br /&gt;
Reduce memory requests:&lt;br /&gt;
 $ oc set resources deployment/huge-mem --requests memory=250Mi&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten !&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
&lt;br /&gt;
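A quick local round-trip check of the base64 payload used in the patch above (sketch; a stand-in file replaces /tmp/alertmanager.yaml):&lt;br /&gt;

```shell
# Round-trip the payload that "oc patch secret" would receive (local sketch).
tmp=$(mktemp)
printf 'resolve_timeout: 5m\n' > "$tmp"   # stand-in for /tmp/alertmanager.yaml
payload=$(base64 -w0 "$tmp")              # -w0: no line wrapping (GNU coreutils)
printf '%s' "$payload" | base64 -d        # prints the original content
rm -f "$tmp"
```
&lt;br /&gt;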
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;the deployment resource had no rollout option -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Add probes ===&lt;br /&gt;
Configure readiness probe for deployment:&lt;br /&gt;
 $ oc set probe deployment/&amp;lt;testdeploy&amp;gt; --readiness --failure-threshold 7 --get-url http://:3000/api/health&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Autoscale Pods&lt;br /&gt;
 $ oc autoscale deployment/test --min 2 --max 10 --cpu-percent 80&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it; it will be emptied anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of systemunit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
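The jq filter can be tried locally on a fabricated audit record (hypothetical data, no cluster needed):&lt;br /&gt;

```shell
# Apply the same kind of jq select to a single fabricated audit log line.
echo '{"verb":"delete","objectRef":{"resource":"pods"}}' |
  jq -r 'select(.verb == "delete") | .objectRef.resource'   # prints: pods
```
&lt;br /&gt;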
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(delete secrets from the output if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
Namespace gets stuck in the status &amp;quot;Terminating&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Watch out for secrets that are left over and cannot be deleted.&lt;br /&gt;
Set their finalizers to an empty list:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
Should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the host.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines under the &amp;quot;spec&amp;quot; section:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must grant the anyuid SCC to the service account under which the deployment runs. If nothing is configured, this is normally the &amp;quot;default&amp;quot; service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the installer; the user only provides some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User provisioned infrastructure cluster&lt;br /&gt;
* DNS and Loadbalancing must already be there&lt;br /&gt;
* Manual installation; download the OVA file (in the case of vSphere)&lt;br /&gt;
* masters are created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, uses preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no loadbalancer outage during update&lt;br /&gt;
&lt;br /&gt;
Changing from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can get more shortcuts by typing:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip ||installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=653</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=653"/>
		<updated>2025-09-26T14:05:50Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Registries */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Image handling =&lt;br /&gt;
&lt;br /&gt;
Look into images:&lt;br /&gt;
 oc image info registry.redhat.io:8443/ubi8/httpd-24:1-209 (-o json | jq -r .digest)&lt;br /&gt;
&lt;br /&gt;
Inspect images directly on a node:&lt;br /&gt;
 crictl images&lt;br /&gt;
 crictl ps --name httpd-24 -o yaml&lt;br /&gt;
 crictl images --digests &amp;lt;shasum&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you have an account for a registry:&lt;br /&gt;
&lt;br /&gt;
 skopeo login &amp;lt;registry&amp;gt;:8443 -u &amp;lt;username&amp;gt;&lt;br /&gt;
 skopeo inspect docker://registry.redhat.io:8443/ubi8/httpd-24:1-209&lt;br /&gt;
Add the &amp;quot;latest&amp;quot; tag to a specific image:&lt;br /&gt;
 skopeo copy docker://registry.redhat.io:8443/ubi8/httpd-24:1-215 docker://registry.redhat.io:8443/ubi8/httpd-24:latest&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker://docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.com/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift because security context constraints do not allow apps to bind to privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &amp;quot;if [ \$(date +%H) -gt 15 ]; then echo &#039;Hands up, weekend!&#039;; fi&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G)&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, use an ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not shown by this command)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (and a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs capacity:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful! Only the free memory is shown, not the allocatable memory. For a more realistic view use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show machines including their age:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers:&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deploymentconfig mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Reduce/Upgrade cpu/mem requests ===&lt;br /&gt;
Reduce memory requests:&lt;br /&gt;
 $ oc set resources deployment/huge-mem --requests memory=250Mi&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
&lt;br /&gt;
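The `$(base64 -w0 …)` substitution above inlines the file's base64 encoding into the patch JSON. A minimal sketch of that round trip, using a hypothetical stand-in file instead of /tmp/alertmanager.yaml:

```shell
# Hypothetical stand-in for /tmp/alertmanager.yaml
printf 'global:\n  resolve_timeout: 5m\n' > /tmp/demo-config.yaml

# Encode without line wrapping (-w0), as in the oc patch command above
encoded=$(base64 -w0 /tmp/demo-config.yaml)

# The JSON body that would be sent as the patch
printf '{"data": {"config.yaml": "%s"}}\n' "$encoded"

# Decoding restores the original file content exactly
echo "$encoded" | base64 -d
```

Kubernetes stores every value under a Secret's data field base64-encoded, which is why the file must be encoded before patching.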
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
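Note that `machineconfigpool/{master,worker}` relies on shell brace expansion, not on oc: the shell expands it into two separate arguments before oc even runs. A quick check:

```shell
# Brace expansion happens in the shell, producing one argument per element
printf '%s\n' machineconfigpool/{master,worker}
# machineconfigpool/master
# machineconfigpool/worker
```

So the patch command above is applied to both pools, one after the other.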
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;in older versions the deployment resource had no rollout option -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Add probes ===&lt;br /&gt;
Configure readiness probe for deployment:&lt;br /&gt;
 $ oc set probe deployment/&amp;lt;testdeploy&amp;gt; --readiness --failure-threshold 7 --get-url http://:3000/api/health&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Autoscale Pods&lt;br /&gt;
 $ oc autoscale deployment/test --min 2 --max 10 --cpu-percent 80&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - drain cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of systemunit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/Readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a local directory must-gather.local.XXXXXX with the collected data&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(check the collected files and delete secrets before sharing!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
The namespace gets stuck in status &amp;quot;Terminating&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Watch out for leftover secrets that cannot be deleted.&lt;br /&gt;
Clear their finalizers:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the node.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines under the pod template&#039;s &amp;quot;spec&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;containername&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must also grant the required SCC to the service account under which the deployment runs. If nothing else is configured, this is normally the &#039;&#039;default&#039;&#039; service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
The cluster is installed by the installer; the user only has to provide some basic information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already exist&lt;br /&gt;
* Manual installation; download the OVA file (in the case of vSphere)&lt;br /&gt;
* Masters are created manually&lt;br /&gt;
* Worker nodes are recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, uses preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
A change from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can get more shortcuts by typing:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip || installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=PDF_Bearbeitung&amp;diff=652</id>
		<title>PDF Bearbeitung</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=PDF_Bearbeitung&amp;diff=652"/>
		<updated>2025-09-18T19:02:41Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: Created page with &amp;quot;= PDFs bearbeiten =  Ein sehr nützliches Tool hierfür ist pdftk  == PDF splitten ==   pdftk &amp;lt;input.pdf&amp;gt; cat &amp;lt;range&amp;gt; output&amp;lt;output.pdf&amp;gt;  Beispiele  pdftk document.pdf cat 1-2 output first2pages.pdf  pdftk document.pdf cat 3 output thirdpage.pdf  == PDFs mergen ==  pdftk document1.pdf document2.pdf cat output document_merged.pdf&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Editing PDFs =&lt;br /&gt;
&lt;br /&gt;
A very useful tool for this is pdftk&lt;br /&gt;
&lt;br /&gt;
== Splitting PDFs ==&lt;br /&gt;
&lt;br /&gt;
 pdftk &amp;lt;input.pdf&amp;gt; cat &amp;lt;range&amp;gt; output &amp;lt;output.pdf&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Examples:&lt;br /&gt;
 pdftk document.pdf cat 1-2 output first2pages.pdf&lt;br /&gt;
 pdftk document.pdf cat 3 output thirdpage.pdf&lt;br /&gt;
&lt;br /&gt;
== Merging PDFs ==&lt;br /&gt;
 pdftk document1.pdf document2.pdf cat output document_merged.pdf&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=651</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=651"/>
		<updated>2025-09-17T15:22:57Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Change resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 $ oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images the web server runs on port 80, which leads to permission problems in OpenShift because the security context constraints do not allow apps to run on privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(to remove this setting again, edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &amp;quot;if [ $(date +%H) -gt 15 ]; then echo &#039;Hands up, weekend!&#039;; fi&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
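The container command in the cron job above is plain bash, so its hour check can be tried locally. A sketch with the check wrapped in a hypothetical helper function and a fixed hour in place of `$(date +%H)`:

```shell
# Hypothetical helper: the cron job's check with the hour as a parameter
weekend_call() {
  # date +%H yields values like "09"; [ ... -gt ... ] compares them as numbers
  if [ "$1" -gt 15 ]; then echo 'Hands up, weekend!'; fi
}

weekend_call 16   # prints: Hands up, weekend!
weekend_call 09   # prints nothing
```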
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
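Behind the scenes the literal is stored base64-encoded in the Secret's data field, so `foo=bar` becomes `foo: YmFy`. The encoding can be reproduced locally:

```shell
# -n suppresses the trailing newline, which would otherwise change the encoding
echo -n bar | base64      # YmFy
echo -n YmFy | base64 -d  # bar
```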
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G )&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not included in the output; list them separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (and a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs capacity:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful! Only the free memory is shown, not the allocatable memory. For a more realistic view use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show machines including their age:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers:&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deploymentconfig mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Reduce/Upgrade cpu/mem requests ===&lt;br /&gt;
Reduce memory requests:&lt;br /&gt;
 $ oc set resources deployment/huge-mem --requests memory=250Mi&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;in older versions the deployment resource had no rollout option -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Add probes ===&lt;br /&gt;
Configure readiness probe for deployment:&lt;br /&gt;
 $ oc set probe deployment/&amp;lt;testdeploy&amp;gt; --readiness --failure-threshold 7 --get-url http://:3000/api/health&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Autoscale Pods&lt;br /&gt;
 $ oc autoscale deployment/test --min 2 --max 10 --cpu-percent 80&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting)&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not strictly necessary - draining cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(check the output and delete secrets if necessary before sharing!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
A namespace gets stuck in status &amp;quot;Terminating&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Watch out for secrets that are left over and not deletable.&lt;br /&gt;
Set the finalizers to an empty list:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the host.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines under the pod template&#039;s &amp;quot;spec&amp;quot; (note that &amp;quot;containers&amp;quot; is a list):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must also grant the anyuid SCC to the service account under which the deployment runs. If nothing else is configured, this is the default service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
The cluster is installed by the installer; the user only has to provide some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User-provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already exist&lt;br /&gt;
* Manual installation; download the OVA file (in the case of vSphere)&lt;br /&gt;
* Masters created manually&lt;br /&gt;
* Workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, using preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
Changing from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can list more resource short names with:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip || installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=650</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=650"/>
		<updated>2025-09-17T11:50:46Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Scaling resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images the web server listens on port 80, which leads to permission problems in OpenShift, as the security context constraints do not allow apps to bind to privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image where port &amp;gt;= 1024 is used&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &amp;quot;if [ $(date +%H) -gt 15 ]; then echo &#039;Hands up, weekend!&#039;; fi&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
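Secret values are stored base64-encoded in the secret&#039;s data field. As a local illustration (pure shell, no cluster needed; the jsonpath in the comment is just an example), this is what the API stores for the literal above:

```shell
# The value stored under .data.foo for --from-literal=foo=bar:
printf 'bar' | base64
# -> YmFy

# Decoding it again, as you would with the output of
#   oc get secret test -o jsonpath='{.data.foo}'
printf 'YmFy' | base64 -d
# -> bar
```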
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G )&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                   (in|without namespace)(openshift specific)(core api group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and PVCs are not included and must be listed separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get Logs of a node (and special unit)&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs limits:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic picture run:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show Uptime:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Reduce/increase CPU/memory requests ===&lt;br /&gt;
Reduce memory requests:&lt;br /&gt;
 $ oc set resources deployment/huge-mem --requests memory=250Mi&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten !&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
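The inline $(base64 -w0 ...) substitution above only works because -w0 disables line wrapping, so the encoded file becomes a single line that can be spliced into the JSON patch. A local sanity check of the round trip (pure shell, with a tiny stand-in config instead of the real file):

```shell
# Encode a tiny stand-in config without line wraps (-w0, GNU coreutils)
printf 'global:\n  resolve_timeout: 5m\n' | base64 -w0
# -> Z2xvYmFsOgogIHJlc29sdmVfdGltZW91dDogNW0K

# Decode it again to verify nothing was mangled
printf 'Z2xvYmFsOgogIHJlc29sdmVfdGltZW91dDogNW0K' | base64 -d
# -> global:
# ->   resolve_timeout: 5m
```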
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;the deployment resource had no rollout option -&amp;gt; you had to patch something to make it restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Autoscale Pods&lt;br /&gt;
 $ oc autoscale deployment/test --min 2 --max 10 --cpu-percent 80&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting)&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not strictly necessary - draining cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(check the output and delete secrets if necessary before sharing!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
A namespace gets stuck in status &amp;quot;Terminating&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Watch out for secrets that are left over and not deletable.&lt;br /&gt;
Set the finalizers to an empty list:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the host.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines under the pod template&#039;s &amp;quot;spec&amp;quot; (note that &amp;quot;containers&amp;quot; is a list):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must also grant the anyuid SCC to the service account under which the deployment runs. If nothing else is configured, this is the default service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
The cluster is installed by the installer; the user only has to provide some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User-provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already exist&lt;br /&gt;
* Manual installation; download the OVA file (in the case of vSphere)&lt;br /&gt;
* Masters created manually&lt;br /&gt;
* Workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, using preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
Changing from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can list more resource short names with:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip || installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=649</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=649"/>
		<updated>2025-09-16T06:23:56Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Reduce memory requests */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better store them in secrets and configmaps, see below)&lt;br /&gt;
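A minimal sketch of the secret-based alternative, assuming a deployment named mariadb (the secret name mariadb-creds is only an example):

```shell
# Store the credentials in a secret instead of plain environment variables
oc create secret generic mariadb-creds \
  --from-literal=MARIADB_PASSWORD=wikitopsecret \
  --from-literal=MARIADB_ROOT_PASSWORD=gehheim

# Inject every key of the secret as an environment variable
oc set env deployment/mariadb --from=secret/mariadb-creds
```

This keeps the values out of the deployment spec; they live only in the secret.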
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which causes permission problems in OpenShift because the default security context constraints do not allow apps to bind privileged ports (&amp;lt; 1024).&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or grant the required permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;mysql -e &#039;create database events&#039;; mysql events -e &#039;source /tmp/dump.sql&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &#039;if [ $(date +%H) -gt 15 ]; then echo &amp;quot;Hands up, weekend!&amp;quot;; fi&#039;&lt;br /&gt;
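One quoting detail worth double-checking here: with double quotes around the /bin/bash -c string, your local shell expands $(date +%H) once at creation time; single quotes pass the string through so it is evaluated inside the container on every run. A quick local comparison:

```shell
# Double quotes: command substitution happens now, in the local shell
echo "hour is $(date +%H)"
# Single quotes: the literal string survives and is evaluated later
echo 'hour is $(date +%H)'   # prints: hour is $(date +%H)
```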
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G)&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not included and must be listed separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (optionally of a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs limits:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful! Only the free memory is shown, not the allocatable memory. For a more realistic picture use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show Uptime:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers:&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Reduce/raise CPU/memory requests ===&lt;br /&gt;
Reduce memory requests:&lt;br /&gt;
 $ oc set resources deployment/huge-mem --requests memory=250Mi&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
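To verify the encoding before touching the cluster, the payload can be built locally first (the path /tmp/demo.yaml is only an example):

```shell
# Build the JSON patch payload locally; no cluster access needed
printf 'global:\n  resolve_timeout: 5m\n' > /tmp/demo.yaml
PAYLOAD='{"data": {"config.yaml": "'$(base64 -w0 /tmp/demo.yaml)'"}}'
echo "$PAYLOAD"
```

base64 -w0 disables line wrapping, which is required because the JSON value must be a single line.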
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;in older versions the deployment resource had no rollout option -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - draining cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
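The select filter can be tried out locally on a fabricated sample line (not a real audit record) before running it against node logs:

```shell
# Sample event: verb matches "delete", resource is not in the exclusion list,
# so the filter passes the record through unchanged
echo '{"verb":"delete","objectRef":{"resource":"pods"}}' |
  jq -c 'select(.verb == "delete" and .objectRef.resource != "routes")'
# prints: {"verb":"delete","objectRef":{"resource":"pods"}}
```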
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(delete secrets from the output if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
The namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for secrets that are left over and not deletable.&lt;br /&gt;
Set their finalizers to an empty list:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
Should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the node.&lt;br /&gt;
&lt;br /&gt;
In the deployment add the following lines under the &amp;quot;spec&amp;quot; statement of the pod template:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;container name&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must grant the &#039;&#039;&#039;anyuid&#039;&#039;&#039; SCC to the service account under which the deployment runs. If nothing is configured, this is normally the default service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the install command; the user only provides some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User provisioned infrastructure cluster&lt;br /&gt;
* DNS and Loadbalancing must already be there&lt;br /&gt;
* Installation manually, download ova file (in case of vSphere)&lt;br /&gt;
* master created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, uses preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
A change from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can get more shortcuts by typing:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip ||installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=648</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=648"/>
		<updated>2025-09-15T19:07:11Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Change resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You may need it for login or for automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search Images by help of podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; you&#039;d better set secrets and configmaps, s. below)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80 which leads to permission problems in OpenShift as security context constraints do not allow to run apps on privileged ports&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image where port &amp;gt;= 1024 is used&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;mysql -e &#039;create database events&#039;; mysql events -e &#039;source /tmp/dump.sql&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &#039;if [ $(date +%H) -gt 15 ]; then echo &amp;quot;Hands up, weekend!&amp;quot;; fi&#039;&lt;br /&gt;
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G)&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: control resulting files for secrets, passwords, privatekeys etc. before sending somewhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs will be shown outside resources)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get Logs of a node (and special unit)&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs limits:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful !&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic presentation do:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show Uptime:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
 List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Reduce memory requests ===&lt;br /&gt;
 $ oc set resources deployment/huge-mem --requests memory=250Mi&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch a resource using a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
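This works because the shell substitutes the base64-encoded file into the JSON before oc sees it. A minimal local sketch of that round trip (file name and content are placeholders, not the real alertmanager.yaml):&lt;br /&gt;

```shell
# Write a tiny stand-in config file (placeholder content)
printf 'global:\n  resolve_timeout: 5m\n' | tee /tmp/am-demo.yaml
# Encode it the same way the oc patch command does (-w0 = no line wrapping, GNU base64)
enc=$(base64 -w0 /tmp/am-demo.yaml)
# Build the JSON payload that would be passed to: oc patch secret ... -p "$payload"
payload='{"data": {"config.yaml": "'$enc'"}}'
echo "$payload"
# Decoding the value restores the original file
echo "$enc" | base64 -d
```

The same decode step (base64 -d) is handy to verify what is currently stored in the secret.&lt;br /&gt;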
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;in older versions the deployment resource had no rollout option -&amp;gt; you had to patch something to force a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Drain a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - it will be cordoned anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(check the output for secrets and delete them before sharing!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
A namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for leftover secrets that cannot be deleted.&lt;br /&gt;
Clear their finalizers:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
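When several secrets are stuck, the same merge patch can be applied in a loop; a dry-run sketch (namespace and secret names are placeholders - remove the echo to actually patch):&lt;br /&gt;

```shell
# Placeholders: replace with the real namespace and the output of: oc get secrets -n "$NS" -o name
NS=ocp-cluster-iam-entw
PATCH='{"metadata":{"finalizers":[]}}'
for S in secret/stuck-a secret/stuck-b; do
  # echo makes this a dry run; drop it to execute the patch
  echo oc patch -n "$NS" "$S" -p "$PATCH" --type=merge
done
```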
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
Should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the node.&lt;br /&gt;
&lt;br /&gt;
In the deployment add the following lines under the &amp;quot;spec&amp;quot; statement of the pod template:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must also grant the anyuid SCC to the serviceaccount the deployment runs under; if nothing is configured this is normally &#039;&#039;default&#039;&#039;:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the install command; the user only provides some basic information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User-provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already exist&lt;br /&gt;
* manual installation, download of the OVA file (in case of vSphere)&lt;br /&gt;
* masters created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
A change from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can get more shortcuts by typing:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip ||installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=647</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=647"/>
		<updated>2025-08-27T12:43:22Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 $ oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which causes permission problems in OpenShift because the default security context constraints do not allow apps to bind to privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &#039;if [ $(date +%H) -gt 15 ]; then echo &amp;quot;Hands up, weekend!&amp;quot;; fi&#039;&lt;br /&gt;
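Whether $(date +%H) is evaluated on the client at creation time or later inside the container depends on the quoting of the -c argument: double quotes let the local shell expand it immediately, single quotes keep it literal. A quick local illustration:&lt;br /&gt;

```shell
# Double quotes: $(...) is expanded immediately by the local shell
expanded="hour is $(date +%H)"
# Single quotes: the command substitution stays literal until the pod runs it
deferred='hour is $(date +%H)'
echo "$expanded"
echo "$deferred"
```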
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt; /var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G )&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: control resulting files for secrets, passwords, privatekeys etc. before sending somewhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not included and must be listed separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
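The one-liner works because xargs -n 1 runs oc get once per resource type read from stdin; a local illustration with echo standing in for the real oc get:&lt;br /&gt;

```shell
# echo stands in for "oc get" so this runs without a cluster;
# each input line becomes one separate invocation
printf 'pods\nservices\nroutes\n' | xargs -n 1 echo oc get --show-kind --ignore-not-found -n mynamespace
```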
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (and a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs limits:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic picture use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show Uptime:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Check whether a machineconfigpool is paused:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers:&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
 List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch a resource using a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;in older versions the deployment resource had no rollout option -&amp;gt; you had to patch something to force a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Drain a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - it will be cordoned anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(check the output for secrets and delete them before sharing!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
A namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for leftover secrets that cannot be deleted.&lt;br /&gt;
Clear their finalizers:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the node.&lt;br /&gt;
&lt;br /&gt;
In the deployment add the following lines under &amp;quot;spec.template.spec&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  template:&lt;br /&gt;
    spec:&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: &amp;lt;mycontainer&amp;gt;&lt;br /&gt;
        securityContext:&lt;br /&gt;
          runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must grant the &#039;&#039;anyuid&#039;&#039; SCC to the service account under which the deployment runs. If nothing is configured, this is normally the &#039;&#039;default&#039;&#039; service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the installer; the user only has to provide some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User-provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already exist&lt;br /&gt;
* Manual installation; download the OVA file (in case of vSphere)&lt;br /&gt;
* Masters created manually&lt;br /&gt;
* Worker nodes recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load-balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
A change from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can list more resource short names with:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip || installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=646</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=646"/>
		<updated>2025-08-25T17:38:54Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* pvc */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Here are some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 $ oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; prefer secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers listen on port 80, which causes permission problems in OpenShift because the security context constraints do not allow apps to bind to privileged ports (below 1024).&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or grant the corresponding service account the required permissions&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &amp;quot;if [ \$(date +%H) -gt 15 ]; then echo &#039;Hands up, weekend!&#039;; fi&amp;quot;&lt;br /&gt;
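The hour check inside the cronjob can be exercised locally with a fixed value in place of date +%H (a sketch, independent of any cluster):&lt;br /&gt;

```shell
# Same test as in the cronjob, with a hard-coded hour for demonstration
HOUR=16
if [ "$HOUR" -gt 15 ]; then echo 'Hands up, weekend!'; fi
# prints: Hands up, weekend!
```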
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class &amp;lt;storage class&amp;gt; --claim-mode RWX|RWO --claim-size 1G)&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not included; query them separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (and a specific systemd unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs limits:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic view run:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show machines and their age:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers:&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods belonging to the deploymentconfig mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
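The same lookup can be reproduced offline with jq on a fabricated pod list (a sketch; the probe values below are invented):&lt;br /&gt;

```shell
# Fabricated pod list mimicking the structure returned by "oc get pods -o json"
echo '{"items":[{"spec":{"containers":[{"readinessProbe":{"periodSeconds":10,"timeoutSeconds":1}}]}}]}' \
  | jq -c '.items[0].spec.containers[0].readinessProbe'
# → {"periodSeconds":10,"timeoutSeconds":1}
```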
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
Check which users can perform an action on a resource:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
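Before patching you can verify that the base64 payload round-trips back to the original file (a sketch; -w0 and -d are GNU coreutils options, and the demo file is a stand-in for /tmp/alertmanager.yaml):&lt;br /&gt;

```shell
# Create a tiny stand-in config, encode it, and check the round trip
printf 'global:\n  resolve_timeout: 5m\n' > /tmp/am-demo.yaml
ENC=$(base64 -w0 /tmp/am-demo.yaml)
echo "$ENC" | base64 -d | diff -q - /tmp/am-demo.yaml; echo "roundtrip rc=$?"
# prints: roundtrip rc=0
```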
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;formerly the deployment resource had no rollout option -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting)&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it; drain cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same for all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Disk space allocation of the logging stack:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a local directory must-gather.local.XXXXXX with the collected data&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(Check the collected files and delete secrets if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Gather information for a specific resource and a limited time window:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
A namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for leftover secrets that cannot be deleted.&lt;br /&gt;
Clear their finalizers:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the node.&lt;br /&gt;
&lt;br /&gt;
In the deployment add the following lines under &amp;quot;spec.template.spec&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  template:&lt;br /&gt;
    spec:&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: &amp;lt;mycontainer&amp;gt;&lt;br /&gt;
        securityContext:&lt;br /&gt;
          runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must grant the &#039;&#039;anyuid&#039;&#039; SCC to the service account under which the deployment runs. If nothing is configured, this is normally the &#039;&#039;default&#039;&#039; service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the installer; the user only has to provide some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User-provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already exist&lt;br /&gt;
* Manual installation; download the OVA file (in case of vSphere)&lt;br /&gt;
* Masters created manually&lt;br /&gt;
* Worker nodes recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load-balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
A change from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can list more resource short names with:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip || installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=645</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=645"/>
		<updated>2025-08-25T17:35:46Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Add volume to deployment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Here are some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 $ oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; prefer secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers listen on port 80, which causes permission problems in OpenShift because the security context constraints do not allow apps to bind to privileged ports (below 1024).&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or grant the corresponding service account the required permissions&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &amp;quot;if [ \$(date +%H) -gt 15 ]; then echo &#039;Hands up, weekend!&#039;; fi&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class ... --claim-mode RWX|RWO --claim-size 1G)&lt;br /&gt;
&lt;br /&gt;
== Remove volume from deployment ==&lt;br /&gt;
&lt;br /&gt;
 $ oc set volume deployment/file-sharing --remove --name=&amp;lt;vol-name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not part of the &amp;quot;all&amp;quot; output and must be listed separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
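The xargs fan-out above runs one separate &#039;&#039;oc get&#039;&#039; per resource type. A local sketch of the mechanism, with &#039;&#039;echo get&#039;&#039; standing in for &#039;&#039;oc get&#039;&#039; so it runs without a cluster:&lt;br /&gt;

```shell
# -n 1 turns every input line into its own command invocation;
# 'echo get' is a stand-in for 'oc get' for demonstration purposes.
printf 'pods\nservices\nconfigmaps\n' | xargs -n 1 echo get
```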
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (optionally of a specific systemd unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs limits:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful: only the free memory is shown, not the allocatable memory. For a more realistic presentation use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show machines in all namespaces (the AGE column shows the uptime):&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and their containers:&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods belonging to the mysql deploymentconfig:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs (OpenShift SDN only; with OVN-Kubernetes use &#039;&#039;oc get egressip&#039;&#039;):&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten !&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
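The $(base64 -w0 ...) part inlines the whole file as one unwrapped base64 string, which is the form the secret&#039;s data field expects. A round-trip sketch using a small dummy file (the path /tmp/alertmanager-demo.yaml is only an example):&lt;br /&gt;

```shell
# Write a small dummy config (stand-in for /tmp/alertmanager.yaml).
printf 'global:\n  resolve_timeout: 5m\n' > /tmp/alertmanager-demo.yaml
# -w0 disables line wrapping so the value is one continuous token,
# suitable for embedding into the JSON patch shown above.
ENCODED=$(base64 -w0 /tmp/alertmanager-demo.yaml)
echo "$ENCODED"
# Decoding must give back the original file content.
printf '%s' "$ENCODED" | base64 -d
```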
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;older versions of the deployment resource had no rollout option -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
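The obsolete variant works by changing an annotation in the pod template: any change there makes the template hash differ and triggers a new rollout. Building that JSON patch string locally (no cluster needed):&lt;br /&gt;

```shell
# Embed the current epoch seconds as the annotation value; each run
# produces a different value, so the pod template always changes.
PATCH="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"last-restart\":\"$(date +%s)\"}}}}}"
echo "$PATCH"
```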
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before a reboot):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - draining cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same for all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
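The same filter chain demonstrated on sample lines (made-up kubelet messages, not real output):&lt;br /&gt;

```shell
# grep -E keeps only probe-related lines; the second grep narrows the
# result down to a single minute. The log lines below are sample data.
printf '%s\n' \
  'Aug 21 11:22:01 node01 kubelet: Liveness probe failed' \
  'Aug 21 11:22:05 node01 kubelet: Pulled image' \
  'Aug 21 11:23:10 node01 kubelet: Readiness probe failed' |
  grep -E 'Liveness|Readiness' | grep 'Aug 21 11:22'
```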
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(delete secrets from the output if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
Namespace gets stuck in status terminating&lt;br /&gt;
&lt;br /&gt;
Watch out for leftover secrets that cannot be deleted.&lt;br /&gt;
Clear their finalizers:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the node.&lt;br /&gt;
&lt;br /&gt;
In the deployment add following lines under the &amp;quot;spec&amp;quot; statement:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;mycontainer&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must allow the serviceaccount under which the deployment runs to use the &#039;&#039;anyuid&#039;&#039; SCC. If nothing else is configured, this is normally the &#039;&#039;default&#039;&#039; serviceaccount:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the installer; the user only provides some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User provisioned infrastructure cluster&lt;br /&gt;
* DNS and Loadbalancing must already be there&lt;br /&gt;
* manual installation; download the OVA file (in the case of vSphere)&lt;br /&gt;
* masters created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, uses preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no loadbalancer outage during updates&lt;br /&gt;
&lt;br /&gt;
Change from IPI -&amp;gt; UPI not possible&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can get more shortcuts by typing:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip ||installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=Drucker_einrichten_mit_CUPS&amp;diff=644</id>
		<title>Drucker einrichten mit CUPS</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=Drucker_einrichten_mit_CUPS&amp;diff=644"/>
		<updated>2025-08-18T15:22:20Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Alternative KDE */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Ausgangssituation=&lt;br /&gt;
OS: Debian &amp;gt;= buster&amp;lt;br&amp;gt;&lt;br /&gt;
Drucker: &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;Canon Pixma iP4300&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;HP Deskjet 1000&amp;lt;br&amp;gt;&lt;br /&gt;
Server: Cups&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Arbeitsschritte =&lt;br /&gt;
== Im Vorfeld ==&lt;br /&gt;
Bevor man sich einen Drucker zulegt, will man vielleicht unter&lt;br /&gt;
http://www.openprinting.org/printers&lt;br /&gt;
überprüfen, ob das gute Stück auch von Linux unterstützt wird.&lt;br /&gt;
==  Treiber holen ==&lt;br /&gt;
Canon bietet dankenswerterweise generische Treiber: &amp;lt;br&amp;gt;&lt;br /&gt;
http://de.software.canon-europe.com/ &amp;lt;br&amp;gt;&lt;br /&gt;
tar.gz runterladen und auspacken.&lt;br /&gt;
&lt;br /&gt;
Für hp gibt es unter der Sammelbezeichnung &#039;&#039;&#039;hplip&#039;&#039;&#039; eine große Menge Treiber für alle gängigen Modelle&lt;br /&gt;
&lt;br /&gt;
http://sourceforge.net/projects/hplip/files/hplip/&lt;br /&gt;
&lt;br /&gt;
== Treiber installieren ==&lt;br /&gt;
=== Canon===&lt;br /&gt;
Da es nur rpms gibt, müssen die Treiber erst in Debianpakete verwandelt werden.&lt;br /&gt;
Dafür gibt es das praktische Tool &#039;alien&#039;:&lt;br /&gt;
 # alien --script cnijfilter-common-2.70-2.i386.rpm&lt;br /&gt;
 # alien --script cnijfilter-ip4300-2.70-2.i386.rpm&lt;br /&gt;
Nun die Treiber nach altbekannter Manier installieren:&lt;br /&gt;
 # dpkg -i cnijfilter-common-2.70-3.i386.deb&lt;br /&gt;
 # dpkg -i cnijfilter-ip4300-2.70-3.i386.deb&lt;br /&gt;
Cups (bzw. das Paket &#039;&#039;&#039;cups&#039;&#039;&#039;) installieren und Daemon starten. Cups lauscht per&lt;br /&gt;
default auf Port 631, also im Browser&lt;br /&gt;
 http://localhost:631&lt;br /&gt;
eingeben. Unter &amp;quot;Administration -&amp;gt; add printer&amp;quot; bekommt man eine Auswahlmaske (s.u.)&lt;br /&gt;
&lt;br /&gt;
===HP===&lt;br /&gt;
Mittlerweile gibt es für hp das Debianpaket &#039;&#039;&#039;hplip&#039;&#039;&#039;. Dort sind alle gängigen Modelle enthalten.&lt;br /&gt;
&lt;br /&gt;
 # apt install hplip&lt;br /&gt;
&lt;br /&gt;
(Wer sich im o.g. Link die neueste Version heruntergeladen hat, muss diese mittels&lt;br /&gt;
 hplip-&amp;lt;version&amp;gt;.run installieren.)&lt;br /&gt;
&lt;br /&gt;
Falls im laufenden Betrieb Schwierigkeiten entstehen sollten, sollte man überprüfen, ob folgende Pakete vorhanden sind.&lt;br /&gt;
* libcups2-dev&lt;br /&gt;
* libcupsimage2-dev&lt;br /&gt;
&lt;br /&gt;
Für Debugging / manuelle Einstellungen gibt es eine Reihe hp-* commands, auf die hier aber nicht näher eingegangen wird, da wir den Drucker direkt  im CUPS einrichten.&lt;br /&gt;
&lt;br /&gt;
Für das Auswählen von weiteren Optionen (scan, fax,...) sollten auch folgende Pakete installiert sein:&lt;br /&gt;
* xsane&lt;br /&gt;
* libsane-dev&lt;br /&gt;
* python-imaging&lt;br /&gt;
* python3-pyqt4&lt;br /&gt;
* python-notify&lt;br /&gt;
&lt;br /&gt;
Drucker im CUPS einrichten. Die Web-GUI lauscht auf Port 631, also in Browser http://localhost:631 eingeben.&lt;br /&gt;
Unter &amp;quot;Administration&amp;quot;  Add Printer auswählen.&lt;br /&gt;
&lt;br /&gt;
==  Cups konfigurieren ==&lt;br /&gt;
 Name:&lt;br /&gt;
 Location:&lt;br /&gt;
 Description:&lt;br /&gt;
Hier ist es ziemlich egal, was man einträgt, ich würde aber keinen zu langen&lt;br /&gt;
Namen wählen.&lt;br /&gt;
 Continue -&amp;gt; Device:&lt;br /&gt;
Hier sollte im Drop-Down-Menü bereits der richtige Name erscheinen, in dem Fall&lt;br /&gt;
&amp;quot;Canon iP4300 USB #1 (Canon iP4300)&amp;quot; Ist das nicht der Fall, überprüfen, ob man&lt;br /&gt;
den richtigen Treiber erwischt hat!&lt;br /&gt;
 Continue -&amp;gt; Model (oder Make):&lt;br /&gt;
Dort sollte &#039;Canon&#039;, &#039;HP&#039; usw. erscheinen&amp;lt;br&amp;gt;&lt;br /&gt;
Alternativ den Punkt &#039;Provide a PPD File&#039; wählen:&amp;lt;br&amp;gt;&lt;br /&gt;
Diese Dateien befinden sich bei Canon unter /usr/share/cups/model&lt;br /&gt;
&lt;br /&gt;
Wer den genauen Namen seines Druckers angeben will, kann diesen per USB anschliessen und mit&lt;br /&gt;
 lsusb&lt;br /&gt;
ermitteln.&lt;br /&gt;
&lt;br /&gt;
Fertig. Unter &amp;quot;Printers&amp;quot; kann man sich den Drucker anschauen u. gegebenenfalls&lt;br /&gt;
per &amp;quot;modify&amp;quot; Änderungen vornehmen. Der Punkt &amp;quot;Set default printer&amp;quot; ermöglicht&lt;br /&gt;
ein Drucken per &#039;lpr&#039; ohne Setzen einer -P-Option.&lt;br /&gt;
&#039;&#039;&#039;Achtung:&#039;&#039;&#039; Falls z.B. nach einem versuchten Probedruck folgende Meldung&lt;br /&gt;
erscheint:&lt;br /&gt;
 &amp;quot;Unable to start filter &amp;quot;pstocanonij&amp;quot; - No such file or directory.&amp;quot;&lt;br /&gt;
muss diese im Verzeichnis &lt;br /&gt;
/usr/lib/cups/filter hinterlegt werden. Danach den Drucker nochmal neu einrichten und sich mit&lt;br /&gt;
einer Testseite davon überzeugen, dass alles geklappt hat.&lt;br /&gt;
&lt;br /&gt;
Es besteht auch die Möglichkeit, das ppd File manuell nachzubearbeiten, z.B. Default Size von letter auf A4 umstellen.&lt;br /&gt;
Die vorhin von CUPS generierte Datei befindet sich unter&lt;br /&gt;
 /etc/cups/ppd/&lt;br /&gt;
&lt;br /&gt;
==Debugging==&lt;br /&gt;
Wenn nicht alles so läuft, wie es soll, kann man über die Web-GUI Debugging einschalten (Administration -&amp;gt; server -&amp;gt; Save debugging information for troubleshooting)&lt;br /&gt;
Dann nochmal drucken und in die Logfiles schauen. Nicht vergessen, diese Option später wieder rauszunehmen, da diese ein Vielfaches an Output erzeugt.&lt;br /&gt;
&lt;br /&gt;
==Known Bugs==&lt;br /&gt;
&lt;br /&gt;
=== Drucker stoppt selbständig ===&lt;br /&gt;
Manchmal &amp;quot;hängt&amp;quot; ein Drucker einfach bzw. liefert keinen Output. In der Übersicht im Webfrontend sieht man dann &lt;br /&gt;
 &#039;&#039;Paused - &amp;quot;File &amp;quot;/usr/lib/cups/filter/hpcups&amp;quot; not available: No such file or directory&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In diesem Fall geht man auf die Printer-Übersicht und klickt bei Maintenance den Punkt &amp;quot;Resume Printer&amp;quot; an.&lt;br /&gt;
&lt;br /&gt;
=== Fehlende Library ===&lt;br /&gt;
&lt;br /&gt;
Wenn im Debug Output (s.o.) das Fehlen einer Datei libpng.so.3 angemeckert wird oder ein Output erscheint wie&lt;br /&gt;
 &#039;&#039;/usr/local/bin/cifip4300: error while loading shared libraries: /usr/lib/libpng.so.3: invalid ELF header&#039;&#039;&lt;br /&gt;
muss diese Datei richtig verlinkt werden -&amp;gt; Mit dpkg ermitteln, welche libpng gerade installiert ist und den entsprechenden Link ins /lib-Verzeichnis setzen:&lt;br /&gt;
 # ln -s /lib/libpng12.so.0 libpng.so.3&lt;br /&gt;
&lt;br /&gt;
== Alternative KDE ==&lt;br /&gt;
Sobald man den Drucker per USB angeschlossen hat, erscheint ein Pop-up-Fenster, in dem man die Druckereinstellungen bei Bedarf modifizieren kann.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot kdeprinting.png|500px]]&lt;br /&gt;
&lt;br /&gt;
Wenn es später nochmal nötig wird, Veränderungen vorzunehmen, kann man dies mithilfe&lt;br /&gt;
 configure-printer &amp;lt;printername&amp;gt;&lt;br /&gt;
tun.&lt;br /&gt;
Den Druckernamen erfährt man per&lt;br /&gt;
 lsusb&lt;br /&gt;
Wer nur einen Drucker angeschlossen hat, kann auch&lt;br /&gt;
 configure-printer default&lt;br /&gt;
versuchen.&lt;br /&gt;
&lt;br /&gt;
== Alternative Gutenprint ==&lt;br /&gt;
Falls die Treiber des Herstellers Probleme machen, kann man auch (den in cups enthaltenen) Gutenprint-Treiber verwenden. Dazu im Webfrontend (localhost:631) den Drucker anwählen und unter &amp;quot;Administration-&amp;gt;modify&amp;quot; das Modell des Herstellers + gutenprint wählen, also z.B. &amp;quot;Canon Pixma iP4300 - CUPS+Gutenprint v5.2.9&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Alternative Turboprint ==&lt;br /&gt;
Dieses ist ein kommerzielles Programm für Linux-Druckertreiber. Kostenpunkt: ca. 30&lt;br /&gt;
€ (Stand März 2011): (http://www.turboprint.de/)&lt;br /&gt;
&lt;br /&gt;
Hier ist nur die Demoversion beschrieben. Diese hat den Nachteil, dass auf jedem&lt;br /&gt;
Ausdruck ein hässliches Turboprint-Label erscheint.&amp;lt;br&amp;gt;&lt;br /&gt;
turboprint herunterladen, auspacken, installieren (./setup-Befehl ausführen),&lt;br /&gt;
danach gibt es die Möglichkeit, eine Testseite zu drucken. &amp;lt;br&amp;gt;&lt;br /&gt;
In den Ordner /usr/share/turboprint/ppd wechseln, dort das zum Drucker passende&lt;br /&gt;
file auswählen und in den Cups-Ordner kopieren, also in meinem Fall:&lt;br /&gt;
 # cd /usr/share/turboprint/ppd&lt;br /&gt;
 # cp Canon_PIXMA_iP4300.ppd /usr/share/cups/model/&lt;br /&gt;
Dann im Cups ebendiese Datei auswählen&lt;br /&gt;
 -&amp;gt; Add printer&lt;br /&gt;
Das ist natürlich ein sehr spezifisches Beispiel, aber das Funktionsprinzip ist&lt;br /&gt;
ähnlich für fast alle Drucker.&lt;br /&gt;
&lt;br /&gt;
Zu guter Letzt noch ein paar Tipps zum Finetuning des Druckers:&amp;lt;br&amp;gt;&lt;br /&gt;
http://wiki.ubuntuusers.de/Canon-Drucker&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=File:Screenshot_kdeprinting.png&amp;diff=643</id>
		<title>File:Screenshot kdeprinting.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=File:Screenshot_kdeprinting.png&amp;diff=643"/>
		<updated>2025-08-18T15:20:39Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=Drucker_einrichten_mit_CUPS&amp;diff=642</id>
		<title>Drucker einrichten mit CUPS</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=Drucker_einrichten_mit_CUPS&amp;diff=642"/>
		<updated>2025-08-18T15:19:51Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Ausgangssituation=&lt;br /&gt;
OS: Debian &amp;gt;= buster&amp;lt;br&amp;gt;&lt;br /&gt;
Drucker: &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;Canon Pixma iP4300&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;HP Deskjet 1000&amp;lt;br&amp;gt;&lt;br /&gt;
Server: Cups&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Arbeitsschritte =&lt;br /&gt;
== Im Vorfeld ==&lt;br /&gt;
Bevor man sich einen Drucker zulegt, will man vielleicht unter&lt;br /&gt;
http://www.openprinting.org/printers&lt;br /&gt;
überprüfen, ob das gute Stück auch von Linux unterstützt wird.&lt;br /&gt;
==  Treiber holen ==&lt;br /&gt;
Canon bietet dankenswerterweise generische Treiber: &amp;lt;br&amp;gt;&lt;br /&gt;
http://de.software.canon-europe.com/ &amp;lt;br&amp;gt;&lt;br /&gt;
tar.gz runterladen und auspacken.&lt;br /&gt;
&lt;br /&gt;
Für hp gibt es unter der Sammelbezeichnung &#039;&#039;&#039;hplip&#039;&#039;&#039; eine große Menge Treiber für alle gängigen Modelle&lt;br /&gt;
&lt;br /&gt;
http://sourceforge.net/projects/hplip/files/hplip/&lt;br /&gt;
&lt;br /&gt;
== Treiber installieren ==&lt;br /&gt;
=== Canon===&lt;br /&gt;
Da es nur rpms gibt, müssen die Treiber erst in Debianpakete verwandelt werden.&lt;br /&gt;
Dafür gibt es das praktische Tool &#039;alien&#039;:&lt;br /&gt;
 # alien --script cnijfilter-common-2.70-2.i386.rpm&lt;br /&gt;
 # alien --script cnijfilter-ip4300-2.70-2.i386.rpm&lt;br /&gt;
Nun die Treiber nach altbekannter Manier installieren:&lt;br /&gt;
 # dpkg -i cnijfilter-common-2.70-3.i386.deb&lt;br /&gt;
 # dpkg -i cnijfilter-ip4300-2.70-3.i386.deb&lt;br /&gt;
Cups (bzw. das Paket &#039;&#039;&#039;cups&#039;&#039;&#039;) installieren und Daemon starten. Cups lauscht per&lt;br /&gt;
default auf Port 631, also im Browser&lt;br /&gt;
 http://localhost:631&lt;br /&gt;
eingeben. Unter &amp;quot;Administration -&amp;gt; add printer&amp;quot; bekommt man eine Auswahlmaske (s.u.)&lt;br /&gt;
&lt;br /&gt;
===HP===&lt;br /&gt;
Mittlerweile gibt es für hp das Debianpaket &#039;&#039;&#039;hplip&#039;&#039;&#039;. Dort sind alle gängigen Modelle enthalten.&lt;br /&gt;
&lt;br /&gt;
 # apt install hplip&lt;br /&gt;
&lt;br /&gt;
(Wer sich im o.g. Link die neueste Version heruntergeladen hat, muss diese mittels&lt;br /&gt;
 hplip-&amp;lt;version&amp;gt;.run installieren.)&lt;br /&gt;
&lt;br /&gt;
Falls im laufenden Betrieb Schwierigkeiten entstehen sollten, sollte man überprüfen, ob folgende Pakete vorhanden sind.&lt;br /&gt;
* libcups2-dev&lt;br /&gt;
* libcupsimage2-dev&lt;br /&gt;
&lt;br /&gt;
Für Debugging / manuelle Einstellungen gibt es eine Reihe hp-* commands, auf die hier aber nicht näher eingegangen wird, da wir den Drucker direkt  im CUPS einrichten.&lt;br /&gt;
&lt;br /&gt;
Für das Auswählen von weiteren Optionen (scan, fax,...) sollten auch folgende Pakete installiert sein:&lt;br /&gt;
* xsane&lt;br /&gt;
* libsane-dev&lt;br /&gt;
* python-imaging&lt;br /&gt;
* python3-pyqt4&lt;br /&gt;
* python-notify&lt;br /&gt;
&lt;br /&gt;
Drucker im CUPS einrichten. Die Web-GUI lauscht auf Port 631, also in Browser http://localhost:631 eingeben.&lt;br /&gt;
Unter &amp;quot;Administration&amp;quot;  Add Printer auswählen.&lt;br /&gt;
&lt;br /&gt;
==  Cups konfigurieren ==&lt;br /&gt;
 Name:&lt;br /&gt;
 Location:&lt;br /&gt;
 Description:&lt;br /&gt;
Hier ist es ziemlich egal, was man einträgt, ich würde aber keinen zu langen&lt;br /&gt;
Namen wählen.&lt;br /&gt;
 Continue -&amp;gt; Device:&lt;br /&gt;
Hier sollte im Drop-Down-Menü bereits der richtige Name erscheinen, in dem Fall&lt;br /&gt;
&amp;quot;Canon iP4300 USB #1 (Canon iP4300)&amp;quot; Ist das nicht der Fall, überprüfen, ob man&lt;br /&gt;
den richtigen Treiber erwischt hat!&lt;br /&gt;
 Continue -&amp;gt; Model (oder Make):&lt;br /&gt;
Dort sollte &#039;Canon&#039;, &#039;HP&#039; usw. erscheinen&amp;lt;br&amp;gt;&lt;br /&gt;
Alternativ den Punkt &#039;Provide a PPD File&#039; wählen:&amp;lt;br&amp;gt;&lt;br /&gt;
Diese Dateien befinden sich bei Canon unter /usr/share/cups/model&lt;br /&gt;
&lt;br /&gt;
Wer den genauen Namen seines Druckers angeben will, kann diesen per USB anschliessen und mit&lt;br /&gt;
 lsusb&lt;br /&gt;
ermitteln.&lt;br /&gt;
&lt;br /&gt;
Fertig. Unter &amp;quot;Printers&amp;quot; kann man sich den Drucker anschauen u. gegebenenfalls&lt;br /&gt;
per &amp;quot;modify&amp;quot; Änderungen vornehmen. Der Punkt &amp;quot;Set default printer&amp;quot; ermöglicht&lt;br /&gt;
ein Drucken per &#039;lpr&#039; ohne Setzen einer -P-Option.&lt;br /&gt;
&#039;&#039;&#039;Achtung:&#039;&#039;&#039; Falls z.B. nach einem versuchten Probedruck folgende Meldung&lt;br /&gt;
erscheint:&lt;br /&gt;
 &amp;quot;Unable to start filter &amp;quot;pstocanonij&amp;quot; - No such file or directory.&amp;quot;&lt;br /&gt;
muss diese im Verzeichnis &lt;br /&gt;
/usr/lib/cups/filter hinterlegt werden. Danach den Drucker nochmal neu einrichten und sich mit&lt;br /&gt;
einer Testseite davon überzeugen, dass alles geklappt hat.&lt;br /&gt;
&lt;br /&gt;
Es besteht auch die Möglichkeit, das ppd File manuell nachzubearbeiten, z.B. Default Size von letter auf A4 umstellen.&lt;br /&gt;
Die vorhin von CUPS generierte Datei befindet sich unter&lt;br /&gt;
 /etc/cups/ppd/&lt;br /&gt;
&lt;br /&gt;
==Debugging==&lt;br /&gt;
If things do not run as they should, you can enable debugging via the web GUI (Administration -&amp;gt; server -&amp;gt; Save debugging information for troubleshooting).&lt;br /&gt;
Then print again and look at the log files. Don&#039;t forget to disable this option again later, since it produces many times the normal output.&lt;br /&gt;
&lt;br /&gt;
==Known Bugs==&lt;br /&gt;
&lt;br /&gt;
=== Printer stops by itself ===&lt;br /&gt;
Sometimes a printer simply &amp;quot;hangs&amp;quot; or produces no output. The overview in the web frontend then shows&lt;br /&gt;
 &#039;&#039;Paused - &amp;quot;File &amp;quot;/usr/lib/cups/filter/hpcups&amp;quot; not available: No such file or directory&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In that case go to the printer overview and, under Maintenance, click &amp;quot;Resume Printer&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Missing library ===&lt;br /&gt;
&lt;br /&gt;
If the debug output (see above) complains about a missing file libpng.so.3, or you see output like&lt;br /&gt;
 &#039;&#039;/usr/local/bin/cifip4300: error while loading shared libraries: /usr/lib/libpng.so.3: invalid ELF header&#039;&#039;&lt;br /&gt;
the file has to be linked correctly -&amp;gt; use dpkg to find out which libpng is currently installed and set the corresponding link in the /lib directory:&lt;br /&gt;
 # ln -s /lib/libpng12.so.0 libpng.so.3&lt;br /&gt;
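The fix follows a general pattern: point the soname the binary asks for at the library that is actually installed. A minimal sketch of that pattern in a scratch directory (all paths and file names below are stand-ins, not the real /lib):&lt;br /&gt;

```shell
# Recreate the "missing soname -> installed library" link pattern in a
# temporary directory. File names are stand-ins for the real libraries.
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for the library that is actually installed (cf. libpng12.so.0)
touch libpng12.so.0

# Create the name the binary is looking for as a symlink to it
ln -s libpng12.so.0 libpng.so.3

# Show where the link points
readlink libpng.so.3
```

On a real system the link would of course be created in /lib (as root), and ldconfig may need to be re-run afterwards.&lt;br /&gt;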
&lt;br /&gt;
== Alternative KDE ==&lt;br /&gt;
As soon as the printer is connected via USB, a pop-up window appears in which the printer settings can be modified if necessary.&lt;br /&gt;
&lt;br /&gt;
If changes become necessary again later, this can be done with&lt;br /&gt;
 configure-printer &amp;lt;printername&amp;gt;&lt;br /&gt;
The printer name can be determined via&lt;br /&gt;
 lsusb&lt;br /&gt;
If only one printer is connected, you can also try&lt;br /&gt;
 configure-printer default&lt;br /&gt;
&lt;br /&gt;
== Alternative Gutenprint ==&lt;br /&gt;
If the vendor&#039;s drivers cause problems, you can also use the Gutenprint driver (included in cups). To do so, select the printer in the web frontend (localhost:631) and under &amp;quot;Administration-&amp;gt;modify&amp;quot; choose the vendor&#039;s model + gutenprint, e.g. &amp;quot;Canon Pixma iP4300 - CUPS+Gutenprint v5.2.9&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Alternative Turboprint ==&lt;br /&gt;
This is a commercial program providing Linux printer drivers. Price: approx. 30&lt;br /&gt;
€ (as of March 2011): (http://www.turboprint.de/)&lt;br /&gt;
&lt;br /&gt;
Only the demo version is described here. Its drawback is that every&lt;br /&gt;
printout carries an ugly Turboprint label.&amp;lt;br&amp;gt;&lt;br /&gt;
Download turboprint, unpack and install it (run the ./setup command);&lt;br /&gt;
afterwards you get the option to print a test page. &amp;lt;br&amp;gt;&lt;br /&gt;
Change into the folder /usr/share/turboprint/ppd, pick the file matching&lt;br /&gt;
your printer there and copy it into the Cups folder, in my case:&lt;br /&gt;
 # cd /usr/share/turboprint/ppd&lt;br /&gt;
 # cp Canon_PIXMA_iP4300.ppd /usr/share/cups/model/&lt;br /&gt;
Then select this very file in Cups&lt;br /&gt;
 -&amp;gt; Add printer&lt;br /&gt;
This is of course a very specific example, but the principle is&lt;br /&gt;
similar for almost all printers.&lt;br /&gt;
&lt;br /&gt;
Finally, a few tips for fine-tuning the printer:&amp;lt;br&amp;gt;&lt;br /&gt;
http://wiki.ubuntuusers.de/Canon-Drucker&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=File:Screenshot_cups_2025-08-18.png&amp;diff=641</id>
		<title>File:Screenshot cups 2025-08-18.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=File:Screenshot_cups_2025-08-18.png&amp;diff=641"/>
		<updated>2025-08-18T15:12:15Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=Drucker_einrichten_mit_CUPS&amp;diff=640</id>
		<title>Drucker einrichten mit CUPS</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=Drucker_einrichten_mit_CUPS&amp;diff=640"/>
		<updated>2025-08-18T15:01:59Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Cups konfigurieren */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Initial situation =&lt;br /&gt;
OS: Debian &amp;gt;= buster&amp;lt;br&amp;gt;&lt;br /&gt;
Printers: &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;Canon Pixma iP4300&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;HP Deskjet 1000&amp;lt;br&amp;gt;&lt;br /&gt;
Server: Cups&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Steps =&lt;br /&gt;
== Before you start ==&lt;br /&gt;
Before buying a printer, you may want to check at&lt;br /&gt;
http://www.openprinting.org/printers&lt;br /&gt;
whether the device is actually supported on Linux.&lt;br /&gt;
==  Getting the drivers ==&lt;br /&gt;
Canon thankfully offers generic drivers: &amp;lt;br&amp;gt;&lt;br /&gt;
http://de.software.canon-europe.com/ &amp;lt;br&amp;gt;&lt;br /&gt;
Download the tar.gz and unpack it.&lt;br /&gt;
&lt;br /&gt;
For HP there is a large set of drivers for all common models under the umbrella name &#039;&#039;&#039;hplip&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
http://sourceforge.net/projects/hplip/files/hplip/&lt;br /&gt;
&lt;br /&gt;
== Installing the drivers ==&lt;br /&gt;
=== Canon===&lt;br /&gt;
Since only RPMs are available, the drivers first have to be converted into Debian packages.&lt;br /&gt;
The handy tool &#039;alien&#039; does exactly that:&lt;br /&gt;
 # alien --script cnijfilter-common-2.70-2.i386.rpm&lt;br /&gt;
 # alien --script cnijfilter-ip4300-2.70-2.i386.rpm&lt;br /&gt;
Now install the drivers in the usual manner:&lt;br /&gt;
 # dpkg -i cnijfilter-common-2.70-3.i386.deb&lt;br /&gt;
 # dpkg -i cnijfilter-ip4300-2.70-3.i386.deb&lt;br /&gt;
Install Cups (i.e. the &#039;&#039;&#039;cups&#039;&#039;&#039; package) and start the daemon. Cups listens on port 631 by&lt;br /&gt;
default, so enter&lt;br /&gt;
 http://localhost:631&lt;br /&gt;
in your browser. Under &amp;quot;Administration -&amp;gt; add printer&amp;quot; you get a selection form (see below)&lt;br /&gt;
&lt;br /&gt;
===HP===&lt;br /&gt;
Meanwhile there is the Debian package &#039;&#039;&#039;hplip&#039;&#039;&#039; for HP. It contains all common models.&lt;br /&gt;
&lt;br /&gt;
 # apt install hplip&lt;br /&gt;
&lt;br /&gt;
(If you downloaded the latest version from the link above, you have to install it via&lt;br /&gt;
 hplip-&amp;lt;version&amp;gt;.run.)&lt;br /&gt;
&lt;br /&gt;
If difficulties arise during operation, check whether the following packages are installed:&lt;br /&gt;
* libcups2-dev&lt;br /&gt;
* libcupsimage2-dev&lt;br /&gt;
&lt;br /&gt;
For debugging / manual configuration there is a set of hp-* commands, which are not covered here since we set up the printer directly in CUPS.&lt;br /&gt;
&lt;br /&gt;
For selecting further options (scan, fax, ...) the following packages should also be installed:&lt;br /&gt;
* xsane&lt;br /&gt;
* libsane-dev&lt;br /&gt;
* python-imaging&lt;br /&gt;
* python3-pyqt4&lt;br /&gt;
* python-notify&lt;br /&gt;
&lt;br /&gt;
Set up the printer in CUPS. The web GUI listens on port 631, so enter http://localhost:631 in your browser.&lt;br /&gt;
Under &amp;quot;Administration&amp;quot; select Add Printer.&lt;br /&gt;
&lt;br /&gt;
==  Configuring Cups ==&lt;br /&gt;
 Name:&lt;br /&gt;
 Location:&lt;br /&gt;
 Description:&lt;br /&gt;
It does not matter much what you enter here, but I would not&lt;br /&gt;
choose a name that is too long.&lt;br /&gt;
 Continue -&amp;gt; Device:&lt;br /&gt;
The drop-down menu should already show the correct name, in this case&lt;br /&gt;
&amp;quot;Canon iP4300 USB #1 (Canon iP4300)&amp;quot;. If it is not, check whether you&lt;br /&gt;
picked the right driver!&lt;br /&gt;
 Continue -&amp;gt; Model (oder Make):&lt;br /&gt;
&#039;Canon&#039;, &#039;HP&#039; etc. should appear there&amp;lt;br&amp;gt;&lt;br /&gt;
Alternatively, choose &#039;Provide a PPD File&#039;:&amp;lt;br&amp;gt;&lt;br /&gt;
For Canon these files are located under /usr/share/cups/model&lt;br /&gt;
&lt;br /&gt;
If you want to specify the exact name of your printer, connect it via USB and determine it with&lt;br /&gt;
 lsusb&lt;br /&gt;
&lt;br /&gt;
Done. Under &amp;quot;Printers&amp;quot; you can inspect the printer and, if necessary,&lt;br /&gt;
make changes via &amp;quot;modify&amp;quot;. &amp;quot;Set default printer&amp;quot; lets you&lt;br /&gt;
print with &#039;lpr&#039; without setting a -P option.&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; If, for example, a test print produces the following message:&lt;br /&gt;
 &amp;quot;Unable to start filter &amp;quot;pstocanonij&amp;quot; - No such file or directory.&amp;quot;&lt;br /&gt;
the filter has to be placed in the directory&lt;br /&gt;
/usr/lib/cups/filter. Then set the printer up again and verify with&lt;br /&gt;
a test page that everything worked.&lt;br /&gt;
&lt;br /&gt;
You can also edit the PPD file manually, e.g. to change the default size from letter to A4.&lt;br /&gt;
The file generated earlier by CUPS is located under&lt;br /&gt;
 /etc/cups/ppd/&lt;br /&gt;
&lt;br /&gt;
==Debugging==&lt;br /&gt;
If things do not run as they should, you can enable debugging via the web GUI (Administration -&amp;gt; server -&amp;gt; Save debugging information for troubleshooting).&lt;br /&gt;
Then print again and look at the log files. Don&#039;t forget to disable this option again later, since it produces many times the normal output.&lt;br /&gt;
&lt;br /&gt;
==Known Bugs==&lt;br /&gt;
&lt;br /&gt;
=== Printer stops by itself ===&lt;br /&gt;
Sometimes a printer simply &amp;quot;hangs&amp;quot; or produces no output. The overview in the web frontend then shows&lt;br /&gt;
 &#039;&#039;Paused - &amp;quot;File &amp;quot;/usr/lib/cups/filter/hpcups&amp;quot; not available: No such file or directory&amp;quot;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In that case go to the printer overview and, under Maintenance, click &amp;quot;Resume Printer&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Missing library ===&lt;br /&gt;
&lt;br /&gt;
If the debug output (see above) complains about a missing file libpng.so.3, or you see output like&lt;br /&gt;
 &#039;&#039;/usr/local/bin/cifip4300: error while loading shared libraries: /usr/lib/libpng.so.3: invalid ELF header&#039;&#039;&lt;br /&gt;
the file has to be linked correctly -&amp;gt; use dpkg to find out which libpng is currently installed and set the corresponding link in the /lib directory:&lt;br /&gt;
 # ln -s /lib/libpng12.so.0 libpng.so.3&lt;br /&gt;
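The fix follows a general pattern: point the soname the binary asks for at the library that is actually installed. A minimal sketch of that pattern in a scratch directory (all paths and file names below are stand-ins, not the real /lib):&lt;br /&gt;

```shell
# Recreate the "missing soname -> installed library" link pattern in a
# temporary directory. File names are stand-ins for the real libraries.
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for the library that is actually installed (cf. libpng12.so.0)
touch libpng12.so.0

# Create the name the binary is looking for as a symlink to it
ln -s libpng12.so.0 libpng.so.3

# Show where the link points
readlink libpng.so.3
```

On a real system the link would of course be created in /lib (as root), and ldconfig may need to be re-run afterwards.&lt;br /&gt;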
&lt;br /&gt;
== Alternative Gutenprint ==&lt;br /&gt;
If the vendor&#039;s drivers cause problems, you can also use the Gutenprint driver (included in cups). To do so, select the printer in the web frontend (localhost:631) and under &amp;quot;Administration-&amp;gt;modify&amp;quot; choose the vendor&#039;s model + gutenprint, e.g. &amp;quot;Canon Pixma iP4300 - CUPS+Gutenprint v5.2.9&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Alternative Turboprint ==&lt;br /&gt;
This is a commercial program providing Linux printer drivers. Price: approx. 30&lt;br /&gt;
€ (as of March 2011): (http://www.turboprint.de/)&lt;br /&gt;
&lt;br /&gt;
Only the demo version is described here. Its drawback is that every&lt;br /&gt;
printout carries an ugly Turboprint label.&amp;lt;br&amp;gt;&lt;br /&gt;
Download turboprint, unpack and install it (run the ./setup command);&lt;br /&gt;
afterwards you get the option to print a test page. &amp;lt;br&amp;gt;&lt;br /&gt;
Change into the folder /usr/share/turboprint/ppd, pick the file matching&lt;br /&gt;
your printer there and copy it into the Cups folder, in my case:&lt;br /&gt;
 # cd /usr/share/turboprint/ppd&lt;br /&gt;
 # cp Canon_PIXMA_iP4300.ppd /usr/share/cups/model/&lt;br /&gt;
Then select this very file in Cups&lt;br /&gt;
 -&amp;gt; Add printer&lt;br /&gt;
This is of course a very specific example, but the principle is&lt;br /&gt;
similar for almost all printers.&lt;br /&gt;
&lt;br /&gt;
Finally, a few tips for fine-tuning the printer:&amp;lt;br&amp;gt;&lt;br /&gt;
http://wiki.ubuntuusers.de/Canon-Drucker&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=639</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=639"/>
		<updated>2025-07-31T14:07:11Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* pvc */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Here are some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 # oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with the help of podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.com/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; you&#039;d better use secrets and configmaps, see below)&lt;br /&gt;
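One thing to keep in mind when moving such values into secrets: secret data is only base64-encoded, not encrypted. A small local sketch of that encoding (no cluster needed; the value below is an illustrative password):&lt;br /&gt;

```shell
# Secrets store values base64-encoded, NOT encrypted. This sketches what
# e.g. "oc create secret generic db-creds --from-literal=MARIADB_PASSWORD=..."
# would put into the secret's data field. No cluster is needed here.
value='wikitopsecret'
encoded=$(printf '%s' "$value" | base64)
echo "$encoded"

# Decoding it again shows it is an encoding, not encryption:
printf '%s' "$encoded" | base64 -d
```

Anyone who can read the secret object can therefore read the password, so RBAC on secrets still matters.&lt;br /&gt;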
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift since security context constraints do not allow apps to run on privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image where port &amp;gt;= 1024 is used&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &amp;quot;if [ $(date +%H) -gt 15 ]; then echo &#039;Hands up, weekend!&#039;; fi&amp;quot;&lt;br /&gt;
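The inline bash in that cron job can be tried out locally before scheduling it. A sketch with the hour passed as a parameter so the check can be exercised at any time of day (the function name is made up for illustration):&lt;br /&gt;

```shell
# Sketch of the hour check embedded in the cron job above. The hour is
# passed as an argument so the logic can be tested independently of the
# actual time of day. The function name is illustrative.
weekend_greeting() {
  hour=$1
  if [ "$hour" -gt 15 ]; then
    echo 'Hands up, weekend!'
  fi
}

weekend_greeting 16                # after 15:00 -> prints the greeting
weekend_greeting "$(date +%H)"     # with the real current hour
```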
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;//var/lib/mysql&amp;gt; (--claim-class ... --claim-mode RWX|RWO --claim-size 1G)&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not included and have to be listed separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (and a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs limits:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic view, run:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show Uptime:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten !&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
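The command substitution inside that patch can be verified locally before touching the cluster. A sketch that builds the same JSON payload from a stand-in file (file path and contents are illustrative):&lt;br /&gt;

```shell
# Build the JSON patch payload for the secret locally from a stand-in
# config file. The file path and its contents are illustrative only.
printf 'global:\n  resolve_timeout: 5m\n' > /tmp/alertmanager-demo.yaml

encoded=$(base64 -w0 /tmp/alertmanager-demo.yaml)
payload='{"data": {"config.yaml": "'$encoded'"}}'
echo "$payload"

# Round trip: decoding the value yields the original file content again
printf '%s' "$encoded" | base64 -d
```

The echoed payload is the JSON that the patch call above passes via -p.&lt;br /&gt;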
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;in older versions the deployment resource had no rollout option -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before a reboot):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - the node will be cordoned and emptied anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of systemunit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/Readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(delete secrets if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
The namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for secrets that are left over and not deletable.&lt;br /&gt;
Clear the finalizers:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the host.&lt;br /&gt;
&lt;br /&gt;
In the deployment add the following lines under the &amp;quot;spec&amp;quot; section:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;mycontainer&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must grant the anyuid SCC to the service account under which the deployment runs. If nothing is configured, this is normally the default service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the installer; the user only provides basic information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already exist&lt;br /&gt;
* Manual installation; download the OVA file (in case of vSphere)&lt;br /&gt;
* Masters are created manually&lt;br /&gt;
* Worker nodes are recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, using preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no loadbalancer outage during update&lt;br /&gt;
&lt;br /&gt;
Changing from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can list more resource short names by typing:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip || installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=638</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=638"/>
		<updated>2025-07-31T14:06:20Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Add volume to deployment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Here are some helpful OpenShift commands which work (at least) since version 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (no login required)&lt;br /&gt;
* registry.redhat.io (login required)&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below.)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift because the default security context constraints do not allow apps to bind to privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image where port &amp;gt;= 1024 is used&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;mysql -e &#039;create database events;&#039;; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &amp;quot;if [ \$(date +%H) -gt 15 ]; then echo &#039;Hands up, weekend!&#039;; fi&amp;quot;&lt;br /&gt;
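Note that inside double quotes the local shell expands $(date +%H) already at creation time, baking a fixed hour into the stored command; escaping it as \$(date +%H) defers evaluation to the pod. A quick local demonstration (plain shell, no cluster needed):

```shell
# Unescaped: the substitution runs NOW, in your local shell.
unescaped="if [ $(date +%H) -gt 15 ]; then echo 'Hands up, weekend!'; fi"
# Escaped: the substitution survives as literal text and runs later, in the pod.
escaped="if [ \$(date +%H) -gt 15 ]; then echo 'Hands up, weekend!'; fi"
echo "$unescaped"   # contains a fixed hour, e.g. "if [ 14 -gt 15 ] ..."
echo "$escaped"     # still contains the literal $(date +%H)
```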
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt; (--claim-class ... --claim-mode RWX|RWO --claim-size 1G)&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not included in the output; list them separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
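The xargs -n 1 in the pipeline above runs one &amp;quot;oc get&amp;quot; per resource type. A local stand-in (echo replaces oc get; the resource names are just illustrative):

```shell
# xargs -n 1 invokes the command once per input token, so every resource
# type from the list gets its own "oc get" call; echo stands in for oc here.
printf 'pods\nsecrets\nroutes\n' | xargs -n 1 echo oc get -n mynamespace
```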
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (and a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs limits:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic picture do:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show Uptime:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers:&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
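The $(base64 -w0 file) substitution inlines the file because Secret data values must be base64-encoded; -w0 (GNU coreutils) disables line wrapping so the value stays on one line inside the JSON patch. A quick local round-trip check:

```shell
# Secret .data values must be base64-encoded; -w0 keeps the output on one
# line so it can be embedded in the JSON patch. Round-trip sanity check:
encoded=$(printf 'smtp_hello: localhost' | base64 -w0)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"    # prints: smtp_hello: localhost
```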
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;in older versions the deployment resource had no rollout restart option -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - drain cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same for all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
List audit log files:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch the audit log of a certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Filter entries:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates the directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(delete secrets from the output if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
The namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for secrets that are left over and not deletable.&lt;br /&gt;
Clear the finalizers:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the host.&lt;br /&gt;
&lt;br /&gt;
In the deployment add the following lines under the &amp;quot;spec&amp;quot; section:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;mycontainer&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must grant the anyuid SCC to the service account under which the deployment runs. If nothing is configured, this is normally the default service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the installer; the user only provides basic information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already exist&lt;br /&gt;
* Manual installation; download the OVA file (in case of vSphere)&lt;br /&gt;
* Masters are created manually&lt;br /&gt;
* Worker nodes are recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, using preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no loadbalancer outage during update&lt;br /&gt;
&lt;br /&gt;
Changing from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can list more resource short names by typing:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip || installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=637</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=637"/>
		<updated>2025-07-31T06:51:26Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Create ingress for service */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Here are some helpful OpenShift commands which work (at least) since version 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (no login required)&lt;br /&gt;
* registry.redhat.io (login required)&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below.)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift because the default security context constraints do not allow apps to bind to privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image where port &amp;gt;= 1024 is used&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;mysql -e &#039;create database events;&#039;; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &amp;quot;if [ \$(date +%H) -gt 15 ]; then echo &#039;Hands up, weekend!&#039;; fi&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not included in this output)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
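The one-liner above works by feeding each resource type name to a separate oc get call via xargs -n 1. The mechanism can be sketched locally with echo standing in for the real oc invocation:

```shell
# xargs -n 1 runs the command once per input word, just like the
# one-liner above invokes 'oc get' once per resource type
out=$(printf 'pods\nservices\nsecrets\n' | xargs -n 1 echo oc get)
echo "$out"
```

This prints "oc get pods", "oc get services" and "oc get secrets", one per line.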
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (and a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs capacity:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
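The difference between capacity and allocatable is what the node reserves for system components (kubelet, OS daemons). A toy calculation with hypothetical values, not taken from a real cluster:

```shell
# hypothetical node memory values in MiB
capacity_mem=16384
allocatable_mem=15360
echo "reserved for system: $((capacity_mem - allocatable_mem)) MiB"
```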
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic view use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show Uptime:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Return to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
&lt;br /&gt;
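The patch embeds the whole file base64-encoded into the secret's data field. It is worth verifying locally that the encoding round-trips before patching; a sketch with a throwaway demo file (hypothetical path and content, assumes GNU base64 for -w0):

```shell
# build the same patch payload locally and verify the encoding round-trips
printf 'global:\n  resolve_timeout: 5m\n' > /tmp/alertmanager-demo.yaml
encoded=$(base64 -w0 /tmp/alertmanager-demo.yaml)
patch='{"data": {"config.yaml": "'$encoded'"}}'
echo "$encoded" | base64 -d   # should print the original file content
```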
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;older versions had no rollout option for deployments -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
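The old workaround worked by writing a fresh timestamp into a pod-template annotation; any change to the pod template forces a new rollout. Building that patch string locally:

```shell
# an ever-changing annotation value changes the pod template,
# which forces the deployment to roll out new pods
ts=$(date +%s)
patch="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"last-restart\":\"$ts\"}}}}}"
echo "$patch"
```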
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty node and put it into maintenance mode (e.g. before booting)&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - will be emptied anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
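When jq is not at hand, a plain grep over the JSON lines gives a rough first cut of the same verb filter. The sample records below are made up for illustration:

```shell
# two fake audit records; only the delete event should match
cat > /tmp/audit-sample.log <<'EOF'
{"verb":"get","objectRef":{"resource":"pods"}}
{"verb":"delete","objectRef":{"resource":"secrets"}}
EOF
grep '"verb":"delete"' /tmp/audit-sample.log
```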
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(delete secrets first if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
A namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for secrets that are left over and not deletable.&lt;br /&gt;
Set the finalizers to an empty list:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the node.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines under the container entry in the &amp;quot;spec&amp;quot; section:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;mycontainer&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must grant the &#039;&#039;anyuid&#039;&#039; SCC to the serviceaccount under which the deployment runs. If nothing else is configured, this is the &#039;&#039;default&#039;&#039; serviceaccount:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the install command; the user only has to provide some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User-provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already exist&lt;br /&gt;
* manual installation; download the ova file (in case of vSphere)&lt;br /&gt;
* masters created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* &#039;&#039;&#039;no&#039;&#039;&#039; keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, using preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
A change from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can get more shortcuts by typing:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip ||installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=636</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=636"/>
		<updated>2025-07-31T06:41:15Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Creating */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Here are some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 $ oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift, as security context constraints do not allow apps to bind privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image where port &amp;gt;= 1024 is used&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(if you want to revert this setting, remove the SCC again with &#039;&#039;oc adm policy remove-scc-from-user anyuid -z default&#039;&#039; and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &#039;if [ $(date +%H) -gt 15 ]; then echo &amp;quot;Hands up, weekend!&amp;quot;; fi&#039;&lt;br /&gt;
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
== Create configmap from file ==&lt;br /&gt;
 $ oc create configmap &amp;lt;mymap&amp;gt; --from-file=/tmp/dump.sql&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
=== Create ingress for service ===&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not included in this output)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (and a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs capacity:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic view use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show Uptime:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Return to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;older versions had no rollout option for deployments -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - drain cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same for all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from a certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Filter entries with jq:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
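The jq filters can be tried offline on a sample line; the record below is a hypothetical stand-in with the same shape as an API server audit event:&lt;br /&gt;

```shell
# hypothetical audit record (same fields as real audit.log entries)
rec='{"verb":"delete","objectRef":{"resource":"secrets","name":"mysecret"}}'
# keep delete events, exclude the routes resource, print the object name
echo "$rec" | jq -r 'select(.verb == "delete" and .objectRef.resource != "routes") | .objectRef.name'
```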
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(check the output and delete secrets if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Gather information per resource and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
The namespace gets stuck in the Terminating state.&lt;br /&gt;
&lt;br /&gt;
Watch out for leftover secrets that cannot be deleted.&lt;br /&gt;
Clear the finalizers list:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
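What the merge patch does to the metadata can be previewed offline with jq on a hypothetical secret manifest:&lt;br /&gt;

```shell
# hypothetical stuck secret: a leftover finalizer blocks deletion
secret='{"metadata":{"name":"mysecret","finalizers":["example.com/protect"]}}'
# the merge patch above replaces the finalizers array with an empty list
echo "$secret" | jq -c '.metadata.finalizers = []'
```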
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as an attacker who breaks out of the container would be root on the host.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines to the container entry under the &amp;quot;spec&amp;quot; statement (note that &amp;quot;containers&amp;quot; is a list):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;mycontainer&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must also grant the anyuid SCC to the service account the deployment runs under. If nothing is configured, this is normally the &amp;quot;default&amp;quot; service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the installer command; the user only has to provide some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User-provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already be in place&lt;br /&gt;
* Manual installation; download the OVA file (in case of vSphere)&lt;br /&gt;
* Masters are created manually&lt;br /&gt;
* Workers are recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, uses preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
A change from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can list more shortcuts with:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip || installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=635</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=635"/>
		<updated>2025-07-25T13:08:51Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Add volume to deployment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Here are some helpful OpenShift commands that work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 $ oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers listen on port 80, which leads to permission problems in OpenShift because the security context constraints do not allow apps to bind to privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(to revert this setting, edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;mysql -e &#039;create database events;&#039; ; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &#039;if [ $(date +%H) -gt 15 ]; then echo &amp;quot;Hands up, weekend!&amp;quot;; fi&#039;&lt;br /&gt;
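The schedule &#039;* * * * 5&#039; fires every minute on Fridays (the fifth field is the day of week). The hour guard inside the job can be tried locally, using a hypothetical fixed value in place of date +%H:&lt;br /&gt;

```shell
# H stands in for $(date +%H); 16 is later than 15:00
H=16
if [ "$H" -gt 15 ]; then echo 'Hands up, weekend!'; fi
```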
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== pvc ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type pvc --name &amp;lt;mypvc-vol&amp;gt; --claim-name &amp;lt;mypvc&amp;gt; --mount-path &amp;lt;/var/lib/mysql&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
== Create ingress for service ==&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
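The literal ends up base64-encoded in the Secret&#039;s data field; the encoding can be reproduced locally:&lt;br /&gt;

```shell
# data.foo in the resulting Secret holds base64("bar")
printf %s bar | base64
```

Decode with base64 -d when reading values back out of a Secret.&lt;br /&gt;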
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and PVCs are not included; query them separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
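The xargs -n 1 part runs one oc get per resource name. The fan-out can be seen locally with echo standing in for oc get:&lt;br /&gt;

```shell
# each input line becomes one separate invocation (echo stands in for "oc get")
printf 'pods\nsecrets\n' | xargs -n 1 echo listing
```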
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (and a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs capacity:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic picture use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show machines (including age):&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers:&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (use &#039;&#039;oc edit&#039;&#039; to modify)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the previous content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create a YAML file with this content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;in older versions the deployment resource had no rollout option -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - drain cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same for all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from a certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Filter entries with jq:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(check the output and delete secrets if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Gather information per resource and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
The namespace gets stuck in the Terminating state.&lt;br /&gt;
&lt;br /&gt;
Watch out for leftover secrets that cannot be deleted.&lt;br /&gt;
Clear the finalizers list:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as an attacker who breaks out of the container would be root on the host.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines to the container entry under the &amp;quot;spec&amp;quot; statement (note that &amp;quot;containers&amp;quot; is a list):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;mycontainer&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must also grant the anyuid SCC to the service account the deployment runs under. If nothing is configured, this is normally the &amp;quot;default&amp;quot; service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the installer command; the user only has to provide some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User-provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already be in place&lt;br /&gt;
* Manual installation; download the OVA file (in case of vSphere)&lt;br /&gt;
* Masters are created manually&lt;br /&gt;
* Workers are recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, uses preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
A change from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can list more shortcuts with:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip || installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=634</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=634"/>
		<updated>2025-07-25T11:54:47Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* configmap */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Here are some helpful OpenShift commands that work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 $ oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
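&lt;br /&gt;
A minimal sketch of moving the credentials into a secret instead (names are only examples):&lt;br /&gt;
 $ oc create secret generic mariadb-creds --from-literal MARIADB_PASSWORD=wikitopsecret --from-literal MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
 $ oc set env deployment/mariadb --from=secret/mariadb-creds&lt;br /&gt;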
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift because the security context constraints do not allow apps to bind to privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or grant the required permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &#039;if [ $(date +%H) -gt 15 ]; then echo &amp;quot;Hands up, weekend!&amp;quot;; fi&#039;&lt;br /&gt;
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
== Create ingress for service ==&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not included and must be listed separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (optionally of a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs. capacity:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic picture run:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show machine age and status:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers:&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
  $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
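&lt;br /&gt;
Copying works in the other direction too (paths are only examples):&lt;br /&gt;
 $ oc cp mysql-server:/tmp/dump.sql ./dump.sql&lt;br /&gt;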
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
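&lt;br /&gt;
To check the result, the data can be decoded again (a sketch; key name as above):&lt;br /&gt;
 $ oc get secret alertmanager-main -o jsonpath=&#039;{.data.config\.yaml}&#039; | base64 -d | head&lt;br /&gt;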
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;older versions had no rollout restart option for deployments -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - drain cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(delete secrets from the output if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
The namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for secrets that are left over and cannot be deleted.&lt;br /&gt;
Set their finalizers to an empty list:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the system.&lt;br /&gt;
&lt;br /&gt;
In the deployment add the following lines under the container entry in the &amp;quot;spec&amp;quot; section:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;mycontainer&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must grant the &#039;&#039;anyuid&#039;&#039; SCC to the serviceaccount under which the deployment runs. If nothing else is configured this is normally the &#039;&#039;default&#039;&#039; serviceaccount:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
The cluster is installed by the install command; the user only has to provide some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User provisioned infrastructure cluster&lt;br /&gt;
* DNS and Loadbalancing must already be there&lt;br /&gt;
* Manual installation; download the OVA file (in case of vSphere)&lt;br /&gt;
* masters are created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, uses preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no loadbalancer outage during update&lt;br /&gt;
&lt;br /&gt;
A change from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can get more shortcuts by typing:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip ||installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=633</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=633"/>
		<updated>2025-07-25T11:54:33Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Add volume to deployment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Here are some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automatization.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 $ oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift because the security context constraints do not allow apps to bind to privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or grant the required permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &#039;if [ $(date +%H) -gt 15 ]; then echo &amp;quot;Hands up, weekend!&amp;quot;; fi&#039;&lt;br /&gt;
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path &amp;lt;/var/www/html&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
== Create ingress for service ==&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not included and must be listed separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (optionally of a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs. capacity:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic picture run:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show machine age and status:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers:&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
  $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
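To be safe, back up the existing MachineConfig before patching (path is just an example):&lt;br /&gt;
 $ oc get mc 99-worker-ssh -o yaml &amp;gt; /tmp/mc-worker-ssh.backup.yaml&lt;br /&gt;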
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
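Verify the patched secret afterwards (assuming it lives in openshift-monitoring; --to=- prints the decoded data to stdout):&lt;br /&gt;
 $ oc -n openshift-monitoring extract secret/alertmanager-main --to=-&lt;br /&gt;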
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;the deployment resource had no rollout option -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - it will be cordoned anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(check the result for secrets and delete them before sharing!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
A namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for secrets that are left over and not deletable.&lt;br /&gt;
Clear their finalizers (set the list to empty):&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
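To list all leftover namespaced resources that may block the deletion:&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n &amp;lt;namespace&amp;gt;&lt;br /&gt;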
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the host.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines under the &amp;quot;spec&amp;quot; section:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;containername&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must grant the anyuid SCC to the service account under which the deployment runs. If nothing is configured, this is normally the default service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
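To verify which SCC the pod actually got (pod name is a placeholder):&lt;br /&gt;
 $ oc get pod &amp;lt;podname&amp;gt; -o jsonpath=&#039;{.metadata.annotations.openshift\.io/scc}&#039;&lt;br /&gt;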
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the install command; the user only has to provide some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User-provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already exist&lt;br /&gt;
* Manual installation; download the OVA file (in case of vSphere)&lt;br /&gt;
* masters are created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, uses preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
Changing from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can list more shortcuts with:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip ||installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=632</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=632"/>
		<updated>2025-07-25T11:54:19Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Create Job from image */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Here are some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 # oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (no login required)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.com/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
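A secret-based variant (names and values are just examples):&lt;br /&gt;
 $ oc create secret generic mariadb-creds --from-literal MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 $ oc set env deployment/mariadb --from secret/mariadb-creds&lt;br /&gt;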
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem: web server ports ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift because security context constraints do not allow apps to bind to privileged ports (&amp;lt; 1024).&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again, you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &amp;quot;if [ \$(date +%H) -gt 15 ]; then echo &#039;Hands up, weekend!&#039;; fi&amp;quot;&lt;br /&gt;
(note: the \$ is escaped so that date is evaluated inside the container, not by your local shell)&lt;br /&gt;
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
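Optionally wait for the job to finish before reading the logs (timeout is an example value):&lt;br /&gt;
 $ oc wait --for=condition=complete job/&amp;lt;name&amp;gt; --timeout=120s&lt;br /&gt;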
&lt;br /&gt;
== Add volume to deployment ==&lt;br /&gt;
&lt;br /&gt;
=== configmap ===&lt;br /&gt;
 $ oc set volume deployment/&amp;lt;mydeployment&amp;gt; --add --type configmap --name &amp;lt;myvol&amp;gt; --configmap-name &amp;lt;mymap&amp;gt; --mount-path /var/www/html&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
== Create ingress for service ==&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
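A quick test from outside (hostname taken from the ingress rule above):&lt;br /&gt;
 $ curl http://mynewapp.ocp4.example.de/&lt;br /&gt;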
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and PVCs are not included in &amp;quot;all&amp;quot;; list them separately:)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
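Related: check a single permission in the same context:&lt;br /&gt;
 $ oc auth can-i list persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;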
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get Logs of a node (and special unit)&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs limits:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic view:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show machines and their age:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
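Copying works in both directions; for whole directories &#039;&#039;oc rsync&#039;&#039; is an alternative:&lt;br /&gt;
 $ oc cp mysql-server:/tmp/mysqldump.sql ./mysqldump.sql&lt;br /&gt;
 $ oc rsync mysql-server:/tmp/ ./backup/&lt;br /&gt;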
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
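(&#039;&#039;&#039;Note:&#039;&#039;&#039; hostsubnets only exist with the legacy OpenShift SDN; on clusters using OVN-Kubernetes, check the EgressIP resources instead:)&lt;br /&gt;
 $ oc get egressip&lt;br /&gt;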
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;the deployment resource had no rollout option -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - it will be cordoned anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates a directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(check the result for secrets and delete them before sharing!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
A namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for secrets that are left over and not deletable.&lt;br /&gt;
Clear their finalizers (set the list to empty):&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the host.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines under the &amp;quot;spec&amp;quot; section:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;containername&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must grant the anyuid SCC to the service account under which the deployment runs. If nothing is configured, this is normally the default service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the install command; the user only has to provide some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User-provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already exist&lt;br /&gt;
* Manual installation; download the OVA file (in case of vSphere)&lt;br /&gt;
* masters are created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, uses preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
Changing from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can list more shortcuts with:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip ||installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=631</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=631"/>
		<updated>2025-07-25T11:13:23Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Create service from deployment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Here are some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 $ oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with the help of podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
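Credentials are better kept in a secret; a minimal sketch as a manifest (the secret name mariadb-creds is made up for illustration):

```yaml
# Secret holding the MariaDB credentials; stringData avoids manual base64 encoding
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-creds
stringData:
  MARIADB_USER: mediawiki
  MARIADB_PASSWORD: wikitopsecret
```

The keys can then be injected as environment variables with `oc set env deployment/mariadb --from=secret/mariadb-creds`.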
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images the web server runs on port 80, which leads to permission problems in OpenShift because the security context constraints do not allow apps to run on privileged ports (&amp;lt; 1024).&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or grant the needed permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(to revert this setting later, edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;mysql -e &#039;create database events;&#039;; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &#039;if [ $(date +%H) -gt 15 ]; then echo &amp;quot;Hands up, weekend!&amp;quot;; fi&#039;&lt;br /&gt;
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
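Mind the quoting in the cronjob command: inside double quotes the local shell expands $(date +%H) once at creation time, while single quotes pass it through so it is evaluated in the pod. A quick local check of the difference:

```shell
# Double quotes: the command substitution is expanded immediately by the local shell.
echo "hour=$(date +%H)"
# Single quotes: the text is passed through literally, unexpanded.
echo 'hour=$(date +%H)'
```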
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; (--selector app=&amp;lt;myapp&amp;gt;) --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
== Create ingress for service ==&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
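For reference, the rule from the command above corresponds to roughly this manifest (a sketch; host and service values are taken from the command, pathType is assumed to be Prefix):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-mynewapp
spec:
  rules:
  - host: mynewapp.ocp4.example.de
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-mynewapp
            port:
              number: 8080
```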
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not included in &amp;quot;oc get all&amp;quot; and must be queried separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (optionally of a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs. capacity:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic picture use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show all machines (incl. their age):&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only the pods of the deploymentconfig mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten !&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
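A broken command substitution silently produces an invalid patch, so it can help to verify the base64 round trip locally first (the file below is a stand-in for the real alertmanager.yaml):

```shell
# Create a small stand-in config file:
printf 'global:\n  resolve_timeout: 5m\n' > /tmp/am-demo.yaml
# Encode it the same way the patch command does:
ENC=$(base64 -w0 /tmp/am-demo.yaml)
# Decode again and compare with the source; prints "round trip ok" on success:
echo "$ENC" | base64 -d | diff -u /tmp/am-demo.yaml - && echo "round trip ok"
```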
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;the deployment resource had no rollout option -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not strictly necessary - drain cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of systemunit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/Readiness Probes of all pods in certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | egrep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
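The jq filter can be tried offline on sample records before pointing it at real audit logs:

```shell
# Two fake audit records; only the "delete" record passes the filter.
printf '%s\n' \
  '{"verb":"get","objectRef":{"resource":"pods"}}' \
  '{"verb":"delete","objectRef":{"resource":"pods"}}' \
  | jq -c 'select(.verb == "delete")'
```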
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(delete secrets from the output first if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
A namespace gets stuck in status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for secrets that are left over and not deletable.&lt;br /&gt;
Set their finalizers to an empty list:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
Should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the host.&lt;br /&gt;
&lt;br /&gt;
In the deployment add the following lines under the &amp;quot;spec&amp;quot; statement:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;mycontainer&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must also grant the needed SCC (e.g. anyuid) to the service account under which the deployment runs. If nothing is configured this is normally the default service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the installer; the user only provides some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User provisioned infrastructure cluster&lt;br /&gt;
* DNS and Loadbalancing must already be there&lt;br /&gt;
* manual installation, download of an ova file (for vSphere)&lt;br /&gt;
* masters created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* &#039;&#039;&#039;no&#039;&#039;&#039; keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, using preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no loadbalancer outage during update&lt;br /&gt;
&lt;br /&gt;
Change from IPI -&amp;gt; UPI not possible&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can list more resource short names with:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip || installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=630</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=630"/>
		<updated>2025-07-23T13:26:37Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Creating */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Here are some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 $ oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with the help of podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images the web server runs on port 80, which leads to permission problems in OpenShift because the security context constraints do not allow apps to run on privileged ports (&amp;lt; 1024).&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or grant the needed permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(to revert this setting later, edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;mysql -e &#039;create database events;&#039;; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &#039;if [ $(date +%H) -gt 15 ]; then echo &amp;quot;Hands up, weekend!&amp;quot;; fi&#039;&lt;br /&gt;
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
&lt;br /&gt;
=== Create service from deployment  ===&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; --port 8080 --target-port 8080&lt;br /&gt;
&lt;br /&gt;
=== Create route from service ===&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternative ingress:&lt;br /&gt;
&lt;br /&gt;
== Create ingress for service ==&lt;br /&gt;
&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not included in &amp;quot;oc get all&amp;quot; and must be queried separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (optionally of a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs. capacity:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic picture use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show all machines (incl. their age):&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only the pods of the deploymentconfig mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the previous content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create a YAML file with this content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
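The quoting in this one-liner is easy to get wrong, because the single quotes are deliberately closed and reopened around the command substitution. A minimal local sketch of the same encode/decode round trip (throwaway file and made-up content; nothing is sent to a cluster):

```shell
# Round-trip check of the base64 payload used in the patch above.
# File name and content are made up for the demonstration.
printf 'global:\n  resolve_timeout: 5m\n' > /tmp/alertmanager-demo.yaml
ENCODED=$(base64 -w0 /tmp/alertmanager-demo.yaml)
# This is the JSON document that oc patch would send:
printf '{"data": {"config.yaml": "%s"}}\n' "$ENCODED"
# Decoding must reproduce the original file byte for byte:
printf '%s' "$ENCODED" | base64 -d | cmp - /tmp/alertmanager-demo.yaml && echo "round trip OK"
```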
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;older releases had no rollout option for the deployment resource -&amp;gt; you had to patch something to trigger a restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
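The heavy backslash escaping in that obsolete one-liner becomes easier to read when the patch document is built in a variable first; a local sketch (no cluster interaction, the deployment name is just the example used above):

```shell
# Build the annotation-bump patch; the changing timestamp alters the
# pod template, which is what forces a new rollout.
TS=$(date +%s)
PATCH="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"last-restart\":\"$TS\"}}}}}"
echo "$PATCH"
# It would then be applied with:
#   oc patch deployment testdeploy --patch "$PATCH"
```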
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before a reboot):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - draining cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch the logs of a certain pod (or container):&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same for all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
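The jq filters above can be tried out locally on synthetic records; the field names mirror the audit log schema used in the commands, the values are invented:

```shell
# Local dry run of the jq audit-log filter on made-up records.
cat > /tmp/audit-demo.jsonl <<'EOF'
{"verb":"delete","objectRef":{"resource":"secrets"}}
{"verb":"get","objectRef":{"resource":"pods"}}
{"verb":"delete","objectRef":{"resource":"routes"}}
EOF
# Keep only delete events that are not route deletions:
jq -c 'select(.verb == "delete" and .objectRef.resource != "routes")' /tmp/audit-demo.jsonl
```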
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates the directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(delete contained secrets before passing it on, if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Gather information for a certain resource and time period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
A namespace gets stuck in the status Terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for secrets that are left over and not deletable.&lt;br /&gt;
Clear their finalizers:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the node.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines under the &amp;quot;spec&amp;quot; statement:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must also grant the anyuid SCC to the service account under which the deployment runs. If nothing is configured, this is normally the default service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the install command; the user only has to provide some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User-provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already exist&lt;br /&gt;
* Manual installation; download the OVA file (in the case of vSphere)&lt;br /&gt;
* Masters are created manually&lt;br /&gt;
* Manual creation is recommended for the workers as well&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, uses preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
Changing from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can get more shortcuts by typing:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip ||installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=WLAN_einrichten&amp;diff=629</id>
		<title>WLAN einrichten</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=WLAN_einrichten&amp;diff=629"/>
		<updated>2025-07-15T19:09:04Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This wiki article came out of weeks of annoyance with WLAN, Linux (Kanotix), WPA and a&lt;br /&gt;
Sitecom USB stick (it&#039;s the combination that does it!).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;blue&amp;quot;&amp;gt;* Nowadays there are convenient tools such as the [https://wiki.debian.org/NetworkManager Network-Manager] * &amp;lt;/font&amp;gt;, but if you like to do things by hand, you can do so here.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
* https://www.makeuseof.com/connect-to-wifi-with-nmcli/&lt;br /&gt;
&lt;br /&gt;
== Step 1: The right driver ==&lt;br /&gt;
&lt;br /&gt;
Google the chipset of your WLAN stick - in my case it is&lt;br /&gt;
&amp;quot;zydas&amp;quot;. Based on that, search for a matching driver.&lt;br /&gt;
In my specific case you will find it at:&lt;br /&gt;
&lt;br /&gt;
http://sourceforge.net/projects/zd1211/&lt;br /&gt;
&lt;br /&gt;
== Step 2: Installing the driver ==&lt;br /&gt;
&lt;br /&gt;
=== Kernel &amp;gt;= 3.0 ===&lt;br /&gt;
In general the required drivers are already included. With&lt;br /&gt;
 # ip a&lt;br /&gt;
you can easily check whether a new network interface (usually wlan0) has appeared.&lt;br /&gt;
If not, continue with the steps for the older kernels, or first check whether the driver exists as a ready-made package.&lt;br /&gt;
&lt;br /&gt;
=== Kernel &amp;gt;= 2.6.26 ===&lt;br /&gt;
Download zd1211-firmware-1.4.tar.bz2 from the sourceforge page, unpack the&lt;br /&gt;
firmware, read the README and follow the instructions, i.e. copy all zd1211* files to&lt;br /&gt;
/lib/firmware/zd1211. Then reload the corresponding module:&lt;br /&gt;
 # rmmod zd1211rw&lt;br /&gt;
 # modprobe zd1211rw&lt;br /&gt;
With the command &amp;quot;lsmod&amp;quot; you can verify that the driver module&lt;br /&gt;
(zd1211rw) was loaded. If necessary, load it manually (modprobe zd1211rw)&lt;br /&gt;
(for that you have to know what the module is called; in an emergency a&lt;br /&gt;
plain reboot helps :). Sitecom has since switched to Ralink chipsets;&lt;br /&gt;
unfortunately I have no experience with those. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are wondering about a strange interface name &amp;quot;wmaster0&amp;quot;: &amp;lt;br&amp;gt;&lt;br /&gt;
it is part of the mac80211 framework and is needed up to kernel 2.6.32.&lt;br /&gt;
The module is included in the standard kernel and is shown as follows&lt;br /&gt;
(depending, of course, on the chipset of the WLAN card):&lt;br /&gt;
 mac80211 139776 1 zd1211rw&lt;br /&gt;
The wmaster interface is only an auxiliary construct and should NOT be&lt;br /&gt;
touched manually. Only the actual interface (e.g. wlan0) is configured.&lt;br /&gt;
Further information can be found here:&lt;br /&gt;
[http://linuxwireless.org/en/developers/Documentation/mac80211 ]&lt;br /&gt;
&lt;br /&gt;
=== Older kernels (tested with 2.6.19.7) ===&lt;br /&gt;
The driver usually comes as a .tar.gz or .tgz file: download and&lt;br /&gt;
unpack it:&lt;br /&gt;
 $ tar -xzvf zd1211-driver-r69.tar.gz&lt;br /&gt;
&lt;br /&gt;
Change into the unpacked directory:&lt;br /&gt;
 $ cd zd1211-driver-r69&lt;br /&gt;
and read in the README (if present) what needs to be done. It usually boils down to&lt;br /&gt;
compiling and installing:&lt;br /&gt;
 $ make&lt;br /&gt;
 # make install&lt;br /&gt;
&lt;br /&gt;
The remaining steps are then (almost) independent of the driver.&lt;br /&gt;
&lt;br /&gt;
==Step 3: Find the interface name and activate the interface ==&lt;br /&gt;
 # ifconfig -a&lt;br /&gt;
Run the above and see what has been added. A common interface name is&lt;br /&gt;
wlan0 or eth[next free number].&lt;br /&gt;
&lt;br /&gt;
Then the interface is activated. For simplicity I will call it&lt;br /&gt;
&amp;quot;wlan0&amp;quot; from here on.&lt;br /&gt;
 # ifconfig wlan0 up&lt;br /&gt;
&lt;br /&gt;
==Step 4: Configure the SSID and encryption ==&lt;br /&gt;
Next, find out the SSID:&lt;br /&gt;
 # iwlist wlan0 scan&lt;br /&gt;
If you have disabled the SSID broadcast on your WLAN router, you will simply&lt;br /&gt;
have to enter the SSID by hand later.&lt;br /&gt;
&lt;br /&gt;
Note on iwconfig/iwlist: if this command does not exist on your system yet,&lt;br /&gt;
you have to get the wireless tools: &amp;lt;br&amp;gt;&lt;br /&gt;
http://pcmcia-cs.sourceforge.net/ftp/contrib/wireless_tools.28.tar.gz &amp;lt;br&amp;gt;&lt;br /&gt;
(Meanwhile also available for Debian as a package (wireless-tools))&amp;lt;br&amp;gt;&lt;br /&gt;
As far as encryption is concerned, I will first cover WEP, then&lt;br /&gt;
WPA-PSK/TKIP and WPA2.&lt;br /&gt;
&lt;br /&gt;
===WEP encryption===&lt;br /&gt;
If you found several access points, you can select your WLAN network with&lt;br /&gt;
 # iwconfig wlan0 essid &#039;my WLAN network&#039;&lt;br /&gt;
(insert, of course, the string you determined with &amp;quot;iwlist wlan0 scan&amp;quot;).&amp;lt;br&amp;gt;&lt;br /&gt;
Then the WEP key is configured:&lt;br /&gt;
 # iwconfig wlan0 key &amp;lt;key_as_hex&amp;gt; or &amp;lt;s:key_as_ascii&amp;gt;&lt;br /&gt;
e.g. iwconfig wlan0 key 46463153715A394E6741634652&lt;br /&gt;
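Incidentally, the hex string in the example is just a 13-character ASCII key (WEP-104) written out in hex. A quick sketch of the conversion, which reproduces the example string:

```shell
# Convert an ASCII WEP key to the hex form iwconfig accepts.
# 'FF1SqZ9NgAcFR' is the decoded form of the example key above.
KEY='FF1SqZ9NgAcFR'
printf '%s' "$KEY" | od -An -tx1 | tr -d ' \n' | tr 'a-f' 'A-F'
echo
```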
&lt;br /&gt;
===WPA encryption===&lt;br /&gt;
First find out what kind of encryption is being used or, if you have&lt;br /&gt;
no idea at all, determine it experimentally. Possible algorithms&lt;br /&gt;
are TKIP or AES. In this case it is WPA-PSK (pre-shared&lt;br /&gt;
key) with TKIP.&lt;br /&gt;
&lt;br /&gt;
I configure the whole thing with &amp;quot;wpa_supplicant&amp;quot;; I could not get anywhere&lt;br /&gt;
with iwpriv. First install the package &amp;quot;wpasupplicant&amp;quot;, if not&lt;br /&gt;
already present. Then generate a passphrase with:&lt;br /&gt;
 $ wpa_passphrase &amp;lt;ssid&amp;gt; &amp;lt;passphrase&amp;gt;&lt;br /&gt;
where ssid is the SSID determined above and passphrase is the WPA key of the&lt;br /&gt;
access point. A possible invocation could therefore be:&lt;br /&gt;
 $ wpa_passphrase &#039;FRITZ!Box Fon WLAN 7170&#039; 0123456789101112&lt;br /&gt;
&lt;br /&gt;
Then create a file /etc/wpa_supplicant.conf. All the parameters&lt;br /&gt;
determined before are entered there (SSID, encryption,&lt;br /&gt;
passphrase). Here is an example file:&lt;br /&gt;
 ctrl_interface=/var/run/wpa_supplicant&lt;br /&gt;
 eapol_version=1&lt;br /&gt;
 ap_scan=2&lt;br /&gt;
 network={&lt;br /&gt;
 ssid=&amp;quot;FRITZ!Box Fon WLAN 7170&amp;quot;&lt;br /&gt;
 scan_ssid=1&lt;br /&gt;
 proto=WPA&lt;br /&gt;
 key_mgmt=WPA-PSK&lt;br /&gt;
 pairwise=TKIP&lt;br /&gt;
 group=TKIP&lt;br /&gt;
 psk=a97ee0751b63fc7493fc6bfd40c6662910cba5669e9a2ea727eb78d86832d19d&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
===WPA2===&lt;br /&gt;
Since WEP and WPA1 are insecure resp. crackable, it is advisable to&lt;br /&gt;
enable WPA2 on the router. In that case the wpa_supplicant.conf looks&lt;br /&gt;
like this:&lt;br /&gt;
 ctrl_interface=/var/run/wpa_supplicant&lt;br /&gt;
 eapol_version=1&lt;br /&gt;
 ap_scan=1&lt;br /&gt;
 network={&lt;br /&gt;
 ssid=&amp;quot;FRITZ!Box Fon WLAN 7170&amp;quot;&lt;br /&gt;
 scan_ssid=1&lt;br /&gt;
 proto=RSN&lt;br /&gt;
 key_mgmt=WPA-PSK&lt;br /&gt;
 pairwise=CCMP&lt;br /&gt;
 group=TKIP CCMP&lt;br /&gt;
 psk=a97ee0751b63fc7493fc6bfd40c6662910cba5669e9a2ea727eb78d86832d19d&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
The next steps are again the same for &#039;&#039;&#039;WPA&#039;&#039;&#039; and &#039;&#039;&#039;WPA2&#039;&#039;&#039;:&lt;br /&gt;
after adjusting the config file, run the wpa_supplicant command in a shell:&lt;br /&gt;
 # wpa_supplicant -B -i wlan0 -D wext -c /etc/wpa_supplicant.conf -d&lt;br /&gt;
A possible output looks like this:&lt;br /&gt;
 Initializing interface &#039;wlan0&#039; conf &#039;/etc/wpa_supplicant.conf&#039; driver &#039;wext&#039;&lt;br /&gt;
 ctrl_interface &#039;N/A&#039; bridge &#039;N/A&#039;&lt;br /&gt;
 Configuration file &#039;/etc/wpa_supplicant.conf&#039; -&amp;gt; &#039;/etc/wpa_supplicant.conf&#039;&lt;br /&gt;
 Reading configuration file &#039;/etc/wpa_supplicant.conf&#039;&lt;br /&gt;
 ctrl_interface=&#039;/var/run/wpa_supplicant eapol_version=1 ap_scan=2&#039;&lt;br /&gt;
 Priority group 0&lt;br /&gt;
 id=0 ssid=&#039;Fritz!Box xyz123&#039;&lt;br /&gt;
 Initializing interface (2) &#039;wlan0&#039;&lt;br /&gt;
 SIOCGIWRANGE: WE(compiled)=22 WE(source)=21 enc_capa=0xf&lt;br /&gt;
 capabilities: key_mgmt 0xf enc 0xf flags 0x0&lt;br /&gt;
 WEXT: Operstate: linkmode=1, operstate=5&lt;br /&gt;
 Own MAC address: 00:0c:f6:16:7a:bb&lt;br /&gt;
 wpa_driver_wext_set_wpa&lt;br /&gt;
 wpa_driver_wext_set_key: alg=0 key_idx=0 set_tx=0 seq_len=0 key_len=0&lt;br /&gt;
 wpa_driver_wext_set_key: alg=0 key_idx=1 set_tx=0 seq_len=0 key_len=0&lt;br /&gt;
 wpa_driver_wext_set_key: alg=0 key_idx=2 set_tx=0 seq_len=0 key_len=0&lt;br /&gt;
 wpa_driver_wext_set_key: alg=0 key_idx=3 set_tx=0 seq_len=0 key_len=0&lt;br /&gt;
 wpa_driver_wext_set_countermeasures&lt;br /&gt;
 wpa_driver_wext_set_drop_unencrypted&lt;br /&gt;
 RSN: flushing PMKID list in the driver&lt;br /&gt;
 Setting scan request: 0 sec 100000 usec&lt;br /&gt;
 EAPOL: SUPP_PAE entering state DISCONNECTED&lt;br /&gt;
 EAPOL: KEY_RX entering state NO_KEY_RECEIVE&lt;br /&gt;
 EAPOL: SUPP_BE entering state INITIALIZE&lt;br /&gt;
 EAP: EAP entering state DISABLED&lt;br /&gt;
 Using existing control interface directory.&lt;br /&gt;
 Added interface wlan0&lt;br /&gt;
&lt;br /&gt;
In case of errors, and before every restart, delete the wpa_supplicant PID&lt;br /&gt;
file and bring the interface back up:&lt;br /&gt;
 # rm /var/run/wpa_supplicant*/wlan0&lt;br /&gt;
 # ifconfig wlan0 up&lt;br /&gt;
&lt;br /&gt;
== Step 5: Getting an IP address ==&lt;br /&gt;
If you have enabled DHCP on the WLAN router, you get an&lt;br /&gt;
IP address like this:&lt;br /&gt;
 # dhclient wlan0&lt;br /&gt;
Otherwise you have to assign one by hand via &amp;quot;ifconfig&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Note: Under Lenny/2.6.26.2-686 dhclient does not really work.&lt;br /&gt;
Instead you get the unpleasant message:&lt;br /&gt;
 wmaster0: unknown hardware address type 801&lt;br /&gt;
An alternative here is the tool &#039;&#039;&#039;wicd&#039;&#039;&#039;. To install it, add the&lt;br /&gt;
following entry to your sources.list (for Lenny, otherwise adjust&lt;br /&gt;
accordingly):&lt;br /&gt;
 deb http://www.backports.org/debian lenny-backports main contrib non-free&lt;br /&gt;
...followed by the install commands:&lt;br /&gt;
 # aptitude update&lt;br /&gt;
 # apt-get -t lenny-backports install wicd&lt;br /&gt;
Then start the daemon with &amp;quot;wicd&amp;quot;. Calling &amp;quot;wicd-client&amp;quot; opens a&lt;br /&gt;
selection menu in which you can choose your SSID and connect with &amp;quot;Return&amp;quot; or&lt;br /&gt;
&amp;quot;C&amp;quot;(onnect).&amp;lt;br&amp;gt;&lt;br /&gt;
The biggest bug in this software is probably that you get to reinstall it&lt;br /&gt;
after every kernel update.&lt;br /&gt;
&lt;br /&gt;
==Step 6: Start the browser and adjust name resolution if necessary==&lt;br /&gt;
Now you should be able to reach the Internet. If not, the problem may be&lt;br /&gt;
name resolution. One way to check is to enter an IP address as the URL,&lt;br /&gt;
for example http://216.34.181.60. If you end up on the&lt;br /&gt;
Sourceforge page, the file &amp;quot;/etc/resolv.conf&amp;quot; was not generated correctly.&lt;br /&gt;
Open the file and enter:&lt;br /&gt;
 nameserver &amp;lt;IP of the AP&amp;gt;&lt;br /&gt;
You can see the access point IP e.g. after a successful dhclient run (step 5).&lt;br /&gt;
Then restart the browser.&lt;br /&gt;
&lt;br /&gt;
==Troubleshooting==&lt;br /&gt;
=== My WLAN card is not shown in &amp;quot;ifconfig&amp;quot;.===&lt;br /&gt;
The zydas chipset of the Sitecom WLAN stick does not always seem to be loaded&lt;br /&gt;
cleanly into the kernel when Debian boots. If no IP address is assigned despite&lt;br /&gt;
a correct configuration (long dhclient timeout), the zydas kernel module&lt;br /&gt;
must be removed and then reloaded:&lt;br /&gt;
 # rmmod zd1211rw&lt;br /&gt;
 # modprobe zd1211rw&lt;br /&gt;
 # ifconfig wlan0 up&lt;br /&gt;
 # ... (configuration steps for WEP or WPA)&lt;br /&gt;
 # dhclient wlan0&lt;br /&gt;
(or alternatively wicd-client, depending...)&lt;br /&gt;
&lt;br /&gt;
=== My SSID is not found ===&lt;br /&gt;
Some older WLAN stick models cannot send/receive on the higher frequency&lt;br /&gt;
channels. In that case choose a lower channel. On the Fritzbox this can be&lt;br /&gt;
done e.g. via the menu Einstellungen -&amp;gt; WLAN -&amp;gt; Funkkanal.&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=Misc_(multimedia)&amp;diff=628</id>
		<title>Misc (multimedia)</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=Misc_(multimedia)&amp;diff=628"/>
		<updated>2025-07-10T21:35:56Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* ffmpeg */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Transferring photos from a smartphone ==&lt;br /&gt;
* Quick and stress-free with &#039;&#039;&#039;jmtpfs&#039;&#039;&#039;, which can be installed as a package.&amp;lt;br&amp;gt;&lt;br /&gt;
* Connect the smartphone to the micro-USB port and choose the type of data transfer:&lt;br /&gt;
 &amp;quot;Daten übertragen (mtp)&amp;quot;&lt;br /&gt;
* Mount the data e.g. to /mnt:&lt;br /&gt;
 jmtpfs /mnt&lt;br /&gt;
and copy it from there.&lt;br /&gt;
&lt;br /&gt;
Note: If the following error occurs under KDE,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Listing raw device(s)&lt;br /&gt;
Device 0 (VID=...) is a Samsung Galaxy models (MTP).&lt;br /&gt;
Found 1 device(s):&lt;br /&gt;
Samsung: Galaxy models (MTP) (...) @ bus 1, dev 10&lt;br /&gt;
Attempting to connect device(s)&lt;br /&gt;
libusb_claim_interface() reports device is busy, likely in use by GVFS or KDE MTP device handling already&lt;br /&gt;
LIBMTP PANIC: Unable to initialize device&lt;br /&gt;
Unable to open raw device 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
you can try the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 killall kiod5&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or, under xfce:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
killall gvfs-udisk2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Unfortunately, under xfce it is apparently no longer possible to mount a smartphone with the tools mentioned above. Another option is the Android debugger &#039;&#039;&#039;adb&#039;&#039;&#039;.&amp;lt;br&amp;gt;&lt;br /&gt;
Install it and inspect the contents of the memory card:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo apt install adb&lt;br /&gt;
adb devices&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(At least one device should be listed)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
adb shell&lt;br /&gt;
ls /sdcard/DCIM&lt;br /&gt;
ctrl+d&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You also have to change a few settings on your Android device:&lt;br /&gt;
* Enable developer mode (tap the serial/build number 7 times)&lt;br /&gt;
* Enable USB debugging (under the developer options)&lt;br /&gt;
* After plugging in the USB cable, choose data transfer (instead of charging only)&lt;br /&gt;
&lt;br /&gt;
A detailed guide for these steps is available here:&amp;lt;br&amp;gt;&lt;br /&gt;
https://droidwin.com/fix-adb-device-not-found-error/&lt;br /&gt;
&lt;br /&gt;
With&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
adb pull &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
you can now pull the files down.&lt;br /&gt;
&lt;br /&gt;
You can also pass arguments to the adb shell, e.g.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
adb shell ls /sdcard/DCIM/Camera&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Annotations ===&lt;br /&gt;
* Under xfce4, photos are best downloaded with jmtpfs. Beforehand, competing processes with access to the camera have to be killed in an endless loop:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
while sleep 0.1; do pkill kiod5; pkill gvfs-udisks2; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and the device mounted in another shell:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir /tmp/cam&lt;br /&gt;
jmtpfs /tmp/cam&lt;br /&gt;
Device 0 (VID=18d1 and PID=4ee2) is a Google Inc Nexus/Pixel (MTP+ADB).&lt;br /&gt;
Android device detected, assigning default bug flags&lt;br /&gt;
ls /tmp/cam/Interner\ gemeinsamer\ Speicher/DCIM/&lt;br /&gt;
Camera  PeakLens&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Under KDE you can make do with adb (see above) by killing kiod+gvfs in an endless loop (see above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 while sleep 0.1; do pkill kiod5; pkill gvfs-udisks2; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Downloading too many photos at once aborts at some point. Here are a few commands for downloading just a selection (e.g. Jan. 2024):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 adb shell ls /sdcard/DCIM/Camera|grep IMG_2024 &amp;gt;/tmp/img.txt &lt;br /&gt;
 for IMG in $(grep 202401 /tmp/img.txt) ; do adb pull /sdcard/DCIM/Camera/$IMG CAM/; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
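The selection logic of the loop above can be dry-run locally before touching the device; the file names below are invented:

```shell
# Dry run: filter the January 2024 images from a synthetic listing
# before pulling them one by one (file names are made up).
printf 'IMG_20240101_120000.jpg\nIMG_20231231_235959.jpg\nIMG_20240115_090000.jpg\n' > /tmp/img.txt
for IMG in $(grep 202401 /tmp/img.txt); do
  echo "would pull: /sdcard/DCIM/Camera/$IMG"
done
```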
&lt;br /&gt;
== Creating a bootable USB stick ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Prerequisite:&#039;&#039;&#039; A folder named &#039;&#039;ISO&#039;&#039; containing bootable files (e.g. for a Windows image)&lt;br /&gt;
&lt;br /&gt;
An ISO image can be created very easily with the tool &#039;&#039;&#039;xorrisofs&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;First it has to be installed:&lt;br /&gt;
 $ sudo apt install xorriso&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Creating the ISO then works as follows:&lt;br /&gt;
 $ xorrisofs -r -J -o ./windows10.iso ./ISO&lt;br /&gt;
&lt;br /&gt;
The ISO image can then be copied to the USB stick with &amp;quot;dd&amp;quot;. Depending on the device, this can look like this:&lt;br /&gt;
 # dd if=windows.iso of=/dev/sde&lt;br /&gt;
Attention: do &#039;&#039;&#039;not&#039;&#039;&#039; create/specify a partition!&lt;br /&gt;
&lt;br /&gt;
==Extracting tracks from a music CD==&lt;br /&gt;
&lt;br /&gt;
 $  icedax -D &amp;lt;devicename&amp;gt; -t 1 song1.wav&lt;br /&gt;
e.g.&lt;br /&gt;
 $ icedax -D /dev/cdrom -t 2 song2.wav&lt;br /&gt;
 $ icedax -D /dev/cdrom0 -t 3+4 mixed.wav&lt;br /&gt;
(turns tracks 3 and 4 into one big file)&lt;br /&gt;
&lt;br /&gt;
If you prefer mp3s, you can use &#039;&#039;&#039;cdda2mp3&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==Burning a music CD==&lt;br /&gt;
For a music CD, NO ISO image is created!&amp;lt;br&amp;gt;&lt;br /&gt;
Instead, store all wav files in one folder, change into it and&lt;br /&gt;
run:&lt;br /&gt;
 # cdrecord -dao *.wav&lt;br /&gt;
&lt;br /&gt;
In newer Debian releases, cdrecord is replaced by &#039;&#039;wodim&#039;&#039;. Example:&lt;br /&gt;
&lt;br /&gt;
 $ /usr/bin/wodim -v gracetime=2 dev=/dev/sr0 speed=4 -sao driveropts=burnfree -data&lt;br /&gt;
&lt;br /&gt;
==Burning a DVD==&lt;br /&gt;
 # growisofs -R -J -Z /dev/dvd /path/to/file-or-directory&lt;br /&gt;
&lt;br /&gt;
where -R=Rockridge, -J=Joliet and -Z=device (may vary from system to system). If you are unsure, you can first simulate the burn with -dry-run.&amp;lt;br&amp;gt;&lt;br /&gt;
A very nice overview of DVDs in general and under Linux is available here&amp;lt;br&amp;gt;&lt;br /&gt;
http://www.rakekniven.de/linux/k-base/dvd-brennen.php&lt;br /&gt;
&lt;br /&gt;
== Extracting from a DVD ==&lt;br /&gt;
In testing, handbrake turned out to be a usable tool for creating an mp4 from a DVD. It can be installed as a package. Only the sound comes out a bit quiet; this can be boosted with [[#ffmpeg|ffmpeg]] (see below).&lt;br /&gt;
&lt;br /&gt;
== Monitor kalibrieren ==&lt;br /&gt;
&lt;br /&gt;
Sehr empfehlenswert ist das Tool DisplayCAL [https://displaycal.net/], das gängige Kalibrierungsgeräte wie z.B. Spyder, unterstützt.&amp;lt;br&amp;gt;&lt;br /&gt;
Verwendete Hardware:&amp;lt;br&amp;gt;&lt;br /&gt;
[https://www.dpreview.com/articles/3856869836/spyder3 Spyder 3]&lt;br /&gt;
&lt;br /&gt;
Hier gibt es eine umfassende Anleitung:&lt;br /&gt;
&lt;br /&gt;
https://www.reallinuxuser.com/how-to-color-calibrate-your-monitor-in-linux/&lt;br /&gt;
&lt;br /&gt;
=== Note ===&lt;br /&gt;
&lt;br /&gt;
The original version uses Python 2 and is therefore no longer supported on common Linux distributions. There is, however, a successor project: https://github.com/eoyilmaz/displaycal-py3&lt;br /&gt;
and, based on it, a Debian package:&lt;br /&gt;
 $ sudo apt install displaycal&lt;br /&gt;
&lt;br /&gt;
The notes below were tested with the old version and Python 2.&lt;br /&gt;
&lt;br /&gt;
=== Pro tip ===&lt;br /&gt;
&lt;br /&gt;
* If you are getting poor results or a color cast, try 5300K as the white point.&lt;br /&gt;
&lt;br /&gt;
[[File:Screenshot 2021-02-23 19-55-44.png|800px]]&lt;br /&gt;
&lt;br /&gt;
* At the end, a &#039;&#039;&#039;profile&#039;&#039;&#039; (.icc file) is generated. It can be loaded with xcalib:&lt;br /&gt;
&lt;br /&gt;
 $ xcalib &amp;lt;profilname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
(and a bit of theory on the configurable parameters:&lt;br /&gt;
&lt;br /&gt;
https://displaycal.net/#colorimeter-corrections )&lt;br /&gt;
&lt;br /&gt;
== ffmpeg ==&lt;br /&gt;
&#039;&#039;&#039;The&#039;&#039;&#039; tool for editing videos (and sound). Here are a few examples:&lt;br /&gt;
&lt;br /&gt;
* Rotate a video by 180°:&lt;br /&gt;
 $ ffmpeg -i input.mp4 -vf &amp;quot;transpose=2,transpose=2&amp;quot; output.mp4&lt;br /&gt;
&lt;br /&gt;
* Remove the sound:&lt;br /&gt;
 $ ffmpeg -i MVI_1747.MOV -vcodec copy -an MVI_1747_mute.MOV&lt;br /&gt;
* Extract the sound (save as mp3):&lt;br /&gt;
 # ffmpeg -i MVI_1751.MOV MVI_1751.mp3&lt;br /&gt;
* Change the volume (e.g. 1.5x):&lt;br /&gt;
 # ffmpeg -i input.wav -af &amp;quot;volume=1.5&amp;quot; output.wav&lt;br /&gt;
&lt;br /&gt;
* Cut off everything after position n:&lt;br /&gt;
 #  ffmpeg -i MVI_1752.mp3 -to 00:00:27  MVI_1752_short.mp3&lt;br /&gt;
&lt;br /&gt;
* Cut out a section and save it as mp3 in one step:&lt;br /&gt;
 # ffmpeg -i Buena_Vista_Social_Club.mp4 -ss 00:04:16 -to 00:09:16 bvsc2.mp3&lt;br /&gt;
&lt;br /&gt;
* Add audio to a video:&lt;br /&gt;
 # ffmpeg -i MVI_1747_mute.MOV -i MVI_1752a.mp3 -codec copy -shortest MVI_1747_sound.MOV&lt;br /&gt;
&lt;br /&gt;
* Merge 2 sound files:&lt;br /&gt;
 #  ffmpeg -i &amp;quot;concat:MVI_1751.mp3|MVI_1753.mp3&amp;quot; -c copy MVI_1752.mp3&lt;br /&gt;
(For larger numbers of files, sox is a good alternative:&amp;lt;br&amp;gt;&lt;br /&gt;
 # sox file1.wav file2.wav ... filen.wav newfile.wav&lt;br /&gt;
)&lt;br /&gt;
* Merge 2 video files&lt;br /&gt;
If the example above cannot be carried over to videos due to different encodings:&lt;br /&gt;
 $ ffmpeg -i file1.mp4 -i file2.mp4 -filter_complex &amp;quot;[0:v][0:a][1:v][1:a] concat=n=2:v=1:a=1 [outv] [outa]&amp;quot; -map &amp;quot;[outv]&amp;quot; -map &amp;quot;[outa]&amp;quot; out.mp4&lt;br /&gt;
&lt;br /&gt;
Example of merging 3 videos including sound:&lt;br /&gt;
 $ ffmpeg -i VID_20241225_part1.mp4 -i VID_20241225_part2.mp4 -i VID_20241225_142603~2.mp4 -filter_complex &amp;quot;[0:v:0][0:a:0][1:v:0][1:a:0][2:v:0][2:a:0] concat=n=3:v=1:a=1[outv][outa]&amp;quot; -map &amp;quot;[outv]&amp;quot; -map &amp;quot;[outa]&amp;quot; output.mp4&lt;br /&gt;
&lt;br /&gt;
Example of merging 4 videos into 1 large one:&lt;br /&gt;
 $ ffmpeg -i part1.mp4 -i part2.mp4 -i MVI_0309.MP4 -i MVI_0310.MP4 -filter_complex &amp;quot;[0:v:0][0:a:0][1:v:0][1:a:0][2:v:0][2:a:0][3:v:0][3:a:0]concat=n=4:v=1:a=1[outv][outa]&amp;quot; -map &amp;quot;[outv]&amp;quot; -map &amp;quot;[outa]&amp;quot; bigmovie.mp4&lt;br /&gt;
&lt;br /&gt;
Alternative:&amp;lt;br&amp;gt;&lt;br /&gt;
Create a file listing the video files:&lt;br /&gt;
 cat &amp;gt; myfile.txt&amp;lt;&amp;lt;EOF&lt;br /&gt;
 file &#039;MVI_1547.mp4&#039;&lt;br /&gt;
 file &#039;MVI_1610.mp4&#039;&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
 $ ffmpeg -f concat -i myfile.txt -c copy output.mp4&lt;br /&gt;
&lt;br /&gt;
If the video does not play through cleanly, open it with avidemux and cut out the seam.&lt;br /&gt;
&lt;br /&gt;
(if you still haven&#039;t had enough: https://trac.ffmpeg.org/wiki/Concatenate )&lt;br /&gt;
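The list file above can also be generated from the shell instead of typing it by hand; the clip names below are made-up placeholders, not files from this page:&lt;br /&gt;

```shell
# Build the ffmpeg concat-demuxer list: printf repeats its format string
# once per argument, so every input gets its own quoted "file '...'" line.
# (clip_a.mp4 and clip_b.mp4 are invented example names.)
printf "file '%s'\n" clip_a.mp4 clip_b.mp4 > myfile.txt
cat myfile.txt
```

With many parts, a glob such as part*.mp4 can replace the explicit names.&lt;br /&gt;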
&lt;br /&gt;
&lt;br /&gt;
* Example script for merging 2 video files:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 &lt;br /&gt;
 F1=MVI_1869.MOV &lt;br /&gt;
 F2=MVI_1870.MOV&lt;br /&gt;
 RESULT=MVI_1871.mp4&lt;br /&gt;
 &lt;br /&gt;
 # sound&lt;br /&gt;
 #ffmpeg -i $F1 intermediate.mp3&lt;br /&gt;
 #ffmpeg -i $F2 intermediate.mp3&lt;br /&gt;
 &lt;br /&gt;
 # movie without sound&lt;br /&gt;
 # ffmpeg -i $F1 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate1.ts&lt;br /&gt;
 # ffmpeg -i $F2 -c copy -bsf:v h264_mp4toannexb -f mpegts intermediate2.ts&lt;br /&gt;
 # ffmpeg -f mpegts -i &amp;quot;concat:intermediate1.ts|intermediate2.ts&amp;quot; -c copy -bsf:a aac_adtstoasc $RESULT &lt;br /&gt;
 # rm intermediate{1,2}.ts&lt;br /&gt;
 &lt;br /&gt;
 # movie with sound&lt;br /&gt;
 ffmpeg -i $F1 -i $F2 \&lt;br /&gt;
  -filter_complex &amp;quot;[0:v:0] [0:a:0] [1:v:0] [1:a:0] concat=n=2:v=1:a=1 [v] [a]&amp;quot; \&lt;br /&gt;
  -map &amp;quot;[v]&amp;quot; -map &amp;quot;[a]&amp;quot; $RESULT&lt;br /&gt;
&lt;br /&gt;
* Time-lapse/slow motion&lt;br /&gt;
&lt;br /&gt;
Example for double speed:&lt;br /&gt;
 ffmpeg -i input.mp4 -filter:v &amp;quot;setpts=0.5*PTS&amp;quot; output.mp4&lt;br /&gt;
&lt;br /&gt;
https://medium.com/@sekhar.rahul/creating-a-time-lapse-video-on-the-command-line-with-ffmpeg-1a7566caf877&lt;br /&gt;
&lt;br /&gt;
* Add subtitles&lt;br /&gt;
&lt;br /&gt;
If the subtitles come as an srt file, you first have to convert them to the ass format:&lt;br /&gt;
&lt;br /&gt;
 $ ffmpeg -i subtitles.srt subtitles.ass&lt;br /&gt;
&lt;br /&gt;
[[Subtitles_example.srt | Here]] is an example of an .srt file in the format&lt;br /&gt;
 &amp;lt;sequence number&amp;gt;&lt;br /&gt;
 &amp;lt;time range HH:MM:SS,mmm --&amp;gt; HH:MM:SS,mmm&amp;gt;&lt;br /&gt;
 &amp;lt;text&amp;gt;&lt;br /&gt;
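A minimal file in that format can be written on the shell; the subtitle text below is invented for the demo and is not the linked Subtitles_example.srt:&lt;br /&gt;

```shell
# Write a two-entry .srt file: sequence number, time range, text,
# with a blank line separating the entries. printf emits one argument
# per output line.
printf '%s\n' \
  '1' \
  '00:00:01,000 --> 00:00:04,000' \
  'Hello world' \
  '' \
  '2' \
  '00:00:05,000 --> 00:00:08,000' \
  'Second subtitle' > example.srt
cat example.srt
```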
&lt;br /&gt;
Create the video with subtitles:&lt;br /&gt;
&lt;br /&gt;
 $ ffmpeg -i input.mp4 -vf ass=subtitles.ass output.mp4&lt;br /&gt;
&lt;br /&gt;
If you still haven&#039;t had enough:&lt;br /&gt;
&lt;br /&gt;
https://img.ly/blog/ultimate-guide-to-ffmpeg/&lt;br /&gt;
&lt;br /&gt;
== Sound test with arecord ==&lt;br /&gt;
Sometimes it is helpful to test headset and speakers. This is how:&lt;br /&gt;
&lt;br /&gt;
 $ arecord -f S16_LE -r 3000 | aplay -vvv&lt;br /&gt;
&lt;br /&gt;
== avidemux ==&lt;br /&gt;
Nice video editing tool. Unfortunately no longer available as a Debian package, but it can be downloaded here:&amp;lt;br&amp;gt;&lt;br /&gt;
https://www.fosshub.com/Avidemux.html&lt;br /&gt;
&lt;br /&gt;
It is best to download the AppImage and symlink it, e.g.&lt;br /&gt;
 # ln -s /usr/local/bin/3rdparty/avidemux_2.8.1.appImage /usr/local/bin/avidemux&lt;br /&gt;
&lt;br /&gt;
== Image viewers ==&lt;br /&gt;
&lt;br /&gt;
=== irfanview ===&lt;br /&gt;
https://www.irfanview.net/faq.htm&lt;br /&gt;
&lt;br /&gt;
Natively it exists only for Windows, but it can be installed on Linux in (at least) 2 ways:&lt;br /&gt;
&lt;br /&gt;
==== 1. Snap ====&lt;br /&gt;
https://snapcraft.io/install/irfanview/debian&lt;br /&gt;
 $ sudo apt install snapd&lt;br /&gt;
 $ sudo snap install irfanview&lt;br /&gt;
 $ type -a irfanview&lt;br /&gt;
&amp;lt;i&amp;gt;irfanview is /snap/bin/irfanview&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== 2. Wine ====&lt;br /&gt;
Be sure to download the 64-bit version!&lt;br /&gt;
 $ wine iview460_x64_setup.exe&lt;br /&gt;
After that, a Windows installer window appears.&lt;br /&gt;
 $ type -a irfanview&lt;br /&gt;
&amp;lt;i&amp;gt;irfanview is aliased to `wine /home/kathrin/.wine/drive_c/Program\ Files/IrfanView/i_view64.exe&#039;&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== gwenview ===&lt;br /&gt;
Has a few nice features, e.g. displaying several images side by side for comparison. Comes with KDE or as a package of the same name.&lt;br /&gt;
&lt;br /&gt;
== Playing .wav files with xine ==&lt;br /&gt;
If a &amp;quot;missing plugin&amp;quot; error message appears...&lt;br /&gt;
&lt;br /&gt;
* install libxine2-misc-plugins&lt;br /&gt;
&lt;br /&gt;
==Image::Magick==&lt;br /&gt;
Perl module for batch processing of images (shrinking, enlarging,&lt;br /&gt;
...).&amp;lt;br&amp;gt;&lt;br /&gt;
Example script [[resizeXpercent.pl]]: &amp;lt;br&amp;gt;&lt;br /&gt;
All images in a given directory are shrunk by X percent and saved&lt;br /&gt;
to another directory.&lt;br /&gt;
&lt;br /&gt;
==Using Image Magick with jpeg support==&lt;br /&gt;
install libjpeg-dev&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=627</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=627"/>
		<updated>2025-07-04T08:40:56Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Create new app */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Here are some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (login only)&lt;br /&gt;
* registry.redhat.io&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with the help of podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red &#039;&#039;&#039;--template&#039;&#039;&#039;=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue &#039;&#039;&#039;--image&#039;&#039;&#039; registry.redhat.com/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
Create service:&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; --port 8080 --target-port 8080&lt;br /&gt;
Create route:&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
Alternatively, create an ingress:&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift because security context constraints do not allow apps to run on privileged ports.&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image where port &amp;gt;= 1024 is used&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again, you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &#039;if [ $(date +%H) -gt 15 ]; then echo &amp;quot;Hands up, weekend!&amp;quot;; fi&#039;&lt;br /&gt;
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
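The hour check from the cronjob above can be rehearsed locally with a fixed hour instead of date +%H, so the branch taken is deterministic:&lt;br /&gt;

```shell
# Same test the job runs: print the message only for hours after 15:00.
# HOUR is pinned here for reproducibility; the job itself would use
# the current hour from date +%H.
HOUR=16
if [ "$HOUR" -gt 15 ]; then echo 'Hands up, weekend!'; fi
```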
&lt;br /&gt;
== Create service from deployment ==&lt;br /&gt;
 $ oc expose deployment/helloworld&lt;br /&gt;
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not included and must be queried separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
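What the xargs -n 1 part does can be seen offline, with echo standing in for the real oc get call:&lt;br /&gt;

```shell
# xargs -n 1 invokes the command once per input token, so each resource
# type from the pipe gets its own separate "oc get" invocation.
printf '%s\n' pods services configmaps | xargs -n 1 echo oc get
```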
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (and a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs limits:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful!&lt;br /&gt;
Only the free memory is shown, not the allocatable memory. For a more realistic picture use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show Uptime:&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers:&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only pods from deployment mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
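The same selection can be rehearsed without a cluster by feeding jq a hand-written pod list (the probe values below are invented):&lt;br /&gt;

```shell
# jq equivalent of selecting .items[0].spec.containers[0].readinessProbe,
# applied to a minimal sample document instead of live oc output.
echo '{"items":[{"spec":{"containers":[{"readinessProbe":{"httpGet":{"port":8080},"periodSeconds":10}}]}}]}' \
  | jq -c '.items[0].spec.containers[0].readinessProbe'
```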
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create a yaml file with this content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
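The base64 -w0 part can be checked locally; the one-line yaml below is a stand-in for the real alertmanager config:&lt;br /&gt;

```shell
# base64 -w0 (GNU coreutils) encodes without line wraps, which yields the
# single-line value the patch stores under data.config.yaml; decoding it
# must return the original file content unchanged.
printf 'resolve_timeout: 5m\n' > am_demo.yaml
base64 -w0 am_demo.yaml | base64 -d
```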
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;the deployment resource had no rollout option -&amp;gt; you had to patch something to make it restart, e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting):&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - drain cordons the node anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug a pod (e.g. if it is in CrashLoopBackOff):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of the systemd unit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
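The select() filter itself can be tried on hand-made audit records; both records below are invented, real ones come from oc adm node-logs:&lt;br /&gt;

```shell
# Only the record whose verb is "delete" passes the filter; the "get"
# record is silently dropped.
printf '%s\n' \
  '{"verb":"get","objectRef":{"resource":"pods"}}' \
  '{"verb":"delete","objectRef":{"resource":"pods"}}' \
  | jq -c 'select(.verb == "delete")'
```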
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates the directory must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(delete secrets beforehand if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
The namespace gets stuck in status terminating.&lt;br /&gt;
&lt;br /&gt;
Watch out for secrets that are left over and not deletable.&lt;br /&gt;
Set their finalizers to an empty list:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
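What the patch changes can be previewed with jq on a sample metadata object, no cluster needed:&lt;br /&gt;

```shell
# Replacing the finalizers list with an empty list is what unblocks
# deletion; the merge patch above performs the same edit on the live secret.
echo '{"metadata":{"finalizers":["kubernetes"]}}' \
  | jq -c '.metadata.finalizers = []'
```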
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the system.&lt;br /&gt;
&lt;br /&gt;
In the deployment, add the following lines under the &amp;quot;spec&amp;quot; statement:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must give admin privileges to the serviceaccount under which the deployment runs. If nothing else is configured, this is normally the default service account:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the install command; the user only has to provide some information (which platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User provisioned infrastructure cluster&lt;br /&gt;
* DNS and load balancing must already be in place&lt;br /&gt;
* Manual installation; download the ova file (in case of vSphere)&lt;br /&gt;
* masters created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* &#039;&#039;no&#039;&#039; keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, using preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no load balancer outage during updates&lt;br /&gt;
&lt;br /&gt;
A change from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can get more shortcuts by typing:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip ||installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
	<entry>
		<id>https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=626</id>
		<title>OpenShift Cheatsheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nomorebluescreen.de/index.php?title=OpenShift_Cheatsheet&amp;diff=626"/>
		<updated>2025-05-28T09:15:20Z</updated>

		<summary type="html">&lt;p&gt;Sunflower: /* Pods */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
Here are some helpful OpenShift commands which work (at least) since version &amp;gt;= 4.11&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&#039;&#039;&#039;How to get a token:&#039;&#039;&#039;&lt;br /&gt;
https://oauth-openshift.apps.ocp.example.com/oauth/token/display&lt;br /&gt;
&lt;br /&gt;
You might need it for login or automation.&lt;br /&gt;
 $ oc login --token=... --server=https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
Use the token directly against the API:&lt;br /&gt;
 $ curl -H &amp;quot;Authorization: Bearer $TOKEN&amp;quot; https://api.ocp.example.com:6443/apis/user.openshift.io/v1/users/~&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Login with username/password:&#039;&#039;&#039;&lt;br /&gt;
 $ oc login -u admin -p password https://api.ocp.example.com:6443&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Get console URL:&#039;&#039;&#039;&lt;br /&gt;
 $ oc whoami --show-console&lt;br /&gt;
&lt;br /&gt;
= CLI tool =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Enable autocompletion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 $ oc completion bash &amp;gt; /etc/profile.d/oc_completion_bash.sh&lt;br /&gt;
&lt;br /&gt;
= Registries =&lt;br /&gt;
* registry.access.redhat.com (no login required)&lt;br /&gt;
* registry.redhat.io (login required)&lt;br /&gt;
* quay.io&lt;br /&gt;
&lt;br /&gt;
= Creating =&lt;br /&gt;
&lt;br /&gt;
 $ skopeo login -u user -p password registry.redhat.io&lt;br /&gt;
 $ skopeo list-tags docker://docker.io/nginx&lt;br /&gt;
 $ oc run &amp;lt;mypod-nginx&amp;gt; --image docker.io/nginx:stable-alpine (--env NGINX_VERSION=1.24.1)&lt;br /&gt;
&lt;br /&gt;
 $ skopeo inspect (--config) docker://registry.redhat.io/rhel8/httpd-24&lt;br /&gt;
&lt;br /&gt;
Search images with the help of podman:&lt;br /&gt;
 $ podman search &amp;lt;wordpress&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create new app ==&lt;br /&gt;
with label and parameters&lt;br /&gt;
&lt;br /&gt;
from template&lt;br /&gt;
 $ oc new-app (--name mysql-server) -l team=red --template=mysql-persistent -p MYSQL_USER=developer -p MYSQL_PASSWORD=topsecret&lt;br /&gt;
&lt;br /&gt;
from image&lt;br /&gt;
 $ oc new-app -l team=blue --image registry.redhat.io/rhel9/mysql-80:1 -e MYSQL_ROOT_PASSWORD=redhat -e MYSQL_USER=developer -e MYSQL_PASSWORD=evenmoresecret&lt;br /&gt;
&lt;br /&gt;
=== Set environment variables afterwards ===&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_DATABASE=wikidb&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_USER=mediawiki&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_PASSWORD=wikitopsecret&lt;br /&gt;
 oc set env deployment/mariadb MARIADB_ROOT_PASSWORD=gehheim&lt;br /&gt;
&lt;br /&gt;
(Not recommended for passwords; better use secrets and configmaps, see below)&lt;br /&gt;
&lt;br /&gt;
== Make new app available ==&lt;br /&gt;
Create service:&lt;br /&gt;
 $ oc expose deployment &amp;lt;mydeployment&amp;gt; --name &amp;lt;service-mynewapp&amp;gt; --port 8080 --target-port 8080&lt;br /&gt;
Create route:&lt;br /&gt;
 $ oc expose service &amp;lt;service-mynewapp&amp;gt; --name &amp;lt;route-to-mynewapp&amp;gt;&lt;br /&gt;
Afterwards the app is reachable from outside.&lt;br /&gt;
Alternative: create an ingress instead:&lt;br /&gt;
 $ oc create ingress &amp;lt;ingress-mynewapp&amp;gt; --rule=&amp;quot;mynewapp.ocp4.example.de/*=service-mynewapp:8080&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Create Deployment from image ==&lt;br /&gt;
 $ oc create deployment demo-pod --port 3306  --image registry.ocp.example.de:8443/rhel9/mysql-80&lt;br /&gt;
&lt;br /&gt;
=== Problem web server ===&lt;br /&gt;
In some images web servers run on port 80, which leads to permission problems in OpenShift, as the security context constraints do not allow apps to run on privileged ports (&amp;lt; 1024).&lt;br /&gt;
&lt;br /&gt;
Error message:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80 (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; either choose an image that uses a port &amp;gt;= 1024&lt;br /&gt;
&amp;lt;br&amp;gt;-&amp;gt; or add permissions to the corresponding service account&lt;br /&gt;
&lt;br /&gt;
 $ oc get pod &amp;lt;your pod name&amp;gt; -o yaml | grep -i serviceAccountName&lt;br /&gt;
    serviceAccountName: default&lt;br /&gt;
&lt;br /&gt;
 $ oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
(when you want to get rid of this setting again you have to edit the annotations field of the deployment and re-create the pod)&lt;br /&gt;
&lt;br /&gt;
 $ oc delete pod &amp;lt;your pod name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create Job from image ==&lt;br /&gt;
 $ oc create job testjob --image registry.ocp.example.de:8443/rhel9/mysql-80 -- /bin/bash -c &amp;quot;create database events; mysql events -e &#039;source /tmp/dump.sql;&#039;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cronjob:&lt;br /&gt;
 $ oc create cronjob mynewjob --image registry.ocp4.example.de:8443/ubi8/ubi:latest --schedule=&#039;* * * * 5&#039; -- /bin/bash -c &amp;quot;if [ $(date +%H) -gt 15 ]; then echo &#039;Hands up, weekend!&#039;; fi&amp;quot;&lt;br /&gt;
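The shell logic inside that cronjob can be tried out locally; this is a minimal sketch in which a fixed hour stands in for $(date +%H), so the run is deterministic and needs no cluster:

```shell
# Stand-in for the cronjob's command line: a fixed value replaces $(date +%H)
# so the check is reproducible outside the cluster.
hour=16
if [ "$hour" -gt 15 ]; then
  echo 'Hands up, weekend!'
fi
```

Note that date +%H zero-pads the hour (e.g. 08); the POSIX test builtin still compares such values numerically.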
&lt;br /&gt;
Check output of job:&lt;br /&gt;
 $ oc logs job/&amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create service from deployment ==&lt;br /&gt;
 $ oc expose deployment/helloworld&lt;br /&gt;
&lt;br /&gt;
== Create Secret from String ==&lt;br /&gt;
 $ oc create secret generic test --from-literal=foo=bar&lt;br /&gt;
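For reference, secret values end up base64-encoded in the object&#039;s .data field; the encoding of the literal above can be reproduced locally:

```shell
# foo=bar is stored under .data.foo as the base64 encoding of "bar":
printf '%s' 'bar' | base64   # prints YmFy
```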
&lt;br /&gt;
= Watching =&lt;br /&gt;
&lt;br /&gt;
== Common info ==&lt;br /&gt;
General cluster/resource info:&lt;br /&gt;
 $ oc cluster-info&lt;br /&gt;
&lt;br /&gt;
Which resources are there?&lt;br /&gt;
 $ oc api-resources (--namespaced=false)(--api-group=config.openshift.io)(--api-group=&#039;&#039;)&lt;br /&gt;
                  (in|without namespace)(openshift specific)(core-api-group only)&lt;br /&gt;
&lt;br /&gt;
Explain resources:&lt;br /&gt;
 $ oc explain service&lt;br /&gt;
&lt;br /&gt;
Describe resources:&lt;br /&gt;
 $ oc describe service&lt;br /&gt;
&lt;br /&gt;
Inspect resources:&lt;br /&gt;
 $ oc adm inspect deployment XYZ --dest-dir /home/student/inspection&lt;br /&gt;
(Attention: check the resulting files for secrets, passwords, private keys etc. before sending them anywhere)&lt;br /&gt;
&lt;br /&gt;
Get all resources:&lt;br /&gt;
 $ oc get all&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;&#039;Attention:&#039;&#039;&#039; templates, secrets, configmaps and pvcs are not included in &amp;quot;all&amp;quot; and must be queried separately)&lt;br /&gt;
&lt;br /&gt;
 $ oc get template,secret,cm,pvc&lt;br /&gt;
&lt;br /&gt;
List resources in context of another user/serviceaccount:&lt;br /&gt;
 $ oc get persistentvolumeclaims -n openshift-monitoring --as=system:serviceaccount:openshift-monitoring:default&lt;br /&gt;
&lt;br /&gt;
== Resources which are not shown with the &amp;quot;oc get all&amp;quot; command ==&lt;br /&gt;
 $ oc api-resources --verbs=list --namespaced -o name | xargs -n 1 oc get --show-kind --ignore-not-found -n mynamespace&lt;br /&gt;
&lt;br /&gt;
== Nodes ==&lt;br /&gt;
&lt;br /&gt;
Get status of all nodes:&lt;br /&gt;
 $ oc get nodes&lt;br /&gt;
&lt;br /&gt;
Get logs of a node (optionally of a specific unit):&lt;br /&gt;
 $ oc adm node-logs &amp;lt;nodename&amp;gt; -u crio&lt;br /&gt;
&lt;br /&gt;
Compare allocatable resources vs. capacity:&lt;br /&gt;
 $ oc get nodes &amp;lt;nodename&amp;gt; -o jsonpath=&#039;{&amp;quot;Allocatable:\n&amp;quot;}{.status.allocatable}{&amp;quot;\n\n&amp;quot;}{&amp;quot;Capacity:\n&amp;quot;}{.status.capacity}{&amp;quot;\n&amp;quot;}&#039;&lt;br /&gt;
&lt;br /&gt;
Get resource consumption:&lt;br /&gt;
 $ oc adm top nodes&lt;br /&gt;
&lt;br /&gt;
Be careful! Only the free memory is shown, not the allocatable memory. For a more realistic view use:&lt;br /&gt;
 $ oc adm top nodes --show-capacity&lt;br /&gt;
&lt;br /&gt;
( https://www.redhat.com/en/blog/using-oc-adm-top-to-monitor-memory-usage )&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
&lt;br /&gt;
Show machines (the AGE column indicates uptime):&lt;br /&gt;
 $ oc get machines -A&lt;br /&gt;
&lt;br /&gt;
Get state paused/not paused of machineconfigpool:&lt;br /&gt;
 $ oc get mcp worker -o jsonpath=&#039;{.spec.paused}&#039;&lt;br /&gt;
&lt;br /&gt;
== Pods ==&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of all pods:&lt;br /&gt;
 $ oc adm top pods -A --sum&lt;br /&gt;
&lt;br /&gt;
Get resource consumption of pods and containers:&lt;br /&gt;
 $ oc adm top pods -n &amp;lt;openshift-etcd&amp;gt; --containers&lt;br /&gt;
&lt;br /&gt;
Get all pods on a specific node:&lt;br /&gt;
 $ oc get pods --field-selector spec.nodeName=ocp-abcd1-worker-0 (-l myawesomelabel)&lt;br /&gt;
&lt;br /&gt;
Get only the pods belonging to the deploymentconfig mysql:&lt;br /&gt;
 $ oc get pods -l deploymentconfig=mysql&lt;br /&gt;
&lt;br /&gt;
Get pods&#039; readinessProbe:&lt;br /&gt;
 $ oc get pods -o jsonpath=&#039;{.items[0].spec.containers[0].readinessProbe}&#039; | jq&lt;br /&gt;
&lt;br /&gt;
Connect to pod and open a shell:&lt;br /&gt;
 $ oc exec -it &amp;lt;podname&amp;gt; -- /bin/bash&lt;br /&gt;
&lt;br /&gt;
Copy file(s) to pod:&lt;br /&gt;
 $ oc cp mysqldump.sql mysql-server:/tmp&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
&lt;br /&gt;
Sort Events by time:&lt;br /&gt;
 $ oc get events --sort-by=lastTimestamp&lt;br /&gt;
&lt;br /&gt;
Show egress IPs:&lt;br /&gt;
 $ oc get hostsubnets&lt;br /&gt;
&lt;br /&gt;
Show/edit initial configuration:&lt;br /&gt;
 $ oc get cm cluster-config-v1 -o yaml -n kube-system&lt;br /&gt;
   (edit)&lt;br /&gt;
&lt;br /&gt;
List alerts:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool alert --alertmanager.url=http://localhost:9093 -o extended&lt;br /&gt;
List silences:&lt;br /&gt;
 $ oc -n openshift-monitoring exec -ti alertmanager-main-0 -c alertmanager -- amtool silence query [alertname=ClusterNotUpgradable] --alertmanager.url=http://localhost:9093&lt;br /&gt;
&lt;br /&gt;
https://cloud.redhat.com/blog/how-to-use-amtool-to-manage-red-hat-advanced-cluster-management-for-kubernetes-alerts&lt;br /&gt;
&lt;br /&gt;
User rights to resources:&amp;lt;br&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can &amp;lt;verb&amp;gt; &amp;lt;resource&amp;gt;&lt;br /&gt;
 $ oc adm policy who-can patch machineconfigs&lt;br /&gt;
&lt;br /&gt;
= Running =&lt;br /&gt;
&lt;br /&gt;
== Projects/Namespaces ==&lt;br /&gt;
&lt;br /&gt;
Switch &#039;&#039;&#039;namespace&#039;&#039;&#039;:&lt;br /&gt;
 $ oc project &amp;lt;namespace&amp;gt;&lt;br /&gt;
Switch back to the default namespace:&lt;br /&gt;
 $ oc project default&lt;br /&gt;
&lt;br /&gt;
== Change resources ==&lt;br /&gt;
&lt;br /&gt;
=== Environment variables ===&lt;br /&gt;
Set environment variables on running deployment:&lt;br /&gt;
 $ oc set env deployment/helloworld MYSQL_USER=user1 MYSQL_PASSWORD=f00bar MYSQL_DATABASE=testdb&lt;br /&gt;
&lt;br /&gt;
=== Change with &#039;&#039;&#039;patch&#039;&#039;&#039; command ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch single value of resource:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch installplan install-defgh -n openshift-operators-redhat --type merge  --patch &#039;{&amp;quot;spec&amp;quot;:{&amp;quot;approved&amp;quot;:true}}&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch resource by help of a file:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge mc 99-worker-ssh --patch-file=/tmp/patch_mc-worker-ssh.yaml&lt;br /&gt;
&lt;br /&gt;
Content of &#039;&#039;patch_mc-worker-ssh.yaml&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  config:&lt;br /&gt;
    passwd:&lt;br /&gt;
      users:&lt;br /&gt;
      - name: core&lt;br /&gt;
        sshAuthorizedKeys:&lt;br /&gt;
        - |&lt;br /&gt;
          ssh-rsa AAAAB3NzaZ1yc2EAAAADAQABAAABAQDOMsVGOvN3ap+MWr7eqZpBfDLTcmFdKhozJGStwXsTrP6QJYlxwP1ITZH7tPMfD0zkHu+y7XzcPqybwmnK4hPhuzxUl4qXqdTkTUUJjy3eVPk7n3RHHdsI2yS5YnlcySnTvkYAOuMStDDhN1MF6xOwxqXOq6xalzZzt7j/MtcceHxIdB19i0Fp4XYRTfv9p3UTFFkP9DoRnspNI0TtIg8YfzYcHJy/bDhEfi6+t0UBcksUqVWpVY2jX2Nco1qfC+/E2ooWalMzYUsB4ctU4OqiLd5qxmMevn9J+knPVhiWLE41d7dReVHkNyao2HZUH1r6E6B7/n/m0+XS0qJeA0Hh testy@pc01&lt;br /&gt;
          ssh-rsa AAABBBCCC0815....QWertzu007Xx foobar@pc02&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Attention: the former content of &#039;&#039;&#039;sshAuthorizedKeys&#039;&#039;&#039; will be overwritten!&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Patch secret with base64 encoded data:&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;Create yaml file with content:&lt;br /&gt;
 $ head /tmp/alertmanager.yaml&lt;br /&gt;
 global:&lt;br /&gt;
  resolve_timeout: 5m&lt;br /&gt;
  smtp_from: openshift-admin@example.de&lt;br /&gt;
  smtp_smarthost: &#039;loghorst.example.de:25&#039;&lt;br /&gt;
  smtp_hello: localhost&lt;br /&gt;
  (...)&lt;br /&gt;
 $ tail /tmp/alertmanager.yaml&lt;br /&gt;
 (...)&lt;br /&gt;
 time_intervals:&lt;br /&gt;
  - name: work_hours&lt;br /&gt;
    time_intervals:&lt;br /&gt;
      - weekdays: [&amp;quot;monday:friday&amp;quot;]&lt;br /&gt;
        times:&lt;br /&gt;
          - start_time: &amp;quot;07:00&amp;quot;&lt;br /&gt;
            end_time: &amp;quot;17:00&amp;quot;&lt;br /&gt;
        location: Europe/Zurich&lt;br /&gt;
&lt;br /&gt;
 $ oc patch secret alertmanager-main -p &#039;{&amp;quot;data&amp;quot;: {&amp;quot;config.yaml&amp;quot;: &amp;quot;&#039;$(base64 -w0 /tmp/alertmanager.yaml)&#039;&amp;quot;}}&#039;&lt;br /&gt;
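The inline $(base64 -w0 ...) simply produces the encoded value that a secret&#039;s data field expects. A minimal local round trip (with a stand-in string instead of the real /tmp/alertmanager.yaml) shows the idea; note that -w0 (disable line wrapping) is GNU coreutils, BSD base64 has no such flag:

```shell
# Stand-in for the alertmanager config file content:
config='global:
  resolve_timeout: 5m'
encoded=$(printf '%s' "$config" | base64 -w0)   # what goes into .data."config.yaml"
decoded=$(printf '%s' "$encoded" | base64 -d)   # what the cluster decodes again
printf '%s\n' "$decoded"
```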
&lt;br /&gt;
==== Examples ====&lt;br /&gt;
&#039;&#039;&#039;Set master/worker to (un)paused:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;paused&amp;quot;:false}}&#039; machineconfigpool/{master,worker}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Set maximum number of unavailable workers to 2:&#039;&#039;&#039;&lt;br /&gt;
 $ oc patch --type=merge --patch=&#039;{&amp;quot;spec&amp;quot;:{&amp;quot;maxUnavailable&amp;quot;:2}}&#039; machineconfigpool/worker&lt;br /&gt;
(default=1)&lt;br /&gt;
&lt;br /&gt;
=== Restart deployment after change ===&lt;br /&gt;
&lt;br /&gt;
 $ oc rollout restart deployment testdeploy&lt;br /&gt;
&lt;br /&gt;
(obsolete:&lt;br /&gt;
&amp;lt;br&amp;gt;the deployment resource has no rollout option -&amp;gt; You must patch something before it restarts e.g.:&lt;br /&gt;
 $ oc patch deployment testdeploy --patch &amp;quot;{\&amp;quot;spec\&amp;quot;:{\&amp;quot;template\&amp;quot;:{\&amp;quot;metadata\&amp;quot;:{\&amp;quot;annotations\&amp;quot;:{\&amp;quot;last-restart\&amp;quot;:\&amp;quot;`date +&#039;%s&#039;`\&amp;quot;}}}}}&amp;quot;&lt;br /&gt;
)&lt;br /&gt;
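What that obsolete one-liner builds can be seen in isolation: a throwaway annotation whose value is the current epoch second, so the pod template changes and a new rollout is triggered. This sketch only constructs the patch payload, no cluster needed:

```shell
# Build the same patch payload locally:
ts=$(date +%s)
patch="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"last-restart\":\"$ts\"}}}}}"
echo "$patch"
```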
&lt;br /&gt;
=== Scaling resources ===&lt;br /&gt;
Scale number of machines/nodes up/down:&lt;br /&gt;
 $ oc scale --replicas=2 machineset &amp;lt;machineset&amp;gt; -n openshift-machine-api&lt;br /&gt;
&lt;br /&gt;
=== Draining nodes ===&lt;br /&gt;
Empty a node and put it into maintenance mode (e.g. before rebooting)&lt;br /&gt;
 $ oc adm cordon &amp;lt;node1&amp;gt; (not necessary when you drain it - it will be cordoned anyway)&lt;br /&gt;
 $ oc adm drain &amp;lt;node1&amp;gt; --delete-emptydir-data=true --ignore-daemonsets=true&lt;br /&gt;
&lt;br /&gt;
After reboot:&lt;br /&gt;
 $ oc adm uncordon &amp;lt;node1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Logging =&lt;br /&gt;
&lt;br /&gt;
Watch logs of a certain pod (or container)&lt;br /&gt;
 $ oc logs &amp;lt;podname&amp;gt; (-c &amp;lt;container&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
Debug pod (e.g. if crashloopbacked):&lt;br /&gt;
 $ oc debug pod/&amp;lt;podname&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Node logs of systemunit crio:&lt;br /&gt;
 $ oc adm node-logs master01 -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
The same of all masters:&lt;br /&gt;
 $ oc adm node-logs --role master -u crio --tail 2&lt;br /&gt;
&lt;br /&gt;
Liveness/Readiness probes of all pods at a certain timestamp:&lt;br /&gt;
 $ oc adm node-logs --role worker -u kubelet | grep -E &#039;Liveness|Readiness&#039; | grep &amp;quot;Aug 21 11:22&amp;quot;&lt;br /&gt;
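The filter part of that pipeline can be tested on stand-in log lines without a cluster:

```shell
# Three fake kubelet lines; only Liveness/Readiness entries from 11:22 survive:
printf '%s\n' \
  'Aug 21 11:22:01 kubelet: Liveness probe failed' \
  'Aug 21 11:22:02 kubelet: Startup probe ok' \
  'Aug 21 11:23:03 kubelet: Readiness probe failed' \
  | grep -E 'Liveness|Readiness' | grep 'Aug 21 11:22'
# prints only the first line
```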
&lt;br /&gt;
Space allocation of logging:&lt;br /&gt;
 $ POD=elasticsearch-cdm-&amp;lt;ID&amp;gt;&lt;br /&gt;
 $ oc -n openshift-logging exec $POD -c elasticsearch -- es_util --query=_cat/allocation?v\&amp;amp;pretty=true&lt;br /&gt;
&lt;br /&gt;
Watch audit logs:&lt;br /&gt;
 $ oc adm node-logs --role=master --path=openshift-apiserver/&lt;br /&gt;
&lt;br /&gt;
Watch audit.log from certain node:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log&lt;br /&gt;
&lt;br /&gt;
Search string:&lt;br /&gt;
 $ oc adm node-logs ocp-abcdf-master-0 --path=openshift-apiserver/audit-2023-09-26T14-11-04.448.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot;)&#039;&lt;br /&gt;
 $ oc adm node-logs ocp-46578-master-1 --path=openshift-apiserver/audit.log | jq &#039;select(.verb == &amp;quot;delete&amp;quot; and .objectRef.resource != &amp;quot;routes&amp;quot; and .objectRef.resource != &amp;quot;templateinstances&amp;quot; and .objectRef.resource != &amp;quot;rolebindings&amp;quot; )&#039; &lt;br /&gt;
&lt;br /&gt;
Source:&amp;lt;br&amp;gt;&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/security/audit-log-view.html&lt;br /&gt;
&lt;br /&gt;
= Information gathering =&lt;br /&gt;
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/support/gathering-cluster-data#support_gathering_data_gathering-cluster-data&lt;br /&gt;
&lt;br /&gt;
== Must-gather ==&lt;br /&gt;
 $ oc adm must-gather&lt;br /&gt;
-&amp;gt; creates must-gather.local.XXXXXX&lt;br /&gt;
&lt;br /&gt;
https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect&lt;br /&gt;
(remove secrets from the output if necessary!)&lt;br /&gt;
&lt;br /&gt;
== SOS Report ==&lt;br /&gt;
https://access.redhat.com/solutions/4387261&lt;br /&gt;
&lt;br /&gt;
== Inspect ==&lt;br /&gt;
Get information resource-wise and for a certain period:&lt;br /&gt;
 $ oc adm inspect clusteroperator/kube-apiserver --dest-dir /tmp/kube-apiserver --since 1m&lt;br /&gt;
&lt;br /&gt;
= Special cases =&lt;br /&gt;
&lt;br /&gt;
== Namespace not deletable ==&lt;br /&gt;
Namespace gets stuck in status terminating&lt;br /&gt;
&lt;br /&gt;
Watch out for secrets that are left over and not deletable.&lt;br /&gt;
Set the finalizers to an empty list:&lt;br /&gt;
 $ oc patch secrets $SECRET -n ocp-cluster-iam-entw  -p &#039;{&amp;quot;metadata&amp;quot;:{&amp;quot;finalizers&amp;quot;:[]}}&#039; --type=merge&lt;br /&gt;
&lt;br /&gt;
== Run containers as root ==&lt;br /&gt;
This should only be done as a last resort or for temporary tests, as attackers could theoretically break out of the container and become root on the node.&lt;br /&gt;
&lt;br /&gt;
In the deployment add the following lines under the &amp;quot;spec&amp;quot; statement:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: &amp;lt;container&amp;gt;&lt;br /&gt;
    securityContext:&lt;br /&gt;
      runAsUser: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must grant the anyuid SCC to the serviceaccount under which the deployment runs. If nothing else is configured, this is normally the default serviceaccount:&lt;br /&gt;
 # oc project &amp;lt;myproject&amp;gt;&lt;br /&gt;
 # oc adm policy add-scc-to-user anyuid -z default&lt;br /&gt;
&lt;br /&gt;
= App URLs =&lt;br /&gt;
== Kibana ==&lt;br /&gt;
https://kibana-openshift-logging.apps.ocp.example.com/&lt;br /&gt;
&lt;br /&gt;
== ArgoCD ==&lt;br /&gt;
https://openshift-gitops-server-openshift-gitops.apps.ocp.example.com&lt;br /&gt;
&lt;br /&gt;
= Useful terms =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IPI&#039;&#039;&#039;       Installer-provisioned infrastructure cluster&amp;lt;br&amp;gt;&lt;br /&gt;
Cluster installed by the install command; the user only has to provide some information (platform, cluster name, network, storage, ...)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPI&#039;&#039;&#039;     User provisioned infrastructure cluster&lt;br /&gt;
* DNS and Loadbalancing must already be there&lt;br /&gt;
* Manual installation; download the OVA file (in the case of vSphere)&lt;br /&gt;
* Masters created manually&lt;br /&gt;
* workers recommended&lt;br /&gt;
* *no* keepalived&lt;br /&gt;
&lt;br /&gt;
Advantages:&amp;lt;br&amp;gt;&lt;br /&gt;
IPI: simpler installation, using preconfigured features&amp;lt;br&amp;gt;&lt;br /&gt;
UPI: more flexibility, no loadbalancer outage during update&lt;br /&gt;
&lt;br /&gt;
A change from IPI to UPI is not possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can get more shortcuts by typing:&lt;br /&gt;
 $ oc api-resources&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| cm || config map&lt;br /&gt;
|-&lt;br /&gt;
| csv || cluster service version&lt;br /&gt;
|-&lt;br /&gt;
| dc || deploymentconfig&lt;br /&gt;
|-&lt;br /&gt;
| ds || daemonset&lt;br /&gt;
|-&lt;br /&gt;
| ip ||installplan&lt;br /&gt;
|-&lt;br /&gt;
| mcp || machineconfigpool&lt;br /&gt;
|-&lt;br /&gt;
| pv || persistent volume&lt;br /&gt;
|-&lt;br /&gt;
| sa || service account&lt;br /&gt;
|-&lt;br /&gt;
| scc || security context constraints&lt;br /&gt;
|-&lt;br /&gt;
| svc || service&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sunflower</name></author>
	</entry>
</feed>