Lab 2: Security Tooling
Polaris
Polaris runs a variety of checks to ensure that Kubernetes pods and controllers are configured using best practices, helping you avoid problems in the future.
You can get more details here.
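To get a feel for what those best practices look like in a manifest, here is a minimal, hypothetical deployment fragment (the names, image, and values are illustrative and are not part of the lab files) showing the kind of settings Polaris typically checks for: a pinned image tag, resource requests/limits, health probes, and a restrictive securityContext. It is only written to a scratch file; nothing is deployed.

```
# Hypothetical example of a deployment that follows common Polaris checks.
cat <<'EOF' > /tmp/polaris-good-practice.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: good-practice-demo        # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: good-practice-demo
  template:
    metadata:
      labels:
        app: good-practice-demo
    spec:
      containers:
      - name: app
        image: example/app:1.0.0              # pinned tag instead of :latest
        resources:                            # explicit requests and limits
          requests: { cpu: 100m, memory: 128Mi }
          limits:   { cpu: 250m, memory: 256Mi }
        livenessProbe:
          httpGet: { path: /healthz, port: 8080 }
        readinessProbe:
          httpGet: { path: /ready, port: 8080 }
        securityContext:
          runAsNonRoot: true
          readOnlyRootFilesystem: true
          allowPrivilegeEscalation: false
EOF
```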
- Install Polaris Dashboard by running:

```
kubectl apply -f ~/training/tools/polaris.yaml
> namespace/polaris created
> configmap/polaris created
> serviceaccount/polaris-dashboard created
> clusterrole.rbac.authorization.k8s.io/polaris-dashboard created
> clusterrolebinding.rbac.authorization.k8s.io/polaris-dashboard created
> service/polaris-dashboard created
> deployment.apps/polaris-dashboard created
```
- Wait until the pod is running:

```
kubectl get pods -n polaris
> NAME                                 READY   STATUS    RESTARTS   AGE
> polaris-dashboard-69f5bc4b5d-8jz24   1/1     Running   0          66s
```
- Once the status reads `Running`, we need to expose the Dashboard as a service so we can access it:

```
kubectl expose deployment polaris-dashboard -n polaris --name polaris-dashboard-service --type="NodePort" --port=8080
> service/polaris-dashboard-service exposed
```
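If you are curious which NodePort was assigned, you can (optionally) inspect the service; the exact port number differs from cluster to cluster:

```
kubectl get service polaris-dashboard-service -n polaris
```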
- The Polaris Dashboard is now running in your cluster and, via the NodePort, reachable from outside the cluster (potentially from the internet, depending on your environment). You can open it as follows:

Training VM

If you are using the training VM, you can open Polaris directly by typing:

```
minikube service polaris-dashboard-service -n polaris
```

Cloud/Standalone

If you are NOT using the training VM, you can open the web page by forwarding a local port to the service in your cluster.

Execute this in a separate terminal window/tab in order to be able to access the web page:

```
kubectl port-forward --namespace polaris $(kubectl get po -n polaris | grep polaris-dashboard | awk '{print $1;}') 8080:8080
```

Then navigate to http://localhost:8080/ in your browser.
- Look around the Dashboard to get familiar with the checks.
- Let's deploy a version of `k8sdemo` that has some more problems by running:

```
kubectl create -f ~/training/deployment/demoapp-errors.yaml
```

This action will take a bit of time. To check the status of the running application, you can use `kubectl get pods`.
- Check out the dashboard for the `k8sdemo-nok` application and you will find that there are a lot more warnings for this deployment.
- Clean up by running:

```
kubectl delete -f ~/training/deployment/demoapp-errors.yaml
kubectl delete -f ~/training/tools/polaris.yaml
```
Now on to the next tool…
Kube Hunter
Kube-hunter hunts for security weaknesses in Kubernetes clusters. The tool was developed to increase awareness and visibility for security issues in Kubernetes environments.
IMPORTANT!!! You should NOT run kube-hunter on a Kubernetes cluster you don’t own!
You can get more details here.
Training VM

If you are using the training VM, `kubehunter` has already been installed in your environment.

Cloud/Standalone

If you are NOT using the training VM, you have to install `kubehunter` as described here: https://github.com/aquasecurity/kube-hunter#deployment
- Let's examine the list of passive tests (non-intrusive, i.e. tests that do not change the cluster state) that kube-hunter runs:

```
~/kube-hunter/kube-hunter.py --list
> Passive Hunters:
> ----------------
> * Mount Hunter - /var/log
> Hunt pods that have write access to host's /var/log. in such case, the pod can traverse read files on the host machine
>
> * Host Discovery when running as pod
> Generates ip adresses to scan, based on cluster/scan type
>
> * API Server Hunter
> Checks if API server is accessible
>
> * K8s CVE Hunter
> Checks if Node is running a Kubernetes version vulnerable to specific important CVEs
>
> * Proxy Discovery
> Checks for the existence of a an open Proxy service
>
> * Pod Capabilities Hunter
> Checks for default enabled capabilities in a pod
>
> * Kubectl CVE Hunter
> Checks if the kubectl client is vulnerable to specific important CVEs
...
```
- Now let's examine the list of active tests (intrusive, i.e. tests that may change the cluster state) that kube-hunter can run:

```
~/kube-hunter/kube-hunter.py --list --active
> Passive Hunters:
> ----------------
...
> Active Hunters:
> ---------------
> * Kubelet System Logs Hunter
> Retrieves commands from host's system audit
>
> * Etcd Remote Access
> Checks for remote write access to etcd- will attempt to add a new key to the etcd DB
>
> * Azure SPN Hunter
> Gets the azure subscription file on the host by executing inside a container
>
> * Kubelet Container Logs Hunter
> Retrieves logs from a random container
>
> * Kubelet Run Hunter
> Executes uname inside of a random container
> ...
```
- Get the Cluster IP (MY-CLUSTER-IP). Use the method that matches your environment:

Minikube in the training VM

```
minikube ip
```

or

IBM Cloud Kubernetes Service

```
ibmcloud ks worker ls --cluster mycluster-free
```
- Now let's run an active and passive test against our minikube cluster:

```
~/kube-hunter/kube-hunter.py --remote <MY-CLUSTER-IP> --active
> ~ Started
> ~ Discovering Open Kubernetes Services...
> |
> | Etcd:
> |   type: open service
> |   service: Etcd
> |_  location: localhost:2379
> |
> | Kubelet API (readonly):
> |   type: open service
> |   service: Kubelet API (readonly)
> |_  location: localhost:10255
...
```
Findings

You should get no findings, meaning that the Minikube instance has been correctly configured.

If you get vulnerabilities like the following, this might be due to the fact that the minikube API by default allows access with the user `system:anonymous`.
```
+-----------------+----------------------+----------------------+----------------------+----------+
| LOCATION        | CATEGORY             | VULNERABILITY        | DESCRIPTION          | EVIDENCE |
+-----------------+----------------------+----------------------+----------------------+----------+
| localhost:10250 | Remote Code          | Anonymous            | The kubelet is       |          |
|                 | Execution            | Authentication       | misconfigured,       |          |
|                 |                      |                      | potentially allowing |          |
|                 |                      |                      | secure access to all |          |
|                 |                      |                      | requests on the      |          |
|                 |                      |                      | kubelet, without the |          |
|                 |                      |                      | need to authenticate |          |
+-----------------+----------------------+----------------------+----------------------+----------+
```
This should (hopefully!) not be the case in your clusters; if it is, it can be remediated by launching minikube with the option `--extra-config=apiserver.anonymous-auth=false`.
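As a sketch of how that remediation would be applied (assuming you can restart or recreate the Minikube cluster, since the flag only takes effect when the API server is started):

```
# Start Minikube with anonymous access to the API server disabled.
minikube start --extra-config=apiserver.anonymous-auth=false
```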
kubesec
kubesec is a utility that performs security risk analysis for Kubernetes resources and tells you what you should change in order to improve their security. It also gives you a score that you can use to enforce a minimum standard. The score incorporates a large number of Kubernetes best practices.
Training VM

If you are using the training VM, `KubeSec` has already been installed in your environment.

Cloud/Standalone

If you are NOT using the training VM, you have to install `KubeSec` as described here: https://github.com/controlplaneio/kubesec/releases
- Launch a test against the demo application:

```
kubesec scan ~/training/deployment/demoapp.yaml
```

The output is in JSON format, which can easily be integrated into a CI/CD process:

```
[
  {
    "object": "Deployment/k8sdemo.default",
    "valid": true,
    "message": "Passed with a score of 4 points",
    "score": 4,
    "scoring": {
      "advise": [
        {
          "selector": ".metadata .annotations .\"container.apparmor.security.beta.kubernetes.io/nginx\"",
          "reason": "Well defined AppArmor policies may provide greater protection from unknown threats. WARNING: NOT PRODUCTION READY",
          "points": 3
        },
        {
          "selector": ".spec .serviceAccountName",
          "reason": "Service accounts restrict Kubernetes API access and should be configured with least privilege",
          "points": 3
        },
        {
          "selector": ".metadata .annotations .\"container.seccomp.security.alpha.kubernetes.io/pod\"",
          "reason": "Seccomp profiles set minimum privilege and secure against unknown threats",
          "points": 1
        },
        {
          "selector": "containers[] .securityContext .capabilities .drop",
          "reason": "Reducing kernel capabilities available to a container limits its attack surface",
          "points": 1
        },
        {
          "selector": "containers[] .securityContext .capabilities .drop | index(\"ALL\")",
          "reason": "Drop all capabilities and add only those required to reduce syscall attack surface",
          "points": 1
        },
        {
          "selector": "containers[] .securityContext .readOnlyRootFilesystem == true",
          "reason": "An immutable root filesystem can prevent malicious binaries being added to PATH and increase attack cost",
          "points": 1
        },
        {
          "selector": "containers[] .securityContext .runAsNonRoot == true",
          "reason": "Force the running image to run as a non-root user to ensure least privilege",
          "points": 1
        },
        {
          "selector": "containers[] .securityContext .runAsUser -gt 10000",
          "reason": "Run as a high-UID user to avoid conflicts with the host's user table",
          "points": 1
        }
      ]
    }
  }
]
```
- Launch a test against a really vulnerable app:

```
kubesec scan ~/training/kubesec/critical.yaml
```

Again, the output is in JSON format; this time the scan fails with a negative score:

```
[
  {
    "object": "Pod/kubesec-test.default",
    "valid": true,
    "message": "Failed with a score of -37 points",
    "score": -37,
    "scoring": {
      "critical": [
        {
          "selector": "containers[] .securityContext .privileged == true",
          "reason": "Privileged containers can allow almost completely unrestricted host access",
          "points": -30
        },
        {
          "selector": "containers[] .securityContext .allowPrivilegeEscalation == true",
          "reason": "",
          "points": -7
        }
      ],
      "advise": [
        {
          ...
```
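The `advise` entries from the first scan point at concrete hardening steps. As a minimal sketch (the Pod name, service account, and image below are hypothetical, and this assumes kubesec will read `/dev/stdin` like any other file path), a spec that addresses most of them could look like this:

```
# Hypothetical hardened Pod, piped straight into kubesec for scoring.
cat <<'EOF' | kubesec scan /dev/stdin
apiVersion: v1
kind: Pod
metadata:
  name: kubesec-hardened            # hypothetical name
spec:
  serviceAccountName: demo-sa       # hypothetical least-privilege service account
  containers:
  - name: app
    image: example/app:1.0.0        # hypothetical image
    securityContext:
      runAsNonRoot: true
      runAsUser: 10001              # high UID, per the "runAsUser -gt 10000" advice
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
EOF
```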
kubesec gives us a simple way to check deployment manifests early on and to integrate that check into a CI/CD process at build time. KubeSec can run in different ways (command line, Docker container, and even as a Kubernetes admission hook) in order to facilitate that integration.
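As a sketch of such a build-time gate (assuming `jq` is available; the threshold value is arbitrary and just for illustration), a pipeline step could extract the score from the JSON output and fail the build when it drops too low:

```
# Hypothetical CI step: reject manifests whose kubesec score is below a minimum.
THRESHOLD=0                                   # illustrative minimum score
SCORE=$(kubesec scan ~/training/deployment/demoapp.yaml | jq '.[0].score')

if [ "$SCORE" -lt "$THRESHOLD" ]; then
  echo "kubesec score $SCORE is below the required minimum of $THRESHOLD"
  exit 1
fi
echo "kubesec score $SCORE meets the minimum of $THRESHOLD"
```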
conftest
conftest is a utility to help you write tests against structured configuration data. For instance, you can write tests for your Kubernetes configurations, Tekton pipeline definitions, Terraform code, Serverless configs, or any other structured data.
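To make "writing tests" concrete: conftest policies are written in Rego, the Open Policy Agent language. The snippet below is a minimal, hypothetical rule kept in a scratch directory so it does not interfere with the lab's policy folder; the file name and the rule itself are illustrative only.

```
# Create a scratch policy directory and a single hypothetical rule.
mkdir -p /tmp/conftest-demo/policy

cat <<'EOF' > /tmp/conftest-demo/policy/deny_latest_tag.rego
package main

# Fail any Deployment whose containers use the mutable ":latest" image tag.
deny[msg] {
  input.kind == "Deployment"
  container := input.spec.template.spec.containers[_]
  endswith(container.image, ":latest")
  msg := sprintf("container '%s' must not use the ':latest' tag", [container.name])
}
EOF
```

Once conftest is installed (see below), you could point it at this directory with `conftest test -p /tmp/conftest-demo/policy <manifest>`.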
Training VM

If you are using the training VM, `ConfTest` has already been installed in your environment.

Cloud/Standalone

If you are NOT using the training VM, you have to install `ConfTest` as described here: https://www.conftest.dev/install/
- Launch a test against the demo application:

```
conftest test -p ~/training/conftest/src/examples/kubernetes/policy ~/training/deployment/demoapp.yaml
> FAIL - /home/training/training/deployment/demoapp.yaml - Containers must not run as root in Deployment k8sdemo
> FAIL - /home/training/training/deployment/demoapp.yaml - Deployment k8sdemo must provide app/release labels for pod selectors
> FAIL - /home/training/training/deployment/demoapp.yaml - k8sdemo must include Kubernetes recommended labels: https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/#labels
```
- Launch a test against all the files for the demo application. The output format here is set to TAP (Test Anything Protocol):

```
conftest test -p ~/training/conftest/src/examples/kubernetes/policy --output=tap ~/training/deployment/*.yaml
> 1..3
> not ok 1 - /home/training/training/deployment/demoapp-backend.yaml - Containers must not run as root in Deployment k8sdemo-backend
> not ok 2 - /home/training/training/deployment/demoapp-backend.yaml - Deployment k8sdemo-backend must provide app/release labels for pod selectors
> not ok 3 - /home/training/training/deployment/demoapp-backend.yaml - k8sdemo-backend must include Kubernetes recommended labels: https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/#labels
> 1..3
> not ok 1 - /home/training/training/deployment/demoapp-errors.yaml - Containers must not run as root in Deployment k8sdemo-nok
> not ok 2 - /home/training/training/deployment/demoapp-errors.yaml - Deployment k8sdemo-nok must provide app/release labels for pod selectors
> not ok 3 - /home/training/training/deployment/demoapp-errors.yaml - k8sdemo-nok must include Kubernetes recommended labels: https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/#labels
> 1..1
> # Warnings
> not ok 1 - /home/training/training/deployment/demoapp-service.yaml - Found service k8sdemo-service but services are not allowed
> 1..3
> not ok 1 - /home/training/training/deployment/demoapp.yaml - Containers must not run as root in Deployment k8sdemo
> not ok 2 - /home/training/training/deployment/demoapp.yaml - Deployment k8sdemo must provide app/release labels for pod selectors
> not ok 3 - /home/training/training/deployment/demoapp.yaml - k8sdemo must include Kubernetes recommended labels: https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/#labels
> 1..1
> # Warnings
> not ok 1 - /home/training/training/deployment/demoapp-backend-service.yaml - Found service k8sdemo-backend-service but services are not allowed
```
- Launch a test against a sample Dockerfile. The output format here is set to JSON:

```
conftest test -p ~/training/conftest/src/examples/docker/policy --output=json ~/training/conftest/src/examples/docker/Dockerfile
```

A blacklisted base image has been detected:

```
[
  {
    "filename": "/home/training/training/conftest/src/examples/docker/Dockerfile",
    "Warnings": [],
    "Failures": [
      "blacklisted image found [\"openjdk:8-jdk-alpine\"]"
    ],
    "Successes": []
  }
]
```
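Like kubesec, conftest slots naturally into a pipeline: it exits with a non-zero status code when any policy fails, so a CI step can gate on the command itself without parsing the output. A minimal sketch, reusing the lab's policy and manifest paths:

```
# Hypothetical CI step: a non-zero exit code from conftest breaks the build.
if ! conftest test -p ~/training/conftest/src/examples/kubernetes/policy ~/training/deployment/demoapp.yaml; then
  echo "conftest found policy violations - failing the build"
  exit 1
fi
```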
Congratulations!!!
This concludes Lab 2 on Security Tooling.