Kyverno: Simplifying Kubernetes Policy Management
1. Introduction to Kyverno
What is Kyverno, and why is it important?
Kyverno is a Kubernetes-native policy engine designed to simplify the management of security, compliance, and configuration policies in Kubernetes clusters. Unlike traditional policy engines that require complex custom coding, Kyverno uses simple YAML configurations that are easy for Kubernetes users to write, read, and maintain. In essence, it is an abstraction layer over that custom coding: you don't need to think about the implementation and can simply use it as a tool to deploy your policies.
Many of you might be wondering what policies are. Policies are sets of rules and validation checks that Kyverno enforces in your cluster, so that every request to the cluster can be filtered and, where needed, modified; if everything checks out, the request is passed on to the cluster for further processing.
Why is Kyverno important?
Kubernetes Security and Governance: Kyverno helps enforce security policies, ensuring workloads comply with organizational and regulatory standards.
Automation of Best Practices: It simplifies enforcing Kubernetes best practices by applying policies across the cluster.
Ease of Use: Designed for Kubernetes users, Kyverno eliminates the need to learn a new policy language, relying instead on native Kubernetes constructs like Custom Resource Definitions (CRDs).
Declarative and Kubernetes-Native: Kyverno seamlessly integrates with Kubernetes, enabling users to work within the same ecosystem they're familiar with.
2. Key Features and Advantages of Kyverno Over Other Policy Engines
Key Features
Policy Types:
Validation: Ensures Kubernetes resources meet specific criteria (e.g., disallow privileged containers).
Mutation: Automatically modifies resources to conform to organizational standards (e.g., adding labels to pods).
Generation: Creates new resources or updates existing ones automatically (e.g., generating ConfigMaps or NetworkPolicies).
Kyverno serves as a Kubernetes policy engine that leverages Custom Resource Definitions (CRDs) to define and enforce policies. While Kubernetes already has an inbuilt policy mechanism through admission controllers, Kyverno stands out by offering additional features like validation, mutation, and generation, all implemented as CRDs, making it more flexible and user-friendly.
Kubernetes-Native Approach:
Kyverno uses CRDs to define policies, making it more suitable for Kubernetes users compared to other engines like Open Policy Agent (OPA), which requires Rego, a separate policy language.
Dynamic Context Support:
Policies can fetch external data dynamically (e.g., from a ConfigMap or Secret) to make real-time decisions.
Built-In CLI Tooling:
Kyverno provides a CLI for testing, validating, and applying policies locally, aiding CI/CD pipeline integration.
Audit Mode:
Policies can be applied in audit mode, helping organizations understand potential violations without blocking resources.
Open Source:
Backed by an active community, Kyverno is free to use and constantly evolving with new features. For details on upcoming features, check the official docs.
Advantages Over OPA/Gatekeeper
Feature | Kyverno | OPA/Gatekeeper |
Policy Language | YAML (familiar to Kubernetes users) | Rego (custom policy language) |
Ease of Use | Intuitive for Kubernetes practitioners | Steeper learning curve with Rego |
Policy Types | Validation, Mutation, Generation | Validation (mutation added in later versions) |
Kubernetes-Native | Fully integrates with Kubernetes CRDs | Requires additional setup |
Dynamic Data Fetch | Supports context-based decisions | Limited support |
3. How Kyverno Fits into Kubernetes for Policy Management
Kyverno acts as an admission controller in Kubernetes, intercepting API server requests to enforce or validate policies before changes are applied to the cluster. It integrates seamlessly into the Kubernetes lifecycle, ensuring compliance without disrupting operations.
Common Use Cases in Kubernetes
Security Policies: Enforce restrictions on container images, disallow privileged containers, and manage RBAC rules.
Configuration Consistency: Automatically add labels, annotations, or resource limits to workloads.
Multi-Tenancy: Enforce namespace isolation and ensure compliance across different tenants in a shared cluster.
Automation: Generate or patch resources like ConfigMaps, Secrets, or NetworkPolicies dynamically.
Kyverno Workflow
Policy Creation: Users define policies as YAML manifests.
Policy Application: Kyverno applies these policies cluster-wide or to specific namespaces.
Validation and Feedback: Kyverno validates resources, provides feedback, and blocks non-compliant resources if necessary.
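As a sketch of what the first step produces, a minimal validation policy might look like the following (the policy and rule names here are illustrative; the policy types themselves are covered in detail later):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label   # illustrative policy name
spec:
  rules:
  - name: check-app-label
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Pods must have an 'app' label."
      pattern:
        metadata:
          labels:
            app: "?*"       # any non-empty value
```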
2. Installing and Setting Up Kyverno
Kyverno is easy to install and integrate into a Kubernetes cluster. This section covers the prerequisites, installation methods, and steps to verify your installation.
1. Prerequisites for Installing Kyverno
Before you install Kyverno, ensure the following prerequisites are met:
Kubernetes Cluster
A working Kubernetes cluster (v1.25 or later) is required. Kyverno relies on the Kubernetes API for its operations; check the official docs for version compatibility details.
You can use any Kubernetes provider like Minikube, KIND, or a managed service like GKE, AKS, or EKS.
kubectl
Install and configure kubectl (the Kubernetes CLI) on your machine to interact with the cluster. Verify your Kubernetes connection using:
kubectl cluster-info
Helm (Optional)
If you prefer installing Kyverno via Helm, ensure Helm is installed.
Verify Helm using:
helm version
2. Step-by-Step Guide for Installing Kyverno
Kyverno can be installed using two primary methods: Helm and YAML Manifests. Below are the detailed instructions for both:
Method 1: Installing Kyverno with Helm
Helm is a package manager for Kubernetes and offers an easy way to install Kyverno.
Add the Kyverno Helm Repository and Install:
Run the following commands to add the Kyverno Helm chart repository and deploy Kyverno.
Standalone Installation:
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno kyverno/kyverno -n kyverno --create-namespace
High-Availability Installation:
helm install kyverno kyverno/kyverno -n kyverno --create-namespace \
  --set admissionController.replicas=3 \
  --set backgroundController.replicas=2 \
  --set cleanupController.replicas=2 \
  --set reportsController.replicas=2
For more information, see the official installation docs.
Install Kyverno:
Use the helm install command to deploy Kyverno:
helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace
Verify the Installation:
Ensure Kyverno is deployed successfully by checking the kyverno namespace:
kubectl get pods -n kyverno
Method 2: Installing Kyverno with YAML Manifests
If you prefer not to use Helm, you can directly apply the official Kyverno YAML manifests.
Download the Installation File:
Use the following command to download and apply the Kyverno manifests:
kubectl create -f https://github.com/kyverno/kyverno/releases/download/v1.11.1/install.yaml
Verify the Installation:
Check that the Kyverno components (Deployments, Pods, etc.) are running in the kyverno namespace:
kubectl get all -n kyverno
3. Verifying Kyverno Installation in Your Kubernetes Cluster
Once Kyverno is installed, verify its functionality using the following steps:
Check Kyverno Components:
List all running Kyverno Pods:
kubectl get pods -n kyverno
Example output:
NAME                       READY   STATUS    RESTARTS   AGE
kyverno-6bfc5f7b94-wxjzq   1/1     Running   0          5m
Verify Kyverno’s Admission Controller:
Kyverno operates as a Kubernetes admission controller. Verify the webhook configurations:
kubectl get validatingwebhookconfigurations
Example output:
NAME                                    WEBHOOKS   AGE
kyverno-policy-validating-webhook-cfg   1          5m
Test Kyverno Policies:
Apply a sample Kyverno policy to ensure it's working:
kubectl apply -f https://raw.githubusercontent.com/kyverno/policies/main/best-practices/disallow-latest-tag/disallow-latest-tag.yaml
This policy prevents the use of the latest tag in container images.
Validate Policy Enforcement:
Test the policy by creating a pod with the latest tag:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
Apply this YAML file:
kubectl apply -f test-pod.yaml
Kyverno should reject the pod with an error message indicating the policy violation.
3. Understanding Kyverno Policies
Kyverno policies are Kubernetes-native resources that enable the enforcement, modification, and generation of configurations within a Kubernetes cluster. They are designed to manage and secure Kubernetes workloads using a declarative approach, without requiring users to learn a complex query language.
What Are Kyverno Policies and How Do They Work?
A Kyverno policy defines a set of rules that operate on Kubernetes resources. These policies are implemented as Custom Resource Definitions (CRDs), making them behave like any other Kubernetes resource. Kyverno policies work by leveraging Kubernetes admission controllers to evaluate and enforce rules during resource creation, updates, or deletions.
Key Highlights:
Policies are written in YAML, making them intuitive and Kubernetes-native.
They allow administrators to enforce best practices, automatically fix configurations, and create new resources when needed.
Policies operate at three levels:
a. Validation: Ensure resources meet compliance standards.
b. Mutation: Modify resources to conform to desired configurations.
c. Generation: Automatically create or manage related resources.
Types of Kyverno Policies
1. Validation Policies: Ensuring Compliance
Validation policies enforce rules that check if Kubernetes resources comply with predefined conditions. These policies reject resources that do not meet the criteria.
Example Use Case:
Prevent pods from using the latest image tag, ensuring immutable deployments.
Validation Policy Example:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  rules:
  - name: validate-latest-tag
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Using 'latest' tag is not allowed."
      pattern:
        spec:
          containers:
          - image: "!*:latest"
Rule: Rejects Pods using the latest tag in container images.
Outcome: Any resource violating the policy is blocked during admission.
2. Mutation Policies: Modifying Resources Automatically
Mutation policies modify resource configurations to meet desired specifications. They are helpful for applying default values or fixing non-compliant configurations.
Example Use Case:
Automatically add resource limits and requests to Pods if not specified.
Mutation Policy Example:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-resource-limits
spec:
  rules:
  - name: add-resource-limits
    match:
      resources:
        kinds:
        - Pod
    mutate:
      patchStrategicMerge:
        spec:
          containers:
          - resources:
              limits:
                memory: "512Mi"
                cpu: "500m"
              requests:
                memory: "256Mi"
                cpu: "250m"
Rule: Adds default resource limits and requests to all containers in a Pod.
Outcome: Pods missing resource configurations are automatically updated.
3. Generation Policies: Creating Resources on the Fly
Generation policies automatically create or update related resources when a specific resource is applied. This is useful for automating resource dependencies.
Example Use Case:
Automatically create a ConfigMap when a namespace is created.
Generation Policy Example:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-configmap
spec:
  rules:
  - name: generate-default-configmap
    match:
      resources:
        kinds:
        - Namespace
    generate:
      kind: ConfigMap
      name: default-config
      namespace: "{{request.object.metadata.name}}"
      data:
        app: "my-app"
Rule: Creates a ConfigMap named default-config in any newly created namespace.
Outcome: Ensures consistent resource creation across namespaces.
Anatomy of a Kyverno Policy YAML File
A Kyverno policy consists of the following key components:
apiVersion: Specifies the API version (always kyverno.io/v1).
kind: Defines the type of Kyverno resource (Policy or ClusterPolicy).
metadata: Contains metadata about the policy, such as its name.
spec: Defines the rules and behavior of the policy.
Detailed Example of a Kyverno Policy:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-repositories
spec:
  rules:
  - name: validate-image-registry
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Images must be from 'xyzcompany.com' registry only."
      pattern:
        spec:
          containers:
          - image: "xyzcompany.com/*"
Explanation of Each Section:
Field | Description |
apiVersion | Specifies the version of the Kyverno API. |
kind | Indicates the type of policy (Policy or ClusterPolicy ). |
metadata | Metadata for identifying the policy, including its name . |
spec | Defines the operational logic of the policy, including rules and behavior. |
rules | A list of rules that match, validate, mutate, or generate resources. |
match.resources | Specifies which Kubernetes resources the rule applies to. |
validate/mutate/etc. | Defines the action to be taken (validate, mutate, or generate resources). |
4. Writing Your First Policy
Kyverno makes it easy to write and manage policies using simple YAML syntax. In this section, you’ll learn how to create a basic validation policy, apply it to your Kubernetes cluster, and test it with resources.
Step 1: Creating a Simple Validation Policy
For our first policy, we will create a validation policy that prevents Pods from using the latest tag in container images. Using the latest tag is considered a bad practice because it leads to unpredictable behavior during deployments.
Policy YAML Example:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  rules:
  - name: validate-latest-tag
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Using the 'latest' image tag is not allowed. Please use a specific version."
      pattern:
        spec:
          containers:
          - image: "!*:latest"
Explanation of the Policy:
Field | Description |
apiVersion | Specifies the API version (kyverno.io/v1 ). |
kind | Declares the policy type (ClusterPolicy ). |
metadata.name | The name of the policy (disallow-latest-tag ). |
spec.rules.name | The name of the rule (validate-latest-tag ). |
match.resources | Defines which resources the rule applies to (in this case, Pods). |
validate.message | The message shown to users when the rule is violated. |
validate.pattern | Defines the rule logic (ensures the container image tag is not latest ). |
Step 2: Applying the Policy to a Kubernetes Cluster
To apply the policy to your Kubernetes cluster, follow these steps:
Save the Policy YAML File
Save the above YAML content to a file named disallow-latest-tag.yaml.
Apply the Policy
Use kubectl to apply the policy to your cluster:
kubectl apply -f disallow-latest-tag.yaml
Verify the Policy Installation
Check if the policy was installed successfully:
kubectl get clusterpolicy
You should see the disallow-latest-tag policy listed.
Step 3: Testing the Policy with Different Resources
After applying the policy, you need to test it by creating Kubernetes resources to see if the policy works as intended.
Test Case 1: Violating the Policy
Create a Pod YAML with the latest Tag:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-container
    image: nginx:latest
Apply the Pod YAML:
kubectl apply -f test-pod.yaml
Expected Outcome:
The Pod creation should be rejected, and Kyverno should return the validation message defined in the policy.
Test Case 2: Complying with the Policy
Create a Pod YAML with a Specific Tag:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-valid
spec:
  containers:
  - name: test-container
    image: nginx:1.21.1
Apply the Pod YAML:
kubectl apply -f test-pod-valid.yaml
Expected Outcome:
The Pod creation will succeed, as it complies with the policy:
pod/test-pod-valid created
Troubleshooting Tips
Check Kyverno Logs
If the policy doesn't behave as expected, check Kyverno logs for errors:
kubectl logs -n kyverno deploy/kyverno
Simulate Policies Using Kyverno CLI
Use the Kyverno CLI to simulate policy behavior before applying it to the cluster:
kyverno apply disallow-latest-tag.yaml --resource test-pod.yaml
Policy Status Check
Verify the status of your policy to ensure it's active:
kubectl get clusterpolicy disallow-latest-tag -o yaml
5. Common Use Cases of Kyverno
Kyverno is a powerful Kubernetes-native policy engine that can automate and enforce compliance policies. Here, we'll explore some common use cases with examples and YAML configurations.
1. Enforcing Labels or Annotations on Pods and Namespaces
Organizations often require labels or annotations on resources for tracking, auditing, or cost management purposes. For example, all Pods must have a team label specifying the owning team.
Policy Example: Require team Label on Pods
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  rules:
  - name: validate-team-label
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "All Pods must have a 'team' label."
      pattern:
        metadata:
          labels:
            team: "?*"
Explanation:
The match section specifies that this policy applies to Pods.
The validate rule checks if the team label exists.
If the label is missing, the Pod creation is denied with a clear error message.
Testing:
Try creating a Pod without the team label; it will fail.
Add the required label and retry; it will succeed.
2. Restricting Image Registries or Image Tags
To ensure the security of container images, you may want to restrict which registries or image tags can be used.
Policy Example: Allow Only Specific Image Registries
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registry
spec:
  rules:
  - name: validate-image-registry
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Only images from 'mycompany.registry.com' are allowed."
      pattern:
        spec:
          containers:
          - image: "mycompany.registry.com/*"
Policy Example: Disallow the latest Tag
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  rules:
  - name: validate-image-tag
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Using the 'latest' image tag is not allowed."
      pattern:
        spec:
          containers:
          - image: "!*:latest"
3. Ensuring Resource Limits and Requests Are Defined
Defining resource limits (cpu and memory) ensures fair resource allocation and avoids noisy-neighbour problems. This policy ensures all Pods define both limits and requests.
Policy Example: Validate Resource Requests and Limits
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-resource-limits
spec:
  rules:
  - name: validate-resource-limits
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Resource limits and requests must be specified for all containers."
      pattern:
        spec:
          containers:
          - resources:
              limits:
                cpu: "?*"
                memory: "?*"
              requests:
                cpu: "?*"
                memory: "?*"
Explanation:
The ?* wildcard ensures the resource values are defined but does not validate specific values.
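For illustration, a Pod such as the following sketch (the values are only examples) would satisfy the policy:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limits-ok        # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.21.1
    resources:
      limits:
        cpu: "500m"
        memory: "512Mi"
      requests:
        cpu: "250m"
        memory: "256Mi"
```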
4. Automating the Creation of ConfigMaps or Secrets
With Kyverno’s generation policies, you can automatically create or synchronize resources such as ConfigMaps or Secrets.
Policy Example: Auto-Generate a Namespace-Specific ConfigMap
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-configmap
spec:
  rules:
  - name: create-namespace-config
    match:
      resources:
        kinds:
        - Namespace
    generate:
      kind: ConfigMap
      name: default-config
      namespace: "{{request.object.metadata.name}}"
      data:
        app-config: |
          key1: value1
          key2: value2
Explanation:
When a new namespace is created, this policy automatically generates a ConfigMap named default-config in that namespace.
The data field defines the key-value pairs for the ConfigMap.
6. Policy Mutation and Patching in Kyverno
Mutation and patching are two powerful features in Kyverno that allow Kubernetes users to modify resources dynamically as they are created or updated. These features are particularly useful for enforcing consistency, setting default configurations, and automating repetitive tasks in Kubernetes clusters.
What is Mutation in Kyverno?
Mutation in Kyverno allows you to modify Kubernetes resources before they are persisted in the cluster. This can include adding default values, updating labels, or patching specific fields of a resource. Mutation rules can be defined as part of a Kyverno policy, and they work seamlessly with other policy types like validation and generation.
Why Use Mutation in Kyverno?
Enforcing Standards: Automatically apply labels, annotations, or configuration settings to ensure resources adhere to organizational policies.
Simplifying Management: Automate repetitive configuration tasks, reducing manual errors.
Default Configurations: Set default values for fields when users omit them.
Types of Mutations
Add or Modify Fields: Adding or modifying fields in a Kubernetes resource.
Patch Existing Resources: Using a JSON Patch or Strategic Merge Patch to update resource configurations.
Conditional Mutation: Applying mutations only when certain conditions are met.
Writing a Mutation Policy
A mutation policy in Kyverno is defined using the mutate field within a policy rule. Here's a breakdown of how to write a mutation policy:
Example 1: Adding Labels to Pods
This policy adds a default label (environment: dev) to all pods in the default namespace if the label is not already present.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-label
spec:
  rules:
  - name: add-environment-label
    match:
      resources:
        kinds:
        - Pod
        namespaces:
        - default
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            environment: dev
Explanation:
The match block ensures the policy only applies to pods in the default namespace.
The mutate block uses patchStrategicMerge to add the environment: dev label if it's not already defined.
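To sketch the effect, assuming the policy above is active, a Pod submitted without labels in the default namespace would be persisted with the label merged in (the Pod name is illustrative):

```yaml
# Submitted Pod (no labels):
apiVersion: v1
kind: Pod
metadata:
  name: demo             # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.21.1
---
# Pod as persisted after mutation (sketch):
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    environment: dev     # added by the mutation policy
spec:
  containers:
  - name: app
    image: nginx:1.21.1
```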
Example 2: Adding Default Resource Limits
This policy ensures all pods in the cluster have default CPU and memory resource limits if none are specified.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-resource-limits
spec:
  rules:
  - name: default-resource-limits
    match:
      resources:
        kinds:
        - Pod
    mutate:
      patchStrategicMerge:
        spec:
          containers:
          - (name): "*"
            resources:
              limits:
                memory: "512Mi"
                cpu: "500m"
Explanation:
The wildcard (name): "*" matches all container names within a pod.
Default resource limits (memory and cpu) are added if they are not already present.
Patching with Kyverno
Kyverno supports two types of patching:
Strategic Merge Patch:
A Kubernetes-native approach to selectively modify parts of a resource.
Allows merging new fields without overwriting existing ones.
JSON Patch:
A more granular approach where specific operations (e.g., add, replace, remove) are applied to JSON paths within a resource.
Example: Patching Using JSON Patch
The following policy ensures that all pods in the default namespace run with a specific service account (default-sa).
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: set-service-account
spec:
  rules:
  - name: set-default-sa
    match:
      resources:
        kinds:
        - Pod
        namespaces:
        - default
    mutate:
      patchesJson6902: |-
        - op: replace
          path: "/spec/serviceAccountName"
          value: "default-sa"
Explanation:
The patchesJson6902 field specifies a JSON Patch operation.
This patch replaces the serviceAccountName field of all pods in the default namespace with default-sa.
Conditional Mutation Using Preconditions
You can make mutations conditional by using preconditions. Preconditions ensure that a mutation is only applied if specific conditions are met.
Example: Adding an Annotation Based on a Label
This policy adds an annotation to pods if they have the label team: dev.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-annotation-conditionally
spec:
  rules:
  - name: add-annotation-if-label
    match:
      resources:
        kinds:
        - Pod
    preconditions:
      all:
      - key: "{{ request.object.metadata.labels.team }}"
        operator: Equals
        value: dev
    mutate:
      patchStrategicMerge:
        metadata:
          annotations:
            managed-by: "Kyverno"
Explanation:
The preconditions block checks if the label team: dev exists.
If true, the annotation managed-by: Kyverno is added to the pod.
Testing Mutation Policies
You can validate and test mutation policies using the Kyverno CLI. Here's how:
Install Kyverno CLI: Follow the installation instructions for the Kyverno CLI.
Test a Policy Locally: Run the following command to test your mutation policy on a sample resource file:
kyverno apply <policy.yaml> --resource <resource.yaml>
Review Results: The CLI will output the modified resource, showing the applied mutations.
Best Practices for Mutation Policies
Scope Policies Carefully:
Use match and exclude blocks to target specific resources, namespaces, or labels.
Avoid broad policies that could unintentionally modify unrelated resources.
Test Policies Thoroughly:
Use the Kyverno CLI to test mutation policies with different resource configurations.
Deploy policies in audit mode to monitor their impact before enforcing them.
Combine with Validation:
- Combine mutation with validation to ensure consistency and enforce standards.
Document Policies:
- Clearly document the purpose and scope of each mutation policy for easier maintenance.
7. Advanced Policy Features
Kyverno offers advanced features that allow you to create dynamic, context-aware policies for managing Kubernetes resources. These features include conditional policies, variables, JMESPath expressions, and external data fetching to make policies highly flexible and powerful.
1. Conditional Policies Using Preconditions
Preconditions allow you to apply rules conditionally based on resource properties. This is useful when you want a rule to be enforced only if specific criteria are met.
Example: Apply Policy Only to Namespaces with a Specific Label
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  rules:
  - name: check-team-label
    match:
      resources:
        kinds:
        - Namespace
    preconditions:
      all:
      - key: "{{ request.object.metadata.labels.env }}"
        operator: Equals
        value: "production"
    validate:
      message: "All production namespaces must have a 'team' label."
      pattern:
        metadata:
          labels:
            team: "?*"
Explanation:
The preconditions block ensures this policy applies only to namespaces with the env=production label.
If the condition is met, the rule checks whether the team label exists.
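For instance, a namespace like this hypothetical one matches the precondition and would need the team label to be admitted:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments            # illustrative name
  labels:
    env: production         # triggers the precondition
    team: payments-squad    # required by the validate rule
```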
2. Variables and JMESPath Expressions in Policies
Kyverno allows the use of variables and JMESPath expressions to extract or manipulate resource data dynamically. This makes policies more flexible by adapting to specific resource attributes.
Variables in Kyverno
Variables in Kyverno are placeholders that are replaced with actual values from the resource or request.
Example: Dynamic Namespace Label Validation
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: validate-ns-labels
spec:
  rules:
  - name: dynamic-label-check
    match:
      resources:
        kinds:
        - Namespace
    validate:
      message: "Namespace '{{ request.object.metadata.name }}' must have a 'project' label."
      pattern:
        metadata:
          labels:
            project: "{{ request.object.metadata.name }}"
Explanation:
{{ request.object.metadata.name }} dynamically inserts the namespace name into the policy message and validation logic.
This ensures that the project label matches the namespace name.
Using JMESPath Expressions
JMESPath is a query language that lets you extract and manipulate JSON data. Kyverno supports JMESPath to transform or validate resource properties.
Example: Ensure All Containers Have CPU Requests Equal to or Greater Than 100m
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-cpu-requests
spec:
  rules:
  - name: validate-cpu-requests
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "All containers must request at least 100m CPU."
      deny:
        conditions:
          any:
          - key: "{{ request.object.spec.containers[?resources.requests.cpu < '100m'] | length(@) }}"
            operator: GreaterThan
            value: 0
Explanation:
The JMESPath expression checks if any container's CPU request is less than 100m.
If the condition is true, the Pod creation is denied.
3. Using Context to Fetch External Data for Policies
The context feature in Kyverno allows you to fetch data from external sources like ConfigMaps, Secrets, or even HTTP endpoints. This is useful for policies that rely on external configuration or dynamic inputs.
Example: Fetch Data from a ConfigMap
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: fetch-configmap-data
spec:
  rules:
  - name: validate-external-data
    match:
      resources:
        kinds:
        - Pod
    context:
    - name: teamConfig
      configMap:
        name: team-label-config
        namespace: default
    validate:
      message: "Pod must have a valid 'team' label."
      pattern:
        metadata:
          labels:
            team: "{{ teamConfig.data.allowedTeam }}"
Explanation:
The context block fetches data from a ConfigMap named team-label-config.
The allowedTeam value from the ConfigMap is used to validate the team label on Pods.
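For this rule to work, a ConfigMap like the following sketch must exist in the default namespace (the allowedTeam value shown is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: team-label-config
  namespace: default
data:
  allowedTeam: "platform"   # illustrative value consumed by the policy
```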
Example: Fetch Data from an HTTP Endpoint
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: fetch-http-data
spec:
  rules:
  - name: validate-http-data
    match:
      resources:
        kinds:
        - Pod
    context:
    - name: externalData
      apiCall:
        method: GET
        service:
          url: "https://example.com/api/allowed-teams"
    validate:
      message: "Pod must belong to a valid team."
      pattern:
        metadata:
          labels:
            team: "{{ externalData.teams[*] }}"
Explanation:
The context block makes an HTTP GET request to fetch data from an external API.
The response is dynamically used to validate the team label.
8. Validating and Testing Policies
Before deploying Kyverno policies to production, it’s crucial to validate and test them to ensure they behave as expected. Kyverno provides tools and methods to validate, test, and simulate policies locally or in your Kubernetes cluster without impacting live resources.
1. Tools for Validating Your Kyverno Policies
a. Validating Policies with kubectl
Kyverno policies are written as Kubernetes custom resources (CRDs), so they can be validated using kubectl. This basic validation ensures the policy YAML is correctly formatted and complies with Kubernetes API standards.
kubectl apply -f policy.yaml --dry-run=client
b. Kyverno CLI
The Kyverno CLI is a powerful tool for validating policies and testing them against resource manifests locally. The CLI ensures policies are well-formed and can detect syntax errors or invalid configurations.
Command to validate a policy file:
kyverno validate policy.yaml
If the policy contains errors, the CLI will display details, making it easy to fix issues before deployment.
2. Testing Policies Locally Using the Kyverno CLI
The Kyverno CLI allows you to test policies against sample resources locally, without requiring a Kubernetes cluster. This ensures policies behave as expected in various scenarios.
a. Setting Up Test Cases
To test policies, create a test case directory with:
The policy file (policy.yaml)
Test resources (YAML files representing Kubernetes resources)
A test case configuration file (test.yaml)
Example Directory Structure
policy-example/
├── policy.yaml
├── resources/
│ ├── valid-pod.yaml
│ ├── invalid-pod.yaml
└── test.yaml
b. Writing a Test Case File
The test.yaml file defines the resources to be tested and the expected results.
Example: test.yaml
name: Test Pod Validation Policy
policies:
- policy.yaml
resources:
- resources/valid-pod.yaml
- resources/invalid-pod.yaml
results:
- resource: valid-pod.yaml
  rule: "validate-pod-labels"
  status: pass
- resource: invalid-pod.yaml
  rule: "validate-pod-labels"
  status: fail
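The referenced resource files could look like the following sketches (they assume the hypothetical validate-pod-labels rule requires a team label):

```yaml
# resources/valid-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: valid-pod
  labels:
    team: platform       # satisfies the label rule
spec:
  containers:
  - name: app
    image: nginx:1.21.1
---
# resources/invalid-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: invalid-pod      # no 'team' label; expected to fail
spec:
  containers:
  - name: app
    image: nginx:1.21.1
```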
c. Running Tests
Run the tests using the kyverno test command:
kyverno test ./policy-example/
CLI Output
The output will show whether the policy passed or failed for each resource. Fix any errors in the policy or resources as needed.
3. Running Policies in Non-Enforcing Mode (Audit Mode)
When deploying a new policy, you can run it in audit mode to observe its effects without blocking or mutating resources. This is ideal for testing policies in a live environment without impacting resource creation or modification.
Enabling Audit Mode
To enable audit mode for a policy, set the validationFailureAction field to audit.
Example:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: validate-pod-labels
spec:
  validationFailureAction: audit
  rules:
  - name: validate-labels
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Pods must have a 'team' label."
      pattern:
        metadata:
          labels:
            team: "?*"
Behavior in Audit Mode
The policy does not block or mutate resources.
Violations are logged in the Kyverno policy report but do not affect resource creation.
Use this mode to gather feedback and refine policies before enforcing them.
Viewing Audit Logs
You can check policy violations in audit mode using Kyverno policy reports:
kubectl get polr
9. Policy Exceptions and Overrides
When working with Kyverno in dynamic Kubernetes environments, there may be scenarios where certain resources or namespaces need to be excluded from specific policies. Kyverno provides mechanisms to define exceptions and create namespace-specific policies to cater to such use cases.
1. Defining Exceptions for Specific Resources or Namespaces
a. Exclude Resources Using the exclude Field
Kyverno policies allow you to define exceptions for specific resources by using the exclude field within a rule. This field specifies the conditions under which the rule will not apply.
Example: Excluding a Namespace
This example policy enforces labels on Pods but excludes the kube-system namespace:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-pod-labels
spec:
  validationFailureAction: enforce
  rules:
  - name: check-pod-labels
    match:
      resources:
        kinds:
        - Pod
    exclude:
      resources:
        namespaces:
        - kube-system
    validate:
      message: "All Pods must have a 'team' label."
      pattern:
        metadata:
          labels:
            team: "?*"
Explanation:
- match: specifies the resources the policy should apply to (Pods in this case).
- exclude: specifies the namespace (kube-system) to be excluded.
b. Exclude Specific Resources by Name
To exclude specific resources by name, add their names under the exclude field.
Example: Excluding a Specific Pod
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-pod-labels
spec:
  validationFailureAction: enforce
  rules:
  - name: check-pod-labels
    match:
      resources:
        kinds:
        - Pod
    exclude:
      resources:
        names:
        - test-pod
    validate:
      message: "All Pods must have a 'team' label."
      pattern:
        metadata:
          labels:
            team: "?*"
2. Writing Namespace-Specific Policies
a. Using Namespace Matching in Policies
Kyverno policies can be designed to apply only to specific namespaces by specifying the namespace under the match field.
Example: Enforcing Labels Only in the production Namespace
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-prod-labels
spec:
  validationFailureAction: enforce
  rules:
  - name: enforce-labels
    match:
      resources:
        namespaces:
        - production
    validate:
      message: "All resources in the 'production' namespace must have a 'team' label."
      pattern:
        metadata:
          labels:
            team: "?*"
b. Writing Policies for Namespace-Specific Resources
Namespace-specific policies are useful for enforcing standards in particular namespaces, such as requiring stricter validation in production while allowing flexibility in development.
Example: Stricter Resource Limits for the production Namespace
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-production-limits
spec:
  validationFailureAction: enforce
  rules:
  - name: restrict-limits
    match:
      resources:
        kinds:
        - Pod
        namespaces:
        - production
    validate:
      message: "Pods in the production namespace must have strict resource limits."
      pattern:
        spec:
          containers:
          - resources:
              limits:
                memory: "256Mi"
                cpu: "500m"
              requests:
                memory: "128Mi"
                cpu: "250m"
3. Combining exclude and Namespace-Specific Rules
In more complex setups, you might need to apply policies to all namespaces except certain ones. This can be achieved by combining the match and exclude fields.
Example: Applying a Policy to All Namespaces Except development
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-global-policy
spec:
  validationFailureAction: enforce
  rules:
  - name: global-policy
    match:
      resources:
        kinds:
        - Pod
    exclude:
      resources:
        namespaces:
        - development
    validate:
      message: "Pods must have a 'team' label."
      pattern:
        metadata:
          labels:
            team: "?*"
4. Practical Applications of Policy Exceptions and Overrides
- Use Case: Excluding System Namespaces. Exclude critical namespaces like kube-system, kube-public, or namespaces used by Kubernetes operators to prevent unintended disruptions.
- Use Case: Differentiating Policies for Environments. Apply stricter policies in production and more lenient ones in development or testing namespaces.
- Use Case: Temporary Exceptions. Use exclude to temporarily bypass policies for specific resources during debugging or maintenance without disabling the entire policy.
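In addition to the inline exclude field, newer Kyverno releases (1.9 and later) provide a dedicated PolicyException custom resource that grants exemptions without editing the policy itself. The sketch below is illustrative only: the API version and the need to explicitly enable the feature vary by Kyverno version, and all names used here are assumptions.

```yaml
apiVersion: kyverno.io/v2beta1
kind: PolicyException
metadata:
  name: debug-pod-exception        # illustrative name
  namespace: default
spec:
  exceptions:
  - policyName: enforce-pod-labels # the policy to exempt from
    ruleNames:
    - check-pod-labels             # the specific rule(s) to bypass
  match:
    any:
    - resources:
        kinds:
        - Pod
        namespaces:
        - default
        names:
        - debug-pod                # only this Pod is exempted
```

Because exceptions live in their own resource, they can be granted and later revoked without touching the policy, which makes temporary maintenance windows easier to audit.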
Best Practices
- Start with Audit Mode: When creating policies with exceptions or namespace-specific rules, start with validationFailureAction: audit to observe the behavior before enforcing them.
- Keep Policies Modular: Instead of adding multiple match and exclude conditions to a single policy, create separate policies for different namespaces or exceptions for clarity and maintainability.
- Document Exceptions: Clearly document why certain exceptions or overrides are in place to ensure they are revisited during audits or reviews.
- Monitor Policy Reports: Use Kyverno's policy reports to track how exceptions are affecting compliance.
10. Monitoring and Troubleshooting Kyverno
Monitoring and troubleshooting are essential aspects of working with Kyverno, ensuring that policies are functioning as expected and violations are promptly addressed. This section covers how to monitor Kyverno's behavior, debug failed policies, and understand policy violations to implement remediations.
1. Monitoring Kyverno Logs and Events
a. Accessing Kyverno Logs
Kyverno runs as a deployment in the kyverno namespace by default. Logs from the Kyverno pod provide valuable insights into policy evaluations and rule processing.
Steps to Access Logs:
Identify the Kyverno pod:
kubectl get pods -n kyverno
Example output:
NAME                       READY   STATUS    RESTARTS   AGE
kyverno-6d7f7d8b79-xxxxx   1/1     Running   0          10m
Fetch logs from the pod:
kubectl logs -n kyverno kyverno-6d7f7d8b79-xxxxx
For continuous monitoring, use:
kubectl logs -n kyverno kyverno-6d7f7d8b79-xxxxx -f
b. Viewing Policy-Related Events
Kyverno generates Kubernetes events for policy evaluations. These events are linked to resources and provide quick insights into whether a policy was applied successfully or if a violation occurred.
View Events for a Namespace:
kubectl get events -n <namespace>
c. Using Kyverno Policy Reports
Kyverno generates policy reports for applied policies. These reports summarize policy violations and compliance status.
List cluster-wide policy reports:
kubectl get clusterpolicyreports
View policy reports for a specific namespace:
kubectl get policyreports -n <namespace>
Inspect details of a policy report:
kubectl describe clusterpolicyreport <report-name>
2. Debugging Failed Policies
a. Using Kyverno CLI for Local Testing
The Kyverno CLI is a powerful tool for testing policies locally before applying them to a cluster. You can simulate policy application on resources to identify issues.
Install Kyverno CLI: Download and install the CLI from the official documentation.
Test a Policy Locally:
kyverno apply ./policy.yaml --resource ./resource.yaml
- policy.yaml: the policy to be tested.
- resource.yaml: the resource to test it against.
b. Analyze Policy Failure Logs
Check Kyverno logs (from the pod) to identify why a policy failed:
Look for specific rule evaluations.
Identify invalid patterns or missing values in the resource.
c. Review Policy YAML for Errors
Common mistakes in policies:
- Incorrect match or exclude selectors.
- Invalid syntax in patterns or variables.
- Missing or incorrect JMESPath expressions.
d. Testing Policies in Audit Mode
When applying a new or updated policy, start with validationFailureAction: audit. This mode allows policies to detect violations without enforcing them, providing an opportunity to observe and troubleshoot before switching to enforce.
Example Policy in Audit Mode:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-labels
spec:
  validationFailureAction: audit
  rules:
  - name: require-labels
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "All Pods must have a 'team' label."
      pattern:
        metadata:
          labels:
            team: "?*"
3. Understanding Policy Violations and Remediations
a. Identifying Violations
Kyverno policy violations are reported in the following ways:
- Policy Reports: Violations are summarized in ClusterPolicyReport or PolicyReport objects.
- Events: Policy violations generate Kubernetes events attached to resources.
Check Violations in a Policy Report:
kubectl get clusterpolicyreport -o yaml
Sample output:
apiVersion: wgpolicyk8s.io/v1alpha2
kind: ClusterPolicyReport
metadata:
  name: clusterpolicyreport
results:
- policy: enforce-labels
  rule: require-labels
  resources:
  - kind: Pod
    name: test-pod
    namespace: default
  result: fail
  message: "All Pods must have a 'team' label."
b. Remediating Violations
1. Understand the Error Message: Each violation provides a descriptive message explaining the issue (e.g., missing labels, invalid configurations).
2. Update the Resource: Modify the resource to comply with the policy's requirements. For example, add missing labels or update resource limits.
Example: Adding a Missing Label. If the violation message states that a label is missing:
kubectl patch pod test-pod -n default --type=merge -p '{"metadata":{"labels":{"team":"dev"}}}'
3. Test Again: After making changes, reapply the resource and check whether the violation is resolved.
c. Resolving External Data Issues
If the policy uses external data through the context field, ensure that the data source (e.g., a ConfigMap or API) is accessible and contains valid data.
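As a sketch of what such a context-based policy can look like, the rule below loads external data from a ConfigMap and compares a Pod label against it; the names cluster-config and its environment key are assumptions for illustration only:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: validate-environment-label
spec:
  validationFailureAction: audit
  rules:
  - name: check-environment
    match:
      resources:
        kinds:
        - Pod
    context:
    # External data source: a ConfigMap in the default namespace (illustrative names).
    - name: clusterconfig
      configMap:
        name: cluster-config
        namespace: default
    validate:
      message: "Pod 'environment' label must match the value stored in cluster-config."
      pattern:
        metadata:
          labels:
            environment: "{{clusterconfig.data.environment}}"
```

If the ConfigMap is missing or unreachable, the rule cannot evaluate, which is exactly the class of external-data issue to check for here.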
4. Best Practices for Monitoring and Troubleshooting
- Enable Logging: Configure Kyverno with an appropriate log level (e.g., INFO or DEBUG) for better visibility during development or debugging.
- Start in Audit Mode: Always apply new or updated policies in audit mode to monitor their behavior without impacting workloads.
- Test Locally with CLI: Use the Kyverno CLI to test policies with representative resources before deploying them to a cluster.
- Leverage Policy Reports: Regularly review PolicyReport and ClusterPolicyReport objects to ensure compliance and detect violations.
- Document Exceptions: Clearly document the reasons for policy exceptions or overrides to prevent confusion during audits.
13. Comparing Kyverno with OPA/Gatekeeper
Both Kyverno and OPA (Open Policy Agent) with Gatekeeper are powerful Kubernetes policy engines. However, they differ in their architecture, usability, and features. Here's a detailed comparison:
Feature-by-Feature Comparison
| Feature | Kyverno | OPA/Gatekeeper |
|---|---|---|
| Policy Language | YAML (familiar and Kubernetes-native). | Rego (custom policy language; steep learning curve). |
| Policy Types | Validation, Mutation, and Generation. | Validation (mutation is experimental and less mature). |
| Integration with Kubernetes | Built specifically for Kubernetes; policies defined as CRDs. | Generic policy engine (requires Gatekeeper for Kubernetes integration). |
| Ease of Use | Simple YAML-based syntax; Kubernetes-native concepts (e.g., resource kinds). | Requires learning Rego, a domain-specific language (DSL). |
| Mutating Policies | Supports automatic resource mutation (e.g., adding labels, modifying specs). | Mutation support is experimental and less intuitive. |
| Contextual Policies | Fetches external data using context and JMESPath expressions. | Uses Rego with integration to external data sources. |
| Audit Mode | Can run policies in audit mode to observe impacts without enforcing. | Gatekeeper also supports audit mode. |
| CLI Support | Kyverno CLI for local testing, validation, and testing policies offline. | OPA CLI supports policy evaluation but is more generic and requires custom configuration. |
| Learning Curve | Lower (leverages Kubernetes CRDs and YAML). | Higher (requires understanding Rego). |
| Community Support | Actively growing community; Kubernetes SIG support. | Mature, but broader use cases mean Kubernetes-specific support can be slower. |
| Performance | Designed for Kubernetes-specific workloads; efficient handling of large clusters. | More generic; can introduce overhead in Kubernetes environments without careful optimization. |
| Use Cases | Kubernetes-native policies for validation, mutation, and resource generation. | General-purpose policy engine for a wide range of environments (Kubernetes, APIs, microservices, etc.). |
When to Use Kyverno vs. OPA
| Use Kyverno If | Use OPA/Gatekeeper If |
|---|---|
| You need a Kubernetes-native solution. | You need a generic policy engine for non-Kubernetes environments. |
| You prefer writing policies in YAML. | You are comfortable with Rego and prefer more expressive policy writing. |
| You want to use mutation and resource generation features. | You need advanced validation for custom applications beyond Kubernetes. |
| Your team wants an easier-to-learn and Kubernetes-aligned solution. | Your organization already uses OPA for other use cases and wants consistency. |
14. Real-World Examples
Case Studies of Kyverno in Production
Automating Resource Compliance for Multi-Tenant Clusters
Scenario: A company operates a multi-tenant Kubernetes cluster and needs to enforce resource limits and namespace isolation.
Solution: Kyverno enforces validation policies to ensure all deployments specify CPU and memory requests/limits and use unique namespaces.
Outcome: Resource compliance improved, preventing noisy neighbor issues and resource contention.
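A validation rule along these lines (a sketch, not the company's actual policy) could require requests and limits on every container:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-requests-limits
spec:
  validationFailureAction: enforce
  rules:
  - name: check-container-resources
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "CPU and memory requests and limits are required for all containers."
      pattern:
        spec:
          containers:
          # "?*" means the field must be present and non-empty.
          - resources:
              requests:
                memory: "?*"
                cpu: "?*"
              limits:
                memory: "?*"
                cpu: "?*"
```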
Enforcing Security Standards
Scenario: A financial services company needs all images to be pulled from a trusted private registry.
Solution: Kyverno policies validate image registries and deny deployments using untrusted sources.
Outcome: Enhanced security posture and compliance with regulatory requirements.
Streamlining ConfigMap Management
Scenario: A development team frequently needs specific ConfigMaps for application deployment.
Solution: Kyverno generation policies automatically create ConfigMaps when new namespaces are created.
Outcome: Simplified developer workflows and reduced configuration errors.
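A generation rule for this case study could look like the sketch below; the ConfigMap name and its contents are illustrative assumptions:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-default-configmap
spec:
  rules:
  - name: create-app-config
    match:
      resources:
        kinds:
        - Namespace
    generate:
      apiVersion: v1
      kind: ConfigMap
      name: default-app-config                     # illustrative name
      namespace: "{{request.object.metadata.name}}" # the newly created namespace
      synchronize: true                             # keep the generated resource in sync
      data:
        data:
          environment: "dev"                        # illustrative default content
```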
Examples of Complex Policies Solving Real-World Problems
Ensuring TLS Usage for Ingress
Policy to validate that all Ingress resources specify TLS configurations:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-ingress-tls
spec:
  rules:
  - name: validate-ingress-tls
    match:
      resources:
        kinds:
        - Ingress
    validate:
      message: "Ingress resources must specify TLS."
      pattern:
        spec:
          tls:
          - secretName: "?*"
Dynamic Configuration Based on Namespace Labels
- Automatically add labels to all pods based on the namespace's labels.
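One way to sketch this (the team label key is an assumption for illustration) is a mutation rule that fetches the Pod's Namespace via an API call and copies a label from it:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: propagate-namespace-labels
spec:
  rules:
  - name: copy-team-label
    match:
      resources:
        kinds:
        - Pod
    context:
    # Look up the Namespace object for the incoming Pod.
    - name: ns
      apiCall:
        urlPath: "/api/v1/namespaces/{{request.namespace}}"
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            team: "{{ns.metadata.labels.team}}"
```

If the namespace lacks the label, the variable cannot resolve, so in practice you would want to provide a default value or restrict the rule to labeled namespaces.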
Securing Pods with PodSecurity Standards
- Enforce policies based on Kubernetes PodSecurity Standards (restricted, baseline, etc.).
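Kyverno 1.8 and later also provide a dedicated podSecurity validation subrule that applies the upstream Pod Security Standards profiles directly; a minimal sketch:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-baseline-pod-security
spec:
  validationFailureAction: audit  # observe first, then switch to enforce
  rules:
  - name: baseline-profile
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      podSecurity:
        level: baseline   # or "restricted" for the stricter profile
        version: latest
```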
Last but not least, Kyverno is a capable policy management engine that continues to gain strong adoption across the cloud-native ecosystem.