Configure a Pod to Use a PersistentVolume for Storage

This section shows how to configure a Pod to use a PersistentVolumeClaim for storage.

Here is a summary of the process:

  1. A cluster administrator creates a PersistentVolume that is backed by physical storage. The administrator does not associate the volume with any Pod.

  2. A cluster user creates a PersistentVolumeClaim, which gets automatically bound to a suitable PersistentVolume.

  3. The user creates a Pod that uses the PersistentVolumeClaim as storage.

Create a PersistentVolume

Kubernetes supports hostPath for development and testing on a single-node cluster.

A hostPath PersistentVolume uses a file or directory on the Node to emulate network-attached storage.

In a production cluster, you would not use hostPath.

Instead, a cluster administrator would provision a network resource such as a Google Compute Engine persistent disk, an NFS share, or an Amazon Elastic Block Store volume.

Cluster administrators can also use StorageClasses to set up dynamic provisioning.
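Dynamic provisioning is driven by a StorageClass object. The following is a minimal sketch of such a StorageClass; the name fast, the GCE persistent-disk provisioner, and the pd-ssd parameter are illustrative and depend on your cloud provider:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd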

# Create the file to be served.

# Open a shell to the Node in your cluster.

# Create a /mnt/data directory:
mkdir /mnt/data

# In the /mnt/data directory, create an index.html file:
echo 'Hello from Kubernetes storage' > /mnt/data/index.html

  

kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

The above is the configuration file for the hostPath PersistentVolume.

The configuration file specifies that the volume is at /mnt/data on the cluster’s Node.

The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the volume can be mounted as read-write by a single Node.

It defines the StorageClass name manual for the PersistentVolume, which will be used to bind PersistentVolumeClaim requests to this PersistentVolume.
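Create the PersistentVolume from the manifest above; the file name pv-volume.yaml is an assumption here:

kubectl create -f pv-volume.yaml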

# View information about the PersistentVolume:
kubectl get pv task-pv-volume

The output shows that the PersistentVolume has a STATUS of Available. This means it has not yet been bound to a PersistentVolumeClaim.  

Create a PersistentVolumeClaim

Pods use PersistentVolumeClaims to request physical storage.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

The above is the configuration file for the PersistentVolumeClaim.

It requests a volume of at least three gibibytes that can provide read-write access for at least one Node.

After you create the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim’s requirements.

If the control plane finds a suitable PersistentVolume with the same StorageClass, it binds the claim to the volume.
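Create the PersistentVolumeClaim from the manifest above (the file name pv-claim.yaml is assumed):

kubectl create -f pv-claim.yaml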

# Look again at the PersistentVolume:
kubectl get pv task-pv-volume
# The output now shows a STATUS of Bound.

# Look at the PersistentVolumeClaim:
kubectl get pvc task-pv-claim
# The output shows that the PersistentVolumeClaim is bound to your PersistentVolume, task-pv-volume.

  

Create a Pod

Next, create a Pod that uses your PersistentVolumeClaim as a volume.

kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

Notice that the Pod's configuration file specifies a PersistentVolumeClaim, but it does not specify a PersistentVolume.

From the Pod’s point of view, the claim is a volume.  
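Create the Pod from the manifest above (the file name pv-pod.yaml is assumed):

kubectl create -f pv-pod.yaml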

# Verify that the Container in the Pod is running:
kubectl get pod task-pv-pod

# Get a shell to the Container running in your Pod:
kubectl exec -it task-pv-pod -- /bin/bash

# In your shell, verify that nginx is serving the index.html file from the hostPath volume:
root@task-pv-pod:/# apt-get update
root@task-pv-pod:/# apt-get install curl
root@task-pv-pod:/# curl localhost

# The output shows the text that you wrote to the index.html file on the hostPath volume:
Hello from Kubernetes storage

  

Access control

Storage configured with a group ID (GID) allows writing only by Pods using the same GID.

Mismatched or missing GIDs cause permission denied errors.

 

To reduce the need for coordination with users, an administrator can annotate a PersistentVolume with a GID.

Then the GID is automatically added to any Pod that uses the PersistentVolume.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv1
  annotations:
    pv.beta.kubernetes.io/gid: "1234"

Use the pv.beta.kubernetes.io/gid annotation  

When a Pod consumes a PersistentVolume that has a GID annotation, the annotated GID is applied to all Containers in the Pod in the same way that GIDs specified in the Pod’s security context are.

Every GID, whether it originates from a PersistentVolume annotation or the Pod’s specification, is applied to the first process run in each Container.
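For reference, here is a minimal sketch of specifying GIDs directly in a Pod's security context with the supplementalGroups field; the Pod name gid-demo, the container name, and the GID 1234 are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: gid-demo
spec:
  securityContext:
    supplementalGroups: [1234]
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]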

Configure a Pod to Use a Projected Volume for Storage

This section shows how to use a projected volume to mount several existing volume sources into the same directory.

Currently, secret, configMap, and downwardAPI volumes can be projected.

Configure a projected volume for a pod

apiVersion: v1
kind: Pod
metadata:
  name: test-projected-volume
spec:
  containers:
    - name: test-projected-volume
      image: busybox
      args:
        - sleep
        - "86400"
      volumeMounts:
        - name: all-in-one
          mountPath: "/projected-volume"
          readOnly: true
  volumes:
    - name: all-in-one
      projected:
        sources:
          - secret:
              name: user
          - secret:
              name: pass

First, create username and password Secrets from local files.

Then create a Pod that runs one Container, using a projected Volume to mount the Secrets into the same shared directory.

# Create the Secrets:

# Create files containing the username and password:
echo -n "admin" > ./username.txt
echo -n "1f2d1e2e67df" > ./password.txt

# Package these files into Secrets:
kubectl create secret generic user --from-file=./username.txt
kubectl create secret generic pass --from-file=./password.txt

# Create the Pod:
kubectl create -f projected-volume.yaml

# Verify that the Pod's Container is running, and then watch for changes to the Pod:
kubectl get --watch pod test-projected-volume

# In another terminal, get a shell to the running Container:
kubectl exec -it test-projected-volume -- /bin/sh

# In your shell, verify that the projected-volume directory contains your projected sources:
ls /projected-volume/
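Because secret, configMap, and downwardAPI sources can be combined in one projected volume, a volumes section like the following sketch is also possible; the ConfigMap named my-config and its config key are illustrative and are not created in this walkthrough:

volumes:
  - name: all-in-one
    projected:
      sources:
        - secret:
            name: user
        - configMap:
            name: my-config
            items:
              - key: config
                path: my-group/my-config
        - downwardAPI:
            items:
              - path: "labels"
                fieldRef:
                  fieldPath: metadata.labels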

 

Configure a Security Context for a Pod or Container

A security context defines privilege and access control settings for a Pod or Container.

Security context settings include:

  • Discretionary Access Control: Permission to access an object, like a file, is based on user ID (UID) and group ID (GID).

  • Security Enhanced Linux (SELinux): Objects are assigned security labels.

  • Running as privileged or unprivileged.

  • Linux Capabilities: Give a process some privileges, but not all the privileges of the root user.

  • AppArmor: Use program profiles to restrict the capabilities of individual programs.

  • Seccomp: Filter a process’s system calls.

  • AllowPrivilegeEscalation: Controls whether a process can gain more privileges than its parent process. This boolean directly controls whether the no_new_privs flag gets set on the container process. AllowPrivilegeEscalation is always true when the container is run as privileged or has the CAP_SYS_ADMIN capability.

Set the security context for a Pod

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  volumes:
    - name: sec-ctx-vol
      emptyDir: {}
  containers:
    - name: sec-ctx-demo
      image: gcr.io/google-samples/node-hello:1.0
      volumeMounts:
        - name: sec-ctx-vol
          mountPath: /data/demo
      securityContext:
        allowPrivilegeEscalation: false

To specify security settings for a Pod, include the securityContext field in the Pod specification.

The securityContext field is a PodSecurityContext object.

The security settings that you specify for a Pod apply to all Containers in the Pod.

The manifest above is a configuration file for a Pod that has a securityContext and an emptyDir volume.

In the configuration file, the runAsUser field specifies that for any Containers in the Pod, the first process runs with user ID 1000.

The fsGroup field specifies that group ID 2000 is associated with all Containers in the Pod.

Group ID 2000 is also associated with the volume mounted at /data/demo and with any files created in that volume.
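Create the Pod from the manifest above (the file name security-context-demo.yaml is assumed):

kubectl create -f security-context-demo.yaml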

# Get a shell to the running Container:
kubectl exec -it security-context-demo -- sh

# In your shell, list the running processes:
ps aux

# The output shows that the processes are running as user 1000, which is the value of runAsUser:
USER  PID %CPU %MEM    VSZ   RSS TTY STAT START TIME COMMAND
1000    1  0.0  0.0   4336   724 ?   Ss   18:16 0:00 /bin/sh -c node server.js
1000    5  0.2  0.6 772124 22768 ?   Sl   18:16 0:00 node server.js
...

# Navigate to /data, and list the one directory:
cd /data
ls -l

# The output shows that the /data/demo directory has group ID 2000, which is the value of fsGroup:
drwxrwsrwx 2 root 2000 4096 Jun 6 20:08 demo

# Navigate to /data/demo, and create a file:
cd demo
echo hello > testfile

# List the file in the /data/demo directory:
ls -l

# The output shows that testfile has group ID 2000, which is the value of fsGroup:
-rw-r--r-- 1 1000 2000 6 Jun 6 20:08 testfile

# Exit your shell:
exit

  

Set the security context for a Container

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-2
spec:
  securityContext:
    runAsUser: 1000
  containers:
    - name: sec-ctx-demo-2
      image: gcr.io/google-samples/node-hello:1.0
      securityContext:
        runAsUser: 2000
        allowPrivilegeEscalation: false

To specify security settings for a Container, include the securityContext field in the Container manifest. The securityContext field is a SecurityContext object.

Security settings that you specify for a Container apply only to the individual Container, and they override settings made at the Pod level when there is overlap.

Container settings do not affect the Pod’s Volumes.

The above is the configuration file for a Pod that has one Container. Both the Pod and the Container have a securityContext field.

# Get a shell into the running Container:
kubectl exec -it security-context-demo-2 -- sh

# In your shell, list the running processes:
ps aux

# The output shows that the processes are running as user 2000. This is the value of runAsUser
# specified for the Container. It overrides the value 1000 that is specified for the Pod.
USER  PID %CPU %MEM    VSZ   RSS TTY STAT START TIME COMMAND
2000    1  0.0  0.0   4336   764 ?   Ss   20:36 0:00 /bin/sh -c node server.js
2000    8  0.1  0.5 772124 22604 ?   Sl   20:36 0:00 node server.js
...

  

Set capabilities for a Container

With Linux capabilities, you can grant certain privileges to a process without granting all the privileges of the root user.

To add or remove Linux capabilities for a Container, include the capabilities field in the securityContext section of the Container manifest.

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-3
spec:
  containers:
    - name: sec-ctx-3
      image: gcr.io/google-samples/node-hello:1.0

First, see what happens when you don’t include a capabilities field.

The configuration file above does not add or remove any Container capabilities.

# Get a shell into the running Container:
kubectl exec -it security-context-demo-3 -- sh

# In your shell, list the running processes:
ps aux

# The output shows the process IDs (PIDs) for the Container:
USER  PID %CPU %MEM    VSZ   RSS TTY STAT START TIME COMMAND
root    1  0.0  0.0   4336   796 ?   Ss   18:17 0:00 /bin/sh -c node server.js
root    5  0.1  0.5 772124 22700 ?   Sl   18:17 0:00 node server.js

# In your shell, view the status for process 1:
cd /proc/1
cat status

# The output shows the capabilities bitmap for the process:
...
CapPrm: 00000000a80425fb
CapEff: 00000000a80425fb
...

# Make a note of the capabilities bitmap, and then exit your shell:
exit

  

Next, run a Container that is the same as the preceding container, except that it has additional capabilities set.

Here is the configuration file for a Pod that runs one Container. The configuration adds the CAP_NET_ADMIN and CAP_SYS_TIME capabilities:

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-4
spec:
  containers:
    - name: sec-ctx-4
      image: gcr.io/google-samples/node-hello:1.0
      securityContext:
        capabilities:
          add: ["NET_ADMIN", "SYS_TIME"]

 

# Get a shell into the running Container:
kubectl exec -it security-context-demo-4 -- sh

# In your shell, view the capabilities for process 1:
cd /proc/1
cat status

# The output shows the capabilities bitmap for the process:
...
CapPrm: 00000000aa0435fb
CapEff: 00000000aa0435fb
...

# Compare the capabilities of the two Containers:
00000000a80425fb
00000000aa0435fb

In the capability bitmap of the first container, bits 12 and 25 are clear.

In the second container, bits 12 and 25 are set. Bit 12 is CAP_NET_ADMIN, and bit 25 is CAP_SYS_TIME.

See capability.h for definitions of the capability constants.
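As a quick sanity check, you can XOR the two bitmaps in a shell to isolate the bits that differ; if the capsh utility from libcap happens to be installed, it can decode the result into capability names (this step is an illustration, not part of the original walkthrough):

# Isolate the differing bits (prints 2001000, i.e. bits 12 and 25):
printf '%x\n' $(( 0x00000000aa0435fb ^ 0x00000000a80425fb ))
# Decode them into capability names (requires capsh from libcap):
capsh --decode=2001000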

Note:

Linux capability constants have the form CAP_XXX.

But when you list capabilities in your Container manifest, you must omit the CAP_ portion of the constant.

For example, to add CAP_SYS_TIME, include SYS_TIME in your list of capabilities.
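The capabilities field can also drop privileges; a common hardening sketch (not part of the original example) drops everything and adds back only what is needed:

securityContext:
  capabilities:
    drop: ["ALL"]
    add: ["NET_BIND_SERVICE"]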

Assign SELinux labels to a Container

To assign SELinux labels to a Container, include the seLinuxOptions field in the securityContext section of your Pod or Container manifest.

The seLinuxOptions field is an SELinuxOptions object.

Here’s an example that applies an SELinux level:

...
securityContext:
  seLinuxOptions:
    level: "s0:c123,c456"

  

Note:

To assign SELinux labels, the SELinux security module must be loaded on the host operating system.

The security context for a Pod applies to the Pod’s Containers and also to the Pod’s Volumes when applicable.

Specifically, fsGroup and seLinuxOptions are applied to Volumes as follows:

  • fsGroup: Volumes that support ownership management are modified to be owned and writable by the GID specified in fsGroup. See the Ownership Management design document for more details.

  • seLinuxOptions: Volumes that support SELinux labeling are relabeled to be accessible by the label specified under seLinuxOptions. Usually you only need to set the level section. This sets the Multi-Category Security (MCS) label given to all Containers in the Pod as well as the Volumes.

 
