Configure Service Accounts for Pods

A service account provides an identity for processes that run in a Pod.

This is a user introduction to Service Accounts. See also the Cluster Admin Guide to Service Accounts.

When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster).

Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default).
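These credentials come from a service account token mounted into the container's filesystem. As a quick illustration, a process can read its token from the standard mount path:

# run inside a container in the pod:
cat /var/run/secrets/kubernetes.io/serviceaccount/token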

Use the Default Service Account to access the API server.

When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace.

If you get the raw json or yaml for a pod you have created (for example, kubectl get pods/podname -o yaml), you can see the spec.serviceAccountName field has been automatically set.

kubectl get pods platform-website-deployment-77b9cfc887-l9g66 -o yaml -n alpha
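The relevant part of the output is similar to this (truncated here; the exact pod will vary, but the service account was assigned automatically):

spec:
  serviceAccountName: default
  ...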

  

In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
...

In version 1.6+, you can also opt out of automounting API credentials for a particular pod:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
...

The pod spec takes precedence over the service account if both specify an automountServiceAccountToken value.

Use Multiple Service Accounts.

Every namespace has a default service account resource called default.

You can list this and any other serviceAccount resources in the namespace with this command:

# serviceAccounts
kubectl get serviceAccounts

  

You can create additional ServiceAccount objects like this:

$ cat > /tmp/serviceaccount.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
EOF
$ kubectl create -f /tmp/serviceaccount.yaml
serviceaccount "build-robot" created

  

If you get a complete dump of the service account object, like this:

$ kubectl get serviceaccounts/build-robot -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2015-06-16T00:12:59Z
  name: build-robot
  namespace: default
  resourceVersion: "272500"
  selfLink: /api/v1/namespaces/default/serviceaccounts/build-robot
  uid: 721ab723-13bc-11e5-aec2-42010af0021e
secrets:
- name: build-robot-token-bvbk5

then you will see that a token has automatically been created and is referenced by the service account.  

You may use authorization plugins to set permissions on service accounts.

To use a non-default service account, simply set the spec.serviceAccountName field of a pod to the name of the service account you wish to use.

The service account has to exist at the time the pod is created, or it will be rejected.

You cannot update the service account of an already created pod.
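For example, a minimal Pod manifest that uses the build-robot service account might look like this (the nginx image is only a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: build-robot-pod
spec:
  serviceAccountName: build-robot
  containers:
  - name: main
    image: nginx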

You can clean up the service account from this example like this:

$ kubectl delete serviceaccount/build-robot

  

Manually create a service account API token.

Suppose we have an existing service account named “build-robot” as mentioned above, and we create a new secret manually.

$ cat > /tmp/build-robot-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: build-robot-secret
  annotations:
    kubernetes.io/service-account.name: build-robot
type: kubernetes.io/service-account-token
EOF
$ kubectl create -f /tmp/build-robot-secret.yaml
secret "build-robot-secret" created

  

Now you can confirm that the newly built secret is populated with an API token for the “build-robot” service account.

Any tokens for non-existent service accounts will be cleaned up by the token controller.

$ kubectl describe secrets/build-robot-secret
Name:        build-robot-secret
Namespace:   default
Labels:      <none>
Annotations: kubernetes.io/service-account.name=build-robot
             kubernetes.io/service-account.uid=da68f9c6-9d26-11e7-b84e-002dc52800da

Type: kubernetes.io/service-account-token

Data
====
ca.crt:    1338 bytes
namespace: 7 bytes
token:     ...

  

Add ImagePullSecrets to a service account

First, create an imagePullSecret, as described here.
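One common way to create such a secret is with kubectl create secret docker-registry (every --docker-* value below is a placeholder):

kubectl create secret docker-registry myregistrykey \
  --docker-server=DOCKER_REGISTRY_SERVER \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL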

Next, verify it has been created. For example:

$ kubectl get secrets myregistrykey
NAME            TYPE                             DATA      AGE
myregistrykey   kubernetes.io/dockerconfigjson   1         1d

  

Next, modify the default service account for the namespace to use this secret as an imagePullSecret.

kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'

  

Interactive version requiring manual edit:

$ kubectl get serviceaccounts default -o yaml > ./sa.yaml

$ cat sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2015-08-07T22:02:39Z
  name: default
  namespace: default
  resourceVersion: "243024"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6
secrets:
- name: default-token-uudge

$ vi sa.yaml
[editor session not shown]
[delete line with key "resourceVersion"]
[add lines with "imagePullSecrets:"]

$ cat sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2015-08-07T22:02:39Z
  name: default
  namespace: default
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 052fb0f4-3d50-11e5-b066-42010af0d7b6
secrets:
- name: default-token-uudge
imagePullSecrets:
- name: myregistrykey

$ kubectl replace serviceaccount default -f ./sa.yaml
serviceaccounts/default

  

Now, any new pods created in the current namespace will have this added to their spec:

spec:
  imagePullSecrets:
  - name: myregistrykey
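To spot-check this behavior, you could create a throwaway pod and read the field back (the pod name here is arbitrary):

kubectl run nginx --image=nginx --restart=Never
kubectl get pod nginx -o jsonpath='{.spec.imagePullSecrets}'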

  

Pull an Image from a Private Registry

This section shows how to create a Pod that uses a Secret to pull an image from a private Docker registry or repository.

Log in to Docker

You must authenticate with a registry in order to pull a private image:

docker login

When prompted, enter your Docker username and password.

The login process creates or updates a config.json file that holds an authorization token.

View the config.json file:

cat ~/.docker/config.json

  

The output contains a section similar to this:

{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "c3R...zE2"
        }
    }
}

 

Note:

If you use a Docker credentials store, you won't see that auth entry, but a credsStore entry with the name of the store as its value.

Create a Secret in the cluster that holds your authorization token

A Kubernetes cluster uses a Secret of docker-registry type to authenticate with a container registry to pull a private image.

Create this Secret, naming it regcred:

kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>

where:

  • <your-registry-server> is your Private Docker Registry FQDN (use https://index.docker.io/v1/ for DockerHub).
  • <your-name> is your Docker username.
  • <your-pword> is your Docker password.
  • <your-email> is your Docker email.


Inspecting the Secret regcred

To understand the contents of the regcred Secret you just created, start by viewing the Secret in YAML format:

kubectl get secret regcred --output=yaml

#The output is similar to this:
apiVersion: v1
data:
  .dockerconfigjson: eyJodHRwczovL2luZGV4L ... J0QUl6RTIifX0=
kind: Secret
metadata:
  ...
  name: regcred
  ...
type: kubernetes.io/dockerconfigjson

The value of the .dockerconfigjson field is a base64 representation of your Docker credentials.  

To understand what is in the .dockerconfigjson field, convert the secret data to a readable format:

kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 -d 

The output is similar to this:

{"auths":{"yourprivateregistry.com":{"username":"janedoe","password":"xxxxxxxxxxx","email":"jdoe@example.com","auth":"c3R...zE2"}}}

  

To understand what is in the auth field, convert the base64-encoded data to a readable format:

echo "c3R...zE2" | base64 -d

The output, username and password concatenated with a :, is similar to this:

janedoe:xxxxxxxxxxx

  

Notice that the Secret data contains the authorization token similar to your local ~/.docker/config.json file.

You have successfully set your Docker credentials as a Secret called regcred in the cluster.

Create a Pod that uses your Secret

Here is a configuration file for a Pod that needs access to your Docker credentials in regcred:

apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred

  

Download the above file:

wget -O my-private-reg-pod.yaml https://k8s.io/docs/tasks/configure-pod-container/private-reg-pod.yaml

  

In file my-private-reg-pod.yaml, replace <your-private-image> with the path to an image in a private registry such as:

janedoe/jdoe-private:v1
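If you prefer to script the edit rather than do it by hand, a sed one-liner can perform the substitution (GNU sed shown; uses the example image above):

sed -i 's|<your-private-image>|janedoe/jdoe-private:v1|' my-private-reg-pod.yaml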

  

To pull the image from the private registry, Kubernetes needs credentials.

The imagePullSecrets field in the configuration file specifies that Kubernetes should get the credentials from a Secret named regcred.

 

Create a Pod that uses your Secret, and verify that the Pod is running:

kubectl create -f my-private-reg-pod.yaml
kubectl get pod private-reg

  

Configure Liveness and Readiness Probes

This section shows how to configure liveness and readiness probes for Containers.

The kubelet uses liveness probes to know when to restart a Container.

For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress.

Restarting a Container in such a state can help to make the application more available despite bugs.

The kubelet uses readiness probes to know when a Container is ready to start accepting traffic.

A Pod is considered ready when all of its Containers are ready.

One use of this signal is to control which Pods are used as backends for Services.

When a Pod is not ready, it is removed from Service load balancers.

Define a liveness command

Many applications running for long periods of time eventually transition to broken states, and cannot recover except by being restarted. Kubernetes provides liveness probes to detect and remedy such situations.

In this exercise, you create a Pod that runs a Container based on the k8s.gcr.io/busybox image. Here is the configuration file for the Pod:

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5

  

In the configuration file, you can see that the Pod has a single Container.

The periodSeconds field specifies that the kubelet should perform a liveness probe every 5 seconds.

The initialDelaySeconds field tells the kubelet that it should wait 5 seconds before performing the first probe.

To perform a probe, the kubelet executes the command cat /tmp/healthy in the Container. If the command succeeds, it returns 0, and the kubelet considers the Container to be alive and healthy. If the command returns a non-zero value, the kubelet kills the Container and restarts it.

#When the Container starts, it executes this command:
/bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"

  

For the first 30 seconds of the Container’s life, there is a /tmp/healthy file.

So during the first 30 seconds, the command cat /tmp/healthy returns a success code.

After 30 seconds, cat /tmp/healthy returns a failure code.

#Create the Pod:
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/exec-liveness.yaml

#Within 30 seconds, view the Pod events:
kubectl describe pod liveness-exec

#The output indicates that no liveness probes have failed yet:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
24s 24s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox"
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox"
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
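While the pod is within its first 30 seconds, you can also run the probe command yourself to see it succeed (after 30 seconds the same command fails):

kubectl exec liveness-exec -- cat /tmp/healthy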

  

#After 35 seconds, view the Pod events again:
kubectl describe pod liveness-exec

#At the bottom of the output, there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated.
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox"
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox"
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
2s 2s 1 {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory

  

#Wait another 30 seconds, and verify that the Container has been restarted:
kubectl get pod liveness-exec

#The output shows that RESTARTS has been incremented:
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 1 1m

  

Define a liveness HTTP request

Another kind of liveness probe uses an HTTP GET request.

Here is the configuration file for a Pod that runs a container based on the k8s.gcr.io/liveness image.

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 3
      periodSeconds: 3

  

In the configuration file, you can see that the Pod has a single Container.

The periodSeconds field specifies that the kubelet should perform a liveness probe every 3 seconds.

The initialDelaySeconds field tells the kubelet that it should wait 3 seconds before performing the first probe.

To perform a probe, the kubelet sends an HTTP GET request to the server that is running in the Container and listening on port 8080. If the handler for the server’s /healthz path returns a success code, the kubelet considers the Container to be alive and healthy. If the handler returns a failure code, the kubelet kills the Container and restarts it.

Any code greater than or equal to 200 and less than 400 indicates success. Any other code indicates failure.

You can see the source code for the server in server.go.

For the first 10 seconds that the Container is alive, the /healthz handler returns a status of 200. After that, the handler returns a status of 500.

http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
    duration := time.Now().Sub(started)
    if duration.Seconds() > 10 {
        w.WriteHeader(500)
        w.Write([]byte(fmt.Sprintf("error: %v", duration.Seconds())))
    } else {
        w.WriteHeader(200)
        w.Write([]byte("ok"))
    }
})

The kubelet starts performing health checks 3 seconds after the Container starts.

So the first couple of health checks will succeed.

But after 10 seconds, the health checks will fail, and the kubelet will kill and restart the Container.  

#To try the HTTP liveness check, create a Pod:
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/http-liveness.yaml

#After 10 seconds, view Pod events to verify that liveness probes have failed and the Container has been restarted:
kubectl describe pod liveness-http
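To see what the probe sees, you could also send the same request manually from inside the cluster — a sketch, where <pod-ip> is a placeholder you would look up first:

kubectl get pod liveness-http -o wide   # note the pod IP
# from a node or another pod in the cluster:
curl -H "X-Custom-Header: Awesome" http://<pod-ip>:8080/healthz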

  

Define a TCP liveness probe

A third type of liveness probe uses a TCP Socket.

With this configuration, the kubelet will attempt to open a socket to your container on the specified port. If it can establish a connection, the container is considered healthy; if it can't, it is considered a failure.

apiVersion: v1
kind: Pod
metadata:
  name: goproxy
  labels:
    app: goproxy
spec:
  containers:
  - name: goproxy
    image: k8s.gcr.io/goproxy:0.1
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20

As you can see, configuration for a TCP check is quite similar to an HTTP check.

This example uses both readiness and liveness probes.

The kubelet will send the first readiness probe 5 seconds after the container starts.

This will attempt to connect to the goproxy container on port 8080.

If the probe succeeds, the pod will be marked as ready.

The kubelet will continue to run this check every 10 seconds.

In addition to the readiness probe, this configuration includes a liveness probe.

The kubelet will run the first liveness probe 15 seconds after the container starts.

Just like the readiness probe, this will attempt to connect to the goproxy container on port 8080. If the liveness probe fails, the container will be restarted.

#To try the TCP liveness check, create a Pod:
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/tcp-liveness-readiness.yaml

#After 15 seconds, view Pod events to verify the liveness probes:
kubectl describe pod goproxy

  

Use a named port

You can use a named ContainerPort for HTTP or TCP liveness checks:

ports:
- name: liveness-port
  containerPort: 8080
  hostPort: 8080

livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port

  

Define readiness probes

Sometimes, applications are temporarily unable to serve traffic.

For example, an application might need to load large data or configuration files during startup.

In such cases, you don’t want to kill the application, but you don’t want to send it requests either.

Kubernetes provides readiness probes to detect and mitigate these situations.

A pod with containers reporting that they are not ready does not receive traffic through Kubernetes Services.

Readiness probes are configured similarly to liveness probes.

The only difference is that you use the readinessProbe field instead of the livenessProbe field.

readinessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5

  

Configuration for HTTP and TCP readiness probes also remains identical to liveness probes.
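For example, an HTTP readiness probe is just the httpGet form under the readinessProbe key — a sketch, where the /ready path is hypothetical:

readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5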

Readiness and liveness probes can be used in parallel for the same container.

Using both can ensure that traffic does not reach a container that is not ready for it, and that containers are restarted when they fail.

Configure Probes

Probes have a number of fields that you can use to more precisely control the behavior of liveness and readiness checks:

  • initialDelaySeconds: Number of seconds after the container has started before liveness or readiness probes are initiated.
  • periodSeconds: How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
  • timeoutSeconds: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.
  • successThreshold: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1.
  • failureThreshold: When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up. Giving up in case of liveness probe means restarting the Pod. In case of readiness probe the Pod will be marked Unready. Defaults to 3. Minimum value is 1.
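Put together, these fields sit alongside the probe handler — a hedged example with arbitrary values:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 2
  successThreshold: 1
  failureThreshold: 3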

HTTP probes have additional fields that can be set on httpGet:

  • host: Host name to connect to, defaults to the pod IP. You probably want to set “Host” in httpHeaders instead.
  • scheme: Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP.
  • path: Path to access on the HTTP server.
  • httpHeaders: Custom headers to set in the request. HTTP allows repeated headers.
  • port: Name or number of the port to access on the container. Number must be in the range 1 to 65535.

For an HTTP probe, the kubelet sends an HTTP request to the specified path and port to perform the check.

The kubelet sends the probe to the pod’s IP address, unless the address is overridden by the optional host field in httpGet.

If the scheme field is set to HTTPS, the kubelet sends an HTTPS request, skipping certificate verification.

In most scenarios, you do not want to set the host field.

Here’s one scenario where you would set it. Suppose the Container listens on 127.0.0.1 and the Pod’s hostNetwork field is true. Then host, under httpGet, should be set to 127.0.0.1.

If your pod relies on virtual hosts, which is probably the more common case, you should not use host, but rather set the Host header in httpHeaders.
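A sketch of that virtual-host case (myapp.example.com is a placeholder):

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
    - name: Host
      value: myapp.example.com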
