Deploying Kubernetes 1.11.2 on CentOS 7 with kubeadm
1. Installation methods
1) Traditional method: all of the components below run at the system level (installed from yum or rpm packages) as system-level daemons.
2) kubeadm method: the components on the master and the nodes all run as pod containers; Kubernetes itself runs as pods.
a) master/nodes
install kubelet, kubeadm and docker
b) master: kubeadm init (initializes the cluster)
c) nodes: kubeadm join
2. Planning
Host        IP
k8smaster   192.168.2.19
k8snode01   192.168.2.21
k8snode02   192.168.2.5
Versions:
docker: 18.06.1.ce-3.el7
OS: CentOS release 7
kubernetes: 1.11.2
kubeadm: 1.11.2
kubectl: 1.11.2
kubelet: 1.11.2
etcdctl: 3.2.15
3. Host setup
3.1. Disable swap
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
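To confirm that swap is really off before moving on (kubelet will refuse to start with active swap unless --fail-swap-on=false is set later), a quick sanity check can be run; this check is an addition to the original steps:
free -m           # the Swap line should report 0 total
cat /proc/swaps   # should list no active swap devices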
3.2. Initialize the system
Disable SELinux and firewalld
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
setenforce 0
systemctl stop firewalld
systemctl disable firewalld
Enable forwarding in the firewall
iptables -P FORWARD ACCEPT
Install required and commonly used tools
yum install -y epel-release vim vim-enhanced lrzsz unzip ntpdate sysstat dstat wget mlocate mtr lsof iotop bind-utils git net-tools
Other settings
# Time settings
timedatectl set-timezone Asia/Shanghai
ntpdate ntp1.aliyun.com
rm -f /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# Logging and shell history timestamps
systemctl restart rsyslog
echo 'export HISTTIMEFORMAT="%m/%d %T "' >> ~/.bashrc
source ~/.bashrc
# Resource limits (open files and processes)
cat >> /etc/security/limits.conf << EOF
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
EOF
echo "ulimit -SHn 65535" >> /etc/profile
echo "ulimit -SHn 65535" >> /etc/rc.local
ulimit -SHn 65535
sed -i 's/4096/10240/' /etc/security/limits.d/20-nproc.conf
modprobe ip_conntrack
modprobe br_netfilter
cat >> /etc/rc.d/rc.local <<EOF
modprobe ip_conntrack
modprobe br_netfilter
EOF
chmod +x /etc/rc.d/rc.local
# Kernel parameters
cat <<EOF >> /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
kernel.panic = 10
kernel.panic_on_oops = 1
kernel.pid_max = 4194303
vm.max_map_count = 262144
fs.aio-max-nr = 1048576
fs.file-max = 2097152
EOF
sysctl -p /etc/sysctl.conf
vim /etc/sysconfig/kubelet
# Cloud images usually have no swap partition; disable the kubelet swap check
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
# Set the kube-proxy mode to ipvs
KUBE_PROXY_MODE=ipvs
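Running kube-proxy in ipvs mode also requires the IPVS kernel modules (and ideally the ipvsadm/ipset tools) on every node. The module names below are the ones shipped with the stock CentOS 7 kernel; this is a suggested sketch rather than part of the original procedure, and the modprobe lines can be appended to /etc/rc.d/rc.local just like ip_conntrack and br_netfilter above:
yum install -y ipvsadm ipset
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
lsmod | grep -e ip_vs -e nf_conntrack_ipv4   # verify the modules are loaded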
4. Configure yum repositories on the master
cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
vim kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
rpm --import yum-key.gpg
rpm --import rpm-package-key.gpg
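Before installing anything, it may be worth confirming that both repositories resolve; this quick check is an addition, not part of the original article:
yum clean all
yum repolist | grep -E 'kubernetes|docker-ce'   # both repos should appear with a non-zero package count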
4.1. Copy to the nodes
This is a lab environment; in production, if distribution tools such as SaltStack or Ansible are available, use them to push these files instead.
1. Generate a key pair on the master and prepare the nodes to receive the public key (an ssh-copy-id alternative is sketched after step 2):
node01:
mkdir /root/.ssh -p
node02:
mkdir /root/.ssh -p
master:
ssh-keygen -t rsa
scp .ssh/id_rsa.pub node01:/root/.ssh/authorized_keys
scp .ssh/id_rsa.pub node02:/root/.ssh/authorized_keys
2. Distribute the repo files and GPG keys
scp /etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/docker-ce.repo node01:/etc/yum.repos.d/
scp /etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/docker-ce.repo node02:/etc/yum.repos.d/
scp /etc/yum.repos.d/yum-key.gpg /etc/yum.repos.d/rpm-package-key.gpg node01:/root
scp /etc/yum.repos.d/yum-key.gpg /etc/yum.repos.d/rpm-package-key.gpg node02:/root
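For the SSH key copy in step 1, ssh-copy-id is an alternative to scp'ing id_rsa.pub directly (scp overwrites any existing authorized_keys, ssh-copy-id appends and fixes permissions); an equivalent sketch, assuming root password login is still enabled on the nodes:
ssh-copy-id -i /root/.ssh/id_rsa.pub root@node01
ssh-copy-id -i /root/.ssh/id_rsa.pub root@node02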
4.2. Run on the nodes
rpm --import yum-key.gpg
rpm --import rpm-package-key.gpg
5. Install Kubernetes on the master
yum install docker-ce kubelet-1.11.2 kubeadm-1.11.2 kubectl-1.11.2 -y
5.1. Configure the image sources
After installation the cluster must be initialized, which requires pulling the control-plane images.
Since Google's registry (k8s.gcr.io) cannot be reached from mainland China, there are two workarounds:
1. Pull the images from another registry and retag them.
A. Run the following shell script.
#!/bin/bash
images=(kube-proxy-amd64:v1.11.2 kube-scheduler-amd64:v1.11.2 kube-controller-manager-amd64:v1.11.2 kube-apiserver-amd64:v1.11.2
etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.10 k8s-dns-kube-dns-amd64:1.14.10
k8s-dns-dnsmasq-nanny-amd64:1.14.10)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName                          # pull from the Aliyun mirror
    docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName k8s.gcr.io/$imageName     # retag to the name kubeadm expects
    docker rmi registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName                           # drop the mirror tag
done
docker tag da86e6ba6ca1 k8s.gcr.io/pause:3.1
B. Pull the mirrored images from Docker Hub (mirrorgooglecontainers / coredns)
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.11.2
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.2
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.11.2
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.18
docker pull coredns/coredns:1.1.3
docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.11.2 k8s.gcr.io/kube-proxy-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.11.2 k8s.gcr.io/kube-scheduler-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.11.2 k8s.gcr.io/kube-apiserver-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.2 k8s.gcr.io/kube-controller-manager-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
C. Pull from Alibaba Cloud (this example was written against v1.10.0 images; adjust the tags to v1.11.2 for this deployment)
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-apiserver-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-scheduler-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-controller-manager-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-proxy-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-kube-dns-amd64:1.14.8
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-sidecar-amd64:1.14.8
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/etcd-amd64:3.1.12
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/pause-amd64:3.1
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
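Whichever of options A, B or C is used, it is worth confirming afterwards that every required image now exists under the k8s.gcr.io name kubeadm expects; this sanity check is an addition, not part of the original article:
docker images | grep k8s.gcr.io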
2. Use a proxy server.
vim /usr/lib/systemd/system/docker.service
Add the following under the [Service] section:
Environment="HTTPS_PROXY=$PROXY_SERVER_IP:PORT"
Environment="NO_PROXY=127.0.0.0/8,192.168.0.0/16"
5.2. Configure a registry mirror (accelerator)
Docker Inc. and many domestic cloud providers offer registry mirror services, for example:
If images cannot be pulled through one mirror address, switch to another.
All major domestic cloud providers offer a Docker registry mirror; pick the one that matches the cloud platform the hosts run on.
Using the Aliyun accelerator or others: https://yeasy.gitbooks.io/docker_practice/content/install/mirror.html
To configure a mirror, edit the daemon configuration file /etc/docker/daemon.json:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://registry-mirror.qiniu.com","https://zsmigm0p.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
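To confirm that the mirror configuration was loaded (a small check added here), docker info lists the configured mirrors:
docker info | grep -A 2 "Registry Mirrors"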
5.3. Restart the services
systemctl daemon-reload
systemctl restart docker.service
Enable the services at boot
systemctl enable kubelet
systemctl enable docker
5.4. Initialize the master node
kubeadm init --kubernetes-version=v1.11.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=all
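As the preflight output below notes, the control-plane images can also be pulled ahead of time with 'kubeadm config images pull'; an optional pre-pull sketch (assuming the images are reachable, e.g. after the retagging in 5.1):
kubeadm config images list --kubernetes-version v1.11.2   # show which images kubeadm expects
kubeadm config images pull --kubernetes-version v1.11.2   # pull them before running kubeadm init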
Overview of the init output
[init] using Kubernetes version: v1.11.2
[preflight] running pre-flight checks
I0829 kernel_validator.go: Validating kernel version
I0829 kernel_validator.go: Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
############ Certificates ##############
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.19]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.2.19 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
############ Configuration files ##############
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 43.503085 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] using token: 0tjvur.56emvwc4k4ghwenz
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
# CoreDNS got its official name as the default DNS add-on in v1.11
# Evolution in earlier releases: SkyDNS ----> kube-dns ----> CoreDNS
[addons] Applied essential addon: kube-proxy
# Runs as an add-on, self-hosted on Kubernetes, and dynamically generates ipvs (or iptables) rules for Service resources; starting with 1.11 it defaults to IPVS and automatically falls back to iptables when IPVS is not supported.
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube                                        # create the .kube directory under the home directory
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config    # admin.conf holds the credentials kubectl uses to authenticate to the cluster
sudo chown $(id -u):$(id -g) $HOME/.kube/config             # fix owner and group
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node as root:
# Run the command below as root on any other node to join it to the cluster.
kubeadm join 192.168.2.19:6443 --token 0tjvur.56emvwc4k4ghwenz --discovery-token-ca-cert-hash sha256:f768ad7522e7bb5ccf9d0e0590c478b2a7161912949d1be3d5cd3f46ace7f4bf
Notes
# --discovery-token-ca-cert-hash: hash the node uses to verify the master's CA certificate during discovery
Follow the prompt and run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
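Once the kubeconfig is in place, kubectl should be able to reach the API server; a quick sanity check (added here as a suggestion):
kubectl cluster-info            # should print the API server at https://192.168.2.19:6443
kubectl get componentstatuses   # scheduler, controller-manager and etcd-0 should report Healthy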
Deploy the network add-on on the master
CNI (container network plugins): flannel (no network-policy support), calico, canal, kube-router
Note: with a proxy server this step deploys directly; without one, the nodes must pull the required images first.
nodes:
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.2
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.11.2 k8s.gcr.io/kube-proxy-amd64:v1.11.2
master:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
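After applying the manifest, the flannel DaemonSet and the CoreDNS pods should come up in the kube-system namespace; a simple way to watch for that (added here, not in the original text):
kubectl get pods -n kube-system -o wide   # kube-flannel-ds-* and coredns-* should reach Running
kubectl get nodes                         # the master should switch from NotReady to Ready once the CNI is up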
6. Install Kubernetes on the nodes
Run the following command on every node.
yum install docker-ce kubelet-1.11.2 kubeadm-1.11.2 kubectl-1.11.2 -y
6.1. Start docker and kubelet on the nodes
On the master:
scp /usr/lib/systemd/system/docker.service node01:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/docker.service node02:/usr/lib/systemd/system/docker.service
scp /etc/sysconfig/kubelet node01:/etc/sysconfig/
scp /etc/sysconfig/kubelet node02:/etc/sysconfig/
On the nodes:
systemctl start docker
systemctl enable docker kubelet
6.2. Join the cluster
The token comes from the last lines of the kubeadm init output on the master:
kubeadm join 192.168.2.19:6443 --token 0tjvur.56emvwc4k4ghwenz --discovery-token-ca-cert-hash sha256:f768ad7522e7bb5ccf9d0e0590c478b2a7161912949d1be3d5cd3f46ace7f4bf
########### If the token has expired ##############
[root@master ~]# kubeadm token create
0wnvl7.kqxkle57l4adfr53
[root@node02 yum.repos.d]# kubeadm join 192.168.2.19:6443 --token 0wnvl7.kqxkle57l4adfr53 --discovery-token-unsafe-skip-ca-verification
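Skipping CA verification with --discovery-token-unsafe-skip-ca-verification works, but the safer route is to recompute the CA certificate hash on the master and pass it to kubeadm join; a sketch based on the standard openssl pipeline:
kubeadm token list   # confirm the new token is still valid
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
# then on the node:
# kubeadm join 192.168.2.19:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<hash>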
7. Verification
On the master, the following command should now show the master and both nodes in Ready status.
# kubectl get nodes
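Since kube-proxy was configured for ipvs mode earlier, it can also be worth confirming which proxier it actually started with; a hedged check (the label and log wording below match the default kubeadm deployment but may vary):
kubectl get nodes -o wide
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i proxier   # expect a line mentioning the ipvs proxier
ipvsadm -Ln                                                           # if ipvsadm is installed, virtual service entries should be listed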