Preparation:

Time synchronization
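One way to do it, assuming ntpdate and Aliyun's public NTP server (both the tool and the server are my choices here):

yum -y install ntpdate
ntpdate ntp.aliyun.com    # one-shot sync; run on every node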

Stop the firewall:
systemctl stop iptables.service
systemctl stop firewalld.service
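systemctl stop only lasts until the next reboot; assuming the firewall should stay off on these machines, disable it as well:

systemctl disable firewalld.service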

Install Docker

wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo    # save it where yum looks for repo files
yum -y install docker-ce-18.09*

systemctl start docker
systemctl enable docker

vim /etc/docker/daemon.json

{
    "registry-mirrors": ["https://3s01e0d2.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    },
    "storage-driver": "overlay2",
    "storage-opts": [
        "overlay2.override_kernel_check=true"
    ]
}

systemctl daemon-reload
systemctl restart docker

1. Configure the Kubernetes yum repository.
Aliyun mirror index: https://opsx.alibaba.com/mirror

Repository definition:
[kubernetes]
name=kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
rpm --import rpm-package-key.gpg
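The [kubernetes] stanza above has to be saved under /etc/yum.repos.d/ for yum to pick it up; a sketch (the file name kubernetes.repo is my choice):

cat > /etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF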

Master node
2. Install the packages:
yum install kubelet-1.14.3 kubeadm-1.14.3 kubectl-1.14.3 docker-ce-18.09*

echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
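Those echo settings are lost on reboot; a sysctl drop-in makes them persistent (the file name k8s.conf is my choice):

cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system    # reload all sysctl configuration files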

kubeadm initialization

Ignore swap: tell the kubelet not to fail when swap is enabled.
vim /etc/sysconfig/kubelet
[root@heaven00 ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
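The kubelet should also be enabled so it comes up across reboots (it will crash-loop until kubeadm init runs, which is expected):

systemctl enable kubelet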

Initialization command:

kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.14.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
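The required images can be pre-pulled before running init, which makes pull failures easier to diagnose (kubeadm's own init log suggests this; see further down):

kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.14.3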

Alternative image registries (e.g. quay.io, or the mirrorgooglecontainers organization on Docker Hub) carry the same images. Example pulls for a v1.12.2 cluster; note that coredns is published under its own organization rather than mirrorgooglecontainers:

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.12.2
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.12.2
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.12.2
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.12.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
docker pull coredns/coredns:1.2.2

On success, kubeadm init ends with output like the following:

[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.250.0.89:6443 --token aji2ef.f103v7o45h7hjeld \
--discovery-token-ca-cert-hash sha256:fd01c8ced3c470d3d1bf35350c8cb8bbba82fa808f14573654c9ffaf4e29fca6

Check the status:
[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   18m   v1.14.3
[root@k8s-master ~]#

Install the flannel network add-on:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
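flannel takes a minute or two to roll out; watching kube-system until every pod is Running confirms it:

kubectl get pods -n kube-system -w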

[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-8686dcc4fd-27wg7             1/1     Running   0          26m
coredns-8686dcc4fd-ks8hc             1/1     Running   0          26m
etcd-k8s-master                      1/1     Running   0          25m
kube-apiserver-k8s-master            1/1     Running   0          25m
kube-controller-manager-k8s-master   1/1     Running   0          26m
kube-flannel-ds-amd64-kgfwd          1/1     Running   0          2m15s
kube-proxy-js82w                     1/1     Running   0          26m
kube-scheduler-k8s-master            1/1     Running   0          25m
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   27m   v1.14.3

===================================================================================
Worker node:

yum install kubelet-1.14.3 kubeadm-1.14.3 kubectl-1.14.3 docker-ce-18.09*
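Apply the same prerequisites as on the master (daemon.json, the bridge sysctls, the kubelet swap flag), then run the join command that kubeadm init printed. A sketch using the values from the init output above:

kubeadm join 10.250.0.89:6443 --token aji2ef.f103v7o45h7hjeld \
    --discovery-token-ca-cert-hash sha256:fd01c8ced3c470d3d1bf35350c8cb8bbba82fa808f14573654c9ffaf4e29fca6 \
    --ignore-preflight-errors=Swap

Back on the master, the node should appear shortly: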

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   73m     v1.14.3
k8s-node1    Ready    <none>   2m25s   v1.14.3

==================================================
Pulling images during deployment is very slow from inside China.

Someone wrote a tool for this; personally tested, it works well:
https://github.com/xuxinkun/littleTools#azk8spull
https://www.cnblogs.com/xuxinkun/p/11025020.html

=================================
Error 1:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

Fix:
Edit /etc/docker/daemon.json:

{
    "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart Docker:

systemctl daemon-reload
systemctl restart docker
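Confirm the driver actually changed:

docker info | grep -i 'cgroup driver'    # should print: Cgroup Driver: systemd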

Error 2: when k8s.gcr.io is unreachable, kubeadm init reports errors like the following:

[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.14.3: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: i/o timeout
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.14.3: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: i/o timeout
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.14.3: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: i/o timeout
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.14.3: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: i/o timeout
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: i/o timeout
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout
, error: exit status 1

Fix:
Pull the images from another registry first, then retag them with the k8s.gcr.io names kubeadm expects.

Pull the images:
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.14.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.14.3
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.14.3
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1

Retag them:
docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.14.3 k8s.gcr.io/kube-apiserver:v1.14.3
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.3 k8s.gcr.io/kube-controller-manager:v1.14.3
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.14.3 k8s.gcr.io/kube-scheduler:v1.14.3
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.14.3 k8s.gcr.io/kube-proxy:v1.14.3
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
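The whole pull-and-retag sequence can be scripted; a minimal sketch over the same name pairs:

while read src dst; do
    docker pull "$src" && docker tag "$src" "$dst" && docker rmi "$src"
done <<'EOF'
mirrorgooglecontainers/kube-apiserver-amd64:v1.14.3 k8s.gcr.io/kube-apiserver:v1.14.3
mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.3 k8s.gcr.io/kube-controller-manager:v1.14.3
mirrorgooglecontainers/kube-scheduler-amd64:v1.14.3 k8s.gcr.io/kube-scheduler:v1.14.3
mirrorgooglecontainers/kube-proxy-amd64:v1.14.3 k8s.gcr.io/kube-proxy:v1.14.3
mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
EOF

The docker rmi at the end just removes the now-redundant source tag; drop it to keep both names.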

=====================================
Problem 3: re-running kubeadm init on a machine with leftovers from a previous attempt:
[root@heaven00 lib]# kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.14.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.14.3
[preflight] Running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@heaven00 lib]# rm -rf /etc/kubernetes /var/lib/etcd -rf
[root@heaven00 lib]# kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.14.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.14.3
[preflight] Running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@heaven00 lib]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@heaven00 lib]# netstat -tulnp | grep 10250
tcp6 0 0 :::10250 :::* LISTEN 17007/kubelet
[root@heaven00 lib]# systemctl stop kubelet
[root@heaven00 lib]# netstat -tulnp | grep 10250
[root@heaven00 lib]# kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.14.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.14.3
[preflight] Running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [heaven00 localhost] and IPs [10.139.165.32 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [heaven00 localhost] and IPs [10.139.165.32 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [heaven00 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.139.165.32]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.502305 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-check] Initial timeout of 40s passed.
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition

From the system message log (/var/log/messages):

Jun 20 12:04:33 heaven00 kubelet: E0620 12:04:33.878745 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:33 heaven00 kubelet: E0620 12:04:33.978932 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.035393 18155 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.079094 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.179282 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.235901 18155 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Unauthorized
Jun 20 12:04:34 heaven00 kubelet: W0620 12:04:34.274044 18155 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.279465 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.379615 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.435968 18155 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Unauthorized
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.479820 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.580004 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.601351 18155 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.636902 18155 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Unauthorized
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.680184 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.780347 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.836914 18155 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.880525 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.980697 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:35 heaven00 kubelet: E0620 12:04:35.036733 18155 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized
Jun 20 12:04:35 heaven00 kubelet: E0620 12:04:35.080854 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:35 heaven00 kubelet: E0620 12:04:35.181033 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:35 heaven00 kubelet: E0620 12:04:35.237132 18155 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Unauthorized
Jun 20 12:04:35 heaven00 kubelet: E0620 12:04:35.281221 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:37 heaven00 telegraf: 2019-06-20T04:04:37Z E! [inputs.ping]: Error in plugin: host www.google.com: signal: killed
Jun 20 12:04:39 heaven00 systemd: Created slice libcontainer_25606_systemd_test_default.slice.
Jun 20 12:04:39 heaven00 systemd: Removed slice libcontainer_25606_systemd_test_default.slice.
Jun 20 12:04:39 heaven00 systemd: Created slice libcontainer_25606_systemd_test_default.slice.
Jun 20 12:04:39 heaven00 systemd: Removed slice libcontainer_25606_systemd_test_default.slice.
Jun 20 12:04:39 heaven00 systemd: Created slice libcontainer_25611_systemd_test_default.slice.
Jun 20 12:04:39 heaven00 systemd: Removed slice libcontainer_25611_systemd_test_default.slice.
Jun 20 12:04:39 heaven00 systemd: Created slice libcontainer_25611_systemd_test_default.slice.
Jun 20 12:04:39 heaven00 systemd: Removed slice libcontainer_25611_systemd_test_default.slice.
Jun 20 12:04:39 heaven00 systemd: Created slice libcontainer_25629_systemd_test_default.slice.

kubeadm's kubelet health check also failed repeatedly:
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Resolution:
Redeployed on a fresh machine; the problem did not recur there.
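If a fresh machine isn't available, kubeadm's own cleanup is worth trying first; kubeadm reset rolls back most of what a failed init left behind (static pod manifests, certificates, etcd data), which is what the manual rm -rf above was approximating:

kubeadm reset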

============================================================================================================
After kubeadm init succeeds, it prints the command for joining worker nodes to the master, like this:

kubeadm join 10.239.44.68:6443 --token 8jxvj4.5lop20zjbu48h6kl \
--discovery-token-ca-cert-hash sha256:1ca8f0a098601b94d7c2a9b4a3758ff0880a0213db813336dec0e9272ed55a78
Note: the token generated by kubeadm init is only valid for 24 hours. If kubeadm join on a node fails with an error like the following:

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Unauthorized
check on the master whether the token you are using is still valid:

kubeadm token list

TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                 EXTRA GROUPS
49y4v3.jxq5w76jj5hh028u   <invalid>   2019-04-13T15:00:47-04:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
8jxvj4.5lop20zjbu48h6kl   23h         2019-04-25T10:21:41-04:00   authentication,signing   <none>                                                      system:bootstrappers:kubeadm:default-node-token
Generate a token that never expires (and print the matching join command):

kubeadm token create --ttl 0 --print-join-command
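The --discovery-token-ca-cert-hash value can also be recomputed at any time from the cluster CA; this pipeline comes from the kubeadm documentation:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'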
============================================================================================================
