Kubernetes Learning Notes (4): Binary Deployment of the Node
K8S Node Deployment
1. Deploying kubelet
(1) Prepare the binary packages
- [root@linux-node1 ~]# cd /usr/local/src/kubernetes/server/bin/
- [root@linux-node1 bin]# cp kubelet kube-proxy /opt/kubernetes/bin/
- [root@linux-node1 bin]# scp kubelet kube-proxy 192.168.56.120:/opt/kubernetes/bin/
- [root@linux-node1 bin]# scp kubelet kube-proxy 192.168.56.130:/opt/kubernetes/bin/
(2) Create the role binding
When kubelet starts, it sends a TLS bootstrap request to kube-apiserver, so the bootstrap token must be bound to the system:node-bootstrapper role; otherwise the kubelet has no permission to create that request.
- [root@linux-node1 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
- clusterrolebinding "kubelet-bootstrap" created
(3) Create the kubelet bootstrapping kubeconfig file and set the cluster parameters
- [root@linux-node1 ~]# cd /usr/local/src/ssl
[root@linux-node1 ssl]# kubectl config set-cluster kubernetes \
- --certificate-authority=/opt/kubernetes/ssl/ca.pem \
- --embed-certs=true \
- --server=https://192.168.56.110:6443 \
- --kubeconfig=bootstrap.kubeconfig
- Cluster "kubernetes" set.
(4) Set the client authentication parameters
- [root@linux-node1 ssl]# kubectl config set-credentials kubelet-bootstrap \
- --token=ad6d5bb607a186796d8861557df0d17f \
- --kubeconfig=bootstrap.kubeconfig
- User "kubelet-bootstrap" set.
(5) Set the context parameters
- [root@linux-node1 ssl]# kubectl config set-context default \
- --cluster=kubernetes \
- --user=kubelet-bootstrap \
- --kubeconfig=bootstrap.kubeconfig
- Context "default" created.
(6) Select the default context
- [root@linux-node1 ssl]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
- Switched to context "default".
- [root@linux-node1 ssl]# cp bootstrap.kubeconfig /opt/kubernetes/cfg
- [root@linux-node1 ssl]# scp bootstrap.kubeconfig 192.168.56.120:/opt/kubernetes/cfg
- [root@linux-node1 ssl]# scp bootstrap.kubeconfig 192.168.56.130:/opt/kubernetes/cfg
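The token used in the set-credentials step above (ad6d5bb607a186796d8861557df0d17f) must match the bootstrap token placed in kube-apiserver's token.csv during the master deployment. As an aside, a token of that format can be produced like this (illustrative only; do not mint a new one here unless you also update token.csv on the master):

```shell
# Generate a 32-hex-character token of the same format as the bootstrap
# token above. This only illustrates the format; the real token must match
# the entry already present in kube-apiserver's token.csv.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -dc 'a-f0-9')
echo "$TOKEN"
```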
2. Deploying kubelet on the Node machines
(1) Configure CNI support
- [root@linux-node2 ~]# mkdir -p /etc/cni/net.d
- [root@linux-node2 ~]# vim /etc/cni/net.d/-default.conf
- {
- "name": "flannel",
- "type": "flannel",
- "delegate": {
- "bridge": "docker0",
- "isDefaultGateway": true,
- "mtu":
- }
- }
- [root@linux-node3 ~]# mkdir -p /etc/cni/net.d
- [root@linux-node2 ~]# scp /etc/cni/net.d/-default.conf 192.168.56.130:/etc/cni/net.d/-default.conf
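For reference, here is a complete version of the same delegate config with an explicit mtu value (1500 is an assumption, not from the original deployment; use the MTU of your own network), followed by a quick syntax check. CNI config files must be valid JSON or kubelet will fail to set up pod networking:

```shell
# Write a complete flannel CNI delegate config (mtu 1500 is an assumed
# value) and verify that it parses as JSON.
cat > /tmp/cni-default.conf <<'EOF'
{
  "name": "flannel",
  "type": "flannel",
  "delegate": {
    "bridge": "docker0",
    "isDefaultGateway": true,
    "mtu": 1500
  }
}
EOF
python3 -m json.tool < /tmp/cni-default.conf > /dev/null && echo "valid JSON"
```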
(2) Create the kubelet data directory
- [root@linux-node2 ~]# mkdir /var/lib/kubelet
- [root@linux-node3 ~]# mkdir /var/lib/kubelet
(3) Create the kubelet service configuration
- [root@linux-node2 ~]# vim /usr/lib/systemd/system/kubelet.service
- [Unit]
- Description=Kubernetes Kubelet
- Documentation=https://github.com/GoogleCloudPlatform/kubernetes
- After=docker.service
- Requires=docker.service
- [Service]
- WorkingDirectory=/var/lib/kubelet
- ExecStart=/opt/kubernetes/bin/kubelet \
- --address=192.168.56.120 \
- --hostname-override=192.168.56.120 \
- --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
- --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
- --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
- --cert-dir=/opt/kubernetes/ssl \
- --network-plugin=cni \
- --cni-conf-dir=/etc/cni/net.d \
- --cni-bin-dir=/opt/kubernetes/bin/cni \
- --cluster-dns=10.1.0.2 \
- --cluster-domain=cluster.local. \
- --hairpin-mode hairpin-veth \
- --allow-privileged=true \
- --fail-swap-on=false \
- --logtostderr=true \
- --v= \
- --logtostderr=false \
- --log-dir=/opt/kubernetes/log
- Restart=on-failure
- RestartSec=
- [root@linux-node3 ~]# vim /usr/lib/systemd/system/kubelet.service
- [Unit]
- Description=Kubernetes Kubelet
- Documentation=https://github.com/GoogleCloudPlatform/kubernetes
- After=docker.service
- Requires=docker.service
- [Service]
- WorkingDirectory=/var/lib/kubelet
- ExecStart=/opt/kubernetes/bin/kubelet \
- --address=192.168.56.130 \
- --hostname-override=192.168.56.130 \
- --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
- --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
- --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
- --cert-dir=/opt/kubernetes/ssl \
- --network-plugin=cni \
- --cni-conf-dir=/etc/cni/net.d \
- --cni-bin-dir=/opt/kubernetes/bin/cni \
- --cluster-dns=10.1.0.2 \
- --cluster-domain=cluster.local. \
- --hairpin-mode hairpin-veth \
- --allow-privileged=true \
- --fail-swap-on=false \
- --logtostderr=true \
- --v= \
- --logtostderr=false \
- --log-dir=/opt/kubernetes/log
- Restart=on-failure
- RestartSec=
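The two unit files above differ only in the node IP (192.168.56.120 vs. 192.168.56.130). When maintaining several nodes, it can help to render them from one template instead of editing copies by hand. A minimal sketch, assuming a NODE_IP placeholder (the function name and placeholder are illustrative, not part of the original deployment):

```shell
# Render the node-specific part of kubelet.service from a template by
# substituting a NODE_IP placeholder with the real node address.
render_kubelet_flags() {
    node_ip=$1
    sed "s/NODE_IP/${node_ip}/g" <<'EOF'
ExecStart=/opt/kubernetes/bin/kubelet \
  --address=NODE_IP \
  --hostname-override=NODE_IP \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig
EOF
}
render_kubelet_flags 192.168.56.130
```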
(4) Start kubelet
- [root@linux-node2 ~]# systemctl daemon-reload
- [root@linux-node2 ~]# systemctl enable kubelet
- [root@linux-node2 ~]# systemctl start kubelet
- [root@linux-node2 kubernetes]# systemctl status kubelet
- [root@linux-node3 ~]# systemctl daemon-reload
- [root@linux-node3 ~]# systemctl enable kubelet
- [root@linux-node3 ~]# systemctl start kubelet
- [root@linux-node3 kubernetes]# systemctl status kubelet
Checking the kubelet status reveals the following error: Failed to get system container stats for "/system.slice/kubelet.service": failed to... The kubelet startup arguments need to be adjusted.
Fix: in the [Service] section of /usr/lib/systemd/system/kubelet.service, add:
Environment="KUBELET_MY_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
then modify ExecStart by appending $KUBELET_MY_ARGS at the end of the line.
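Putting that fix together, the [Service] section ends up looking like this (a sketch showing only the changed lines; every other flag stays exactly as in the unit files above):

```ini
[Service]
WorkingDirectory=/var/lib/kubelet
Environment="KUBELET_MY_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
ExecStart=/opt/kubernetes/bin/kubelet \
  ...all existing flags unchanged... \
  $KUBELET_MY_ARGS
```

Run systemctl daemon-reload followed by systemctl restart kubelet afterwards for the change to take effect.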
- [root@linux-node2 system]# systemctl status kubelet
- ● kubelet.service - Kubernetes Kubelet
- Loaded: loaded (/usr/lib/systemd/system/kubelet.service; static; vendor preset: disabled)
- Active: active (running); 16h ago
- Docs: https://github.com/GoogleCloudPlatform/kubernetes
- Main PID: (kubelet)
- CGroup: /system.slice/kubelet.service
- └─ /opt/kubernetes/bin/kubelet --address=192.168.56.120 --hostname-override=192.168.56.120 --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 --experiment...
- Jun 01 linux-node2.example.com kubelet[]: E0601 summary.go: Failed to get system container stats for "/system.slice/kubelet.service": failed to...
- (the same error line repeats about every 10 seconds)
- Hint: Some lines were ellipsized, use -l to show in full.
(5) View the CSR requests (note: run this on linux-node1)
- [root@linux-node1 ssl]# kubectl get csr
- NAME AGE REQUESTOR CONDITION
- node-csr-6Wc7kmqBIaPOw83l2F1uCKN-uUaxfkVhIU8K93S5y1U 1m kubelet-bootstrap Pending
- node-csr-fIXcxO7jyR1Au7nrpUXht19eXHnX1HdFl99-oq2sRsA 1m kubelet-bootstrap Pending
(6) Approve the kubelet TLS certificate requests
- [root@linux-node1 ssl]# kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve
- certificatesigningrequest.certificates.k8s.io "node-csr-6Wc7kmqBIaPOw83l2F1uCKN-uUaxfkVhIU8K93S5y1U" approved
- certificatesigningrequest.certificates.k8s.io "node-csr-fIXcxO7jyR1Au7nrpUXht19eXHnX1HdFl99-oq2sRsA" approved
- [root@linux-node1 ssl]# kubectl get csr
- NAME AGE REQUESTOR CONDITION
- node-csr-6Wc7kmqBIaPOw83l2F1uCKN-uUaxfkVhIU8K93S5y1U 2m kubelet-bootstrap Approved,Issued
- node-csr-fIXcxO7jyR1Au7nrpUXht19eXHnX1HdFl99-oq2sRsA 2m kubelet-bootstrap Approved,Issued
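The approval one-liner above works because grep keeps only the Pending rows and awk prints their first column (the CSR name) before handing the names to kubectl certificate approve. The text-processing stage can be checked on its own against sample output (the CSR names below are made up):

```shell
# Simulate 'kubectl get csr' output and extract the names of Pending
# requests, exactly as the grep|awk stage of the approval pipeline does.
sample='NAME          AGE  REQUESTOR          CONDITION
node-csr-aaa  1m   kubelet-bootstrap  Pending
node-csr-bbb  2m   kubelet-bootstrap  Approved,Issued'
echo "$sample" | grep 'Pending' | awk 'NR>0{print $1}'
# → node-csr-aaa
```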
- After approval completes, the node status shows Ready:
- [root@linux-node1 ssl]# kubectl get node
- NAME STATUS ROLES AGE VERSION
- 192.168.56.120 Ready <none> 50m v1.10.1
- 192.168.56.130 Ready <none> 46m v1.10.1
3. Deploying Kubernetes Proxy
(1) Configure kube-proxy to use LVS (install the IPVS tooling)
- [root@linux-node2 ~]# yum install -y ipvsadm ipset conntrack
- [root@linux-node3 ~]# yum install -y ipvsadm ipset conntrack
(2) Create the kube-proxy certificate signing request
- [root@linux-node1 ~]# cd /usr/local/src/ssl/
- [root@linux-node1 ssl]# vim kube-proxy-csr.json
- {
- "CN": "system:kube-proxy",
- "hosts": [],
- "key": {
- "algo": "rsa",
- "size":
- },
- "names": [
- {
- "C": "CN",
- "ST": "BeiJing",
- "L": "BeiJing",
- "O": "k8s",
- "OU": "System"
- }
- ]
- }
(3) Generate the certificate
- [root@linux-node1 ~]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
- -ca-key=/opt/kubernetes/ssl/ca-key.pem \
- -config=/opt/kubernetes/ssl/ca-config.json \
- -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
(4) Distribute the certificates to all Node machines
- [root@linux-node1 ssl]# cp kube-proxy*.pem /opt/kubernetes/ssl/
- [root@linux-node1 ssl]# scp kube-proxy*.pem 192.168.56.120:/opt/kubernetes/ssl/
- [root@linux-node1 ssl]# scp kube-proxy*.pem 192.168.56.130:/opt/kubernetes/ssl/
(5) Create the kube-proxy configuration file
- [root@linux-node1 ssl]# kubectl config set-cluster kubernetes \
- --certificate-authority=/opt/kubernetes/ssl/ca.pem \
- --embed-certs=true \
- --server=https://192.168.56.110:6443 \
- --kubeconfig=kube-proxy.kubeconfig
- Cluster "kubernetes" set.
- [root@linux-node1 ssl]# kubectl config set-credentials kube-proxy \
- --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
- --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
- --embed-certs=true \
- --kubeconfig=kube-proxy.kubeconfig
- User "kube-proxy" set.
- [root@linux-node1 ssl]# kubectl config set-context default \
- --cluster=kubernetes \
- --user=kube-proxy \
- --kubeconfig=kube-proxy.kubeconfig
- Context "default" created.
- [root@linux-node1 ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
- Switched to context "default".
(6) Distribute the kubeconfig file
- [root@linux-node1 ssl]# cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
- [root@linux-node1 ssl]# scp kube-proxy.kubeconfig 192.168.56.120:/opt/kubernetes/cfg/
- [root@linux-node1 ssl]# scp kube-proxy.kubeconfig 192.168.56.130:/opt/kubernetes/cfg/
(7) Create the kube-proxy service configuration
- [root@linux-node1 ssl]# mkdir /var/lib/kube-proxy
- [root@linux-node2 ssl]# mkdir /var/lib/kube-proxy
- [root@linux-node3 ssl]# mkdir /var/lib/kube-proxy
- [root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-proxy.service
- [Unit]
- Description=Kubernetes Kube-Proxy Server
- Documentation=https://github.com/GoogleCloudPlatform/kubernetes
- After=network.target
- [Service]
- WorkingDirectory=/var/lib/kube-proxy
- ExecStart=/opt/kubernetes/bin/kube-proxy \
- --bind-address=192.168.56.120 \
- --hostname-override=192.168.56.120 \
- --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
- --masquerade-all \
- --feature-gates=SupportIPVSProxyMode=true \
- --proxy-mode=ipvs \
- --ipvs-min-sync-period=5s \
- --ipvs-sync-period=5s \
- --ipvs-scheduler=rr \
- --logtostderr=true \
- --v= \
- --logtostderr=false \
- --log-dir=/opt/kubernetes/log
- Restart=on-failure
- RestartSec=
- LimitNOFILE=
- [Install]
- WantedBy=multi-user.target
- [root@linux-node1 ssl]# scp /usr/lib/systemd/system/kube-proxy.service 192.168.56.120:/usr/lib/systemd/system/kube-proxy.service
- [root@linux-node1 ssl]# scp /usr/lib/systemd/system/kube-proxy.service 192.168.56.130:/usr/lib/systemd/system/kube-proxy.service
- Note that the unit file above hard-codes --bind-address and --hostname-override to 192.168.56.120; on 192.168.56.130, edit those two flags to that node's own IP before starting the service.
(8) Start Kubernetes Proxy
- [root@linux-node2 ~]# systemctl daemon-reload
- [root@linux-node2 ~]# systemctl enable kube-proxy
- [root@linux-node2 ~]# systemctl start kube-proxy
- [root@linux-node2 ~]# systemctl status kube-proxy
- [root@linux-node3 ~]# systemctl daemon-reload
- [root@linux-node3 ~]# systemctl enable kube-proxy
- [root@linux-node3 ~]# systemctl start kube-proxy
- [root@linux-node3 ~]# systemctl status kube-proxy
- Check the LVS status. An LVS virtual server has been created that forwards requests for 10.1.0.1:443 to 192.168.56.110:6443, and 6443 is the kube-apiserver port.
- [root@linux-node2 ~]# ipvsadm -Ln
- IP Virtual Server version 1.2. (size=)
- Prot LocalAddress:Port Scheduler Flags
- -> RemoteAddress:Port Forward Weight ActiveConn InActConn
- TCP 10.1.0.1:443 rr persistent
- -> 192.168.56.110:6443 Masq
- [root@linux-node3 ~]# ipvsadm -Ln
- IP Virtual Server version 1.2. (size=)
- Prot LocalAddress:Port Scheduler Flags
- -> RemoteAddress:Port Forward Weight ActiveConn InActConn
- TCP 10.1.0.1:443 rr persistent
- -> 192.168.56.110:6443 Masq
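To spot-check that mapping without reading the full table, the ipvsadm listing can be parsed mechanically. A sketch using sample output with the ports named in the text above (443 for the Service VIP, 6443 for kube-apiserver):

```shell
# Parse 'ipvsadm -Ln'-style output: virtual services start with the
# protocol in column 1, real servers are indented lines beginning with '->'.
sample='TCP  10.1.0.1:443 rr persistent 10800
  -> 192.168.56.110:6443          Masq    1      0          0'
echo "$sample" | awk '/^(TCP|UDP)/{vip=$2} /^ *->/{print vip " -> " $2}'
# → 10.1.0.1:443 -> 192.168.56.110:6443
```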
- If you installed the kubelet and kube-proxy services on both lab machines, check the result with:
- [root@linux-node1 ssl]# kubectl get node
- NAME STATUS ROLES AGE VERSION
- 192.168.56.120 Ready <none> 22m v1.10.1
- 192.168.56.130 Ready <none> 3m v1.10.1
With this, the K8S cluster deployment is complete. K8S itself does not provide a pod network, so a third-party network plugin is required before Pods can be created; the next part of this series covers Flannel, which provides that network for K8S.
(9) Troubleshooting: kubelet fails to start, and kubectl get node reports "No resources found"
- [root@linux-node1 ssl]# kubectl get node
- No resources found.
- [root@linux-node3 ~]# systemctl status kubelet
- ● kubelet.service - Kubernetes Kubelet
- Loaded: loaded (/usr/lib/systemd/system/kubelet.service; static; vendor preset: disabled)
- Active: activating (auto-restart) (Result: exit-code); 1s ago
- Docs: https://github.com/GoogleCloudPlatform/kubernetes
- Process: ExecStart=/opt/kubernetes/bin/kubelet --address=192.168.56.130 --hostname-override=192.168.56.130 --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --cert-dir=/opt/kubernetes/ssl --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/kubernetes/bin/cni --cluster-dns=10.1.0.2 --cluster-domain=cluster.local. --hairpin-mode hairpin-veth --allow-privileged=true --fail-swap-on=false --logtostderr=true --v= --logtostderr=false --log-dir=/opt/kubernetes/log (code=exited, status=)
- Main PID: (code=exited, status=)
- May linux-node3.example.com systemd[]: Unit kubelet.service entered failed state.
- May linux-node3.example.com systemd[]: kubelet.service failed.
- [root@linux-node3 ~]# tailf /var/log/messages
- ......
- May linux-node3 kubelet: F0530 server.go: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
The message says that the cgroup driver kubelet uses ("cgroupfs") differs from the one docker uses ("systemd"). Inspect docker.service:
- [Unit]
- Description=Docker Application Container Engine
- Documentation=http://docs.docker.com
- After=network.target
- Wants=docker-storage-setup.service
- Requires=docker-cleanup.timer
- [Service]
- Type=notify
- NotifyAccess=all
- KillMode=process
- EnvironmentFile=-/etc/sysconfig/docker
- EnvironmentFile=-/etc/sysconfig/docker-storage
- EnvironmentFile=-/etc/sysconfig/docker-network
- Environment=GOTRACEBACK=crash
- Environment=DOCKER_HTTP_HOST_COMPAT=
- Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
- ExecStart=/usr/bin/dockerd-current \
- --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
- --default-runtime=docker-runc \
- --exec-opt native.cgroupdriver=systemd \   ### change "systemd" here to "cgroupfs"
- --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
- $OPTIONS \
- $DOCKER_STORAGE_OPTIONS \
- $DOCKER_NETWORK_OPTIONS \
- $ADD_REGISTRY \
- $BLOCK_REGISTRY \
- $INSECURE_REGISTRY
- ExecReload=/bin/kill -s HUP $MAINPID
- LimitNOFILE=
- LimitNPROC=
- LimitCORE=infinity
- TimeoutStartSec=
- Restart=on-abnormal
- MountFlags=slave
- [Install]
- WantedBy=multi-user.target
- [root@linux-node3 ~]# systemctl daemon-reload
- [root@linux-node3 ~]# systemctl restart docker.service
- [root@linux-node3 ~]# systemctl restart kubelet
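After restarting docker and kubelet, the two cgroup drivers should agree. On a live node, docker's side can be read with docker info -f '{{.CgroupDriver}}'; the comparison itself is trivial and is sketched here as a standalone function (the function name is illustrative, not a real tool):

```shell
# Compare the cgroup driver docker reports with the one kubelet uses and
# report whether they match; a mismatch is exactly the failure seen above.
check_driver_match() {
    docker_driver=$1
    kubelet_driver=$2
    if [ "$docker_driver" = "$kubelet_driver" ]; then
        echo "match"
    else
        echo "mismatch: docker=$docker_driver kubelet=$kubelet_driver"
    fi
}
check_driver_match cgroupfs cgroupfs
# → match
```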