Setting Up a Kubernetes 1.10 Cluster with kubeadm
Note: make sure the required images are present on every node before you begin the installation; otherwise the components will fail to start, and they fail silently, without reporting any error. Two machines are used throughout:
$ cat /etc/hosts
192.168.11.1 master
192.168.11.2 node
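If the hostnames do not already match these entries, set them on each machine first; hostnamectl is assumed to be available, as on any systemd-based distribution:
$ hostnamectl set-hostname master   # on 192.168.11.1
$ hostnamectl set-hostname node     # on 192.168.11.2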
Disable the firewall:
$ systemctl stop firewalld
$ systemctl disable firewalld
Disable SELinux:
$ setenforce 0
Then edit /etc/selinux/config so that it contains:
SELINUX=disabled
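If you prefer to script that edit, a one-line sketch that rewrites the setting in place (it takes effect after a reboot):
$ sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config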
Create the file /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Run the following commands to apply the changes:
$ modprobe br_netfilter
$ sysctl -p /etc/sysctl.d/k8s.conf
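Note that modprobe only loads the module for the current boot. To have br_netfilter load automatically at startup, a minimal sketch using systemd's modules-load mechanism:
$ echo br_netfilter > /etc/modules-load.d/k8s.conf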
Images
If your nodes can reach gcr.io directly, you can skip this step. Otherwise, the required gcr.io images must be pulled onto the nodes in advance; this of course assumes Docker is already installed. On the master node, run the following commands:
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-apiserver-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-scheduler-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-controller-manager-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-proxy-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-kube-dns-amd64:1.14.8
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-sidecar-amd64:1.14.8
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/etcd-amd64:3.1.12
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/pause-amd64:3.1
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
You can save the above commands as a shell script and run it directly; a loop-based sketch follows. These images are needed on the master node, so be sure to pull them in advance.
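For example, a sketch of such a script; it mirrors the exact pulls and tags above, handling flannel separately since it retags to quay.io rather than k8s.gcr.io:
#!/bin/bash
# Pull each image from the Aliyun mirror and retag it with the name kubeadm expects.
REG=registry.cn-shenzhen.aliyuncs.com/cp_m
images=(
  kube-apiserver-amd64:v1.10.0
  kube-scheduler-amd64:v1.10.0
  kube-controller-manager-amd64:v1.10.0
  kube-proxy-amd64:v1.10.0
  k8s-dns-kube-dns-amd64:1.14.8
  k8s-dns-dnsmasq-nanny-amd64:1.14.8
  k8s-dns-sidecar-amd64:1.14.8
  etcd-amd64:3.1.12
  pause-amd64:3.1
)
for img in "${images[@]}"; do
  docker pull "$REG/$img"
  docker tag "$REG/$img" "k8s.gcr.io/$img"
done
# flannel is expected under quay.io, not k8s.gcr.io
docker pull "$REG/flannel:v0.10.0-amd64"
docker tag "$REG/flannel:v0.10.0-amd64" quay.io/coreos/flannel:v0.10.0-amd64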
On the other Node, run the following commands:
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-proxy-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/pause-amd64:3.1
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kubernetes-dashboard-amd64:v1.8.3
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/heapster-influxdb-amd64:v1.3.3
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/heapster-grafana-amd64:v4.4.3
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/heapster-amd64:v1.4.2
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/heapster-amd64:v1.4.2 k8s.gcr.io/heapster-amd64:v1.4.2
These images are needed on the worker nodes, so pull them onto each node before running kubeadm join.
Install kubeadm, kubelet, and kubectl
With Docker installed, the environment configured as described above, and the required images pulled (skip that step if your nodes have unrestricted internet access), we can now install kubeadm. Here we install it from a yum repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
The repository above also requires unrestricted internet access; if that is not available, use the Alibaba Cloud mirror instead:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
At the time of writing, the latest version in the Alibaba Cloud mirror is already 1.10, so it can be installed directly. Once the repository is configured, run the install command:
$ yum makecache fast && yum install -y kubelet kubeadm kubectl
Under normal circumstances, all three packages should install without issue.
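If the mirror has since moved past 1.10, you can pin the versions explicitly instead; a sketch, assuming the 1.10.0-0 package builds are still present in the repository:
$ yum install -y kubelet-1.10.0-0 kubeadm-1.10.0-0 kubectl-1.10.0-0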
Configure kubelet
After installation, kubelet still needs one adjustment: the unit file generated by the yum package sets the --cgroup-driver parameter to systemd, while Docker's cgroup driver is cgroupfs, and the two must match. You can check Docker's driver with docker info:
$ docker info |grep Cgroup
Cgroup Driver: cgroupfs
Edit the kubelet configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and change the KUBELET_CGROUP_ARGS parameter to cgroupfs:
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
There is also the matter of swap, mentioned previously in the article on manually building a highly available Kubernetes cluster: since 1.8, Kubernetes requires system swap to be disabled, and with the default configuration kubelet will not start otherwise. The kubelet flag --fail-swap-on=false lifts this restriction, so add one more line to the configuration file above (before ExecStart):
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
That said, it is best to simply turn swap off, which also improves kubelet performance; see the sketch after the reload command. Once the edits are done, reload the unit file:
$ systemctl daemon-reload
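To turn swap off instead, a sketch: swapoff handles the running system, and commenting out the swap entry in /etc/fstab makes the change survive reboots:
$ swapoff -a
$ sed -i '/ swap / s/^/#/' /etc/fstab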
Cluster Installation
Initialization
With the preparation complete, we can now initialize the cluster on the master node with kubeadm:
$ kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.11.1
The command itself is simply kubeadm init. The flags specify the cluster version to install, the Pod network CIDR (--pod-network-cidr=10.244.0.0/16, required because we chose flannel as the Pod network plugin), and the API server advertise address, which is our master node's IP. If running the command produces an error such as running with swap on is not supported. Please disable swap, add the flag --ignore-preflight-errors=Swap to ignore the swap check:
$ kubeadm init \
--kubernetes-version=v1.10.0 \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=192.168.11.1 \
--ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [ydzs-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.151.30.57]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [ydzs-master1] and IPs [10.151.30.57]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 22.007661 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node ydzs-master1 as master by adding a label and a taint
[markmaster] Master ydzs-master1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 8xomlq.0cdf2pbvjs2gjho3
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.11.1:6443 --token 8xomlq.0cdf2pbvjs2gjho3 --discovery-token-ca-cert-hash sha256:92802317cb393682c1d1356c15e8b4ec8af2b8e5143ffd04d8be4eafb5fae368
The output above records the whole process of kubeadm initializing the cluster: generating the various certificates, kubeconfig files, the bootstrap token, and so on. Near the end is the kubeadm join command used later to add nodes to the cluster, and these commands configure kubectl access to the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Finally, it prints the command for joining nodes to the cluster:
kubeadm join 192.168.11.1:6443 --token 8xomlq.0cdf2pbvjs2gjho3 --discovery-token-ca-cert-hash sha256:92802317cb393682c1d1356c15e8b4ec8af2b8e5143ffd04d8be4eafb5fae368
With kubectl configured as instructed above, we can use it to inspect the cluster:
$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
$ kubectl get csr
NAME                                                   AGE       REQUESTOR                 CONDITION
node-csr-8qygb8Hjxj-byhbRHawropk81LHNPqZCTePeWoZs3-g   1h        system:bootstrap:8xomlq   Approved,Issued
$ kubectl get nodes
NAME           STATUS    ROLES     AGE       VERSION
ydzs-master1   Ready     master    3h        v1.10.0
If you run into other problems during installation, you can reset and start over with the following commands:
$ kubeadm reset
$ ifconfig cni0 down && ip link delete cni0
$ ifconfig flannel.1 down && ip link delete flannel.1
$ rm -rf /var/lib/cni/
Install the Pod Network
Next we install the flannel network plugin; this is straightforward and no different from deploying an ordinary Pod:
$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
Note that if your nodes have more than one network interface, you need to use the --iface flag in kube-flannel.yml to specify the name of the host's internal-network interface; otherwise DNS resolution may fail. Add --iface=<iface-name> to the flanneld startup arguments:
args:
- --ip-masq
- --kube-subnet-mgr
- --iface=eth0
Once installation completes, kubectl get pods shows the state of the cluster components. If everything is in the Running state, then congratulations: your master node was installed successfully.
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   etcd-ydzs-master1                       1/1       Running   0          10m
kube-system   kube-apiserver-ydzs-master1             1/1       Running   0          10m
kube-system   kube-controller-manager-ydzs-master1    1/1       Running   0          10m
kube-system   kube-dns-86f4d74b45-f5595               3/3       Running   0          10m
kube-system   kube-flannel-ds-qxjs2                   1/1       Running   0          1m
kube-system   kube-proxy-vf5fg                        1/1       Running   0          10m
kube-system   kube-scheduler-ydzs-master1             1/1       Running   0          10m
After kubeadm finishes initializing, Pods are not scheduled onto the master node by default, so ordinary Pods cannot be tested just yet; a worker node has to be added first (though see the sketch below for single-machine testing).
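If you want to experiment on a single machine before adding any workers, you can remove the master taint so that ordinary Pods schedule onto it; a sketch for test clusters only, not for production:
$ kubectl taint nodes --all node-role.kubernetes.io/master-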
Add a Node
Once the same environment configuration, Docker installation, and kubeadm/kubelet/kubectl setup have been completed on the Node (192.168.11.2), we can run the kubeadm join command printed during initialization directly on that node, again adding the --ignore-preflight-errors=Swap flag:
$ kubeadm join 192.168.11.1:6443 --token 8xomlq.0cdf2pbvjs2gjho3 --discovery-token-ca-cert-hash sha256:92802317cb393682c1d1356c15e8b4ec8af2b8e5143ffd04d8be4eafb5fae368 --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks.
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "10.151.30.57:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.11.1:6443"
[discovery] Requesting info from "https://10.151.30.57:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.151.30.57:6443"
[discovery] Successfully established connection with API Server "10.151.30.57:6443"
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
We can see that the node has joined the cluster. Copy the master node's ~/.kube/config file to the corresponding location on this node, and the kubectl command-line tool will work here too; a sketch follows.
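For example, a sketch run on the new node, assuming root SSH access to the master:
$ mkdir -p $HOME/.kube
$ scp root@192.168.11.1:/etc/kubernetes/admin.conf $HOME/.kube/config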
$ kubectl get nodes
NAME           STATUS    ROLES     AGE       VERSION
evjfaxic       Ready     <none>    1h        v1.10.0
ydzs-master1   Ready     master    3h        v1.10.0
Create an nginx Pod to test the cluster. First pull the image:
docker pull registry.cn-hangzhou.aliyuncs.com/qinyujia-test/nginx
Then save the following manifest as hello.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: registry.cn-hangzhou.aliyuncs.com/qinyujia-test/nginx
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
  restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  sessionAffinity: ClientIP
  selector:
    app: nginx
  ports:
  - port: 80
    nodePort: 30080
kubectl create -f hello.yaml
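To verify, check that the Pod reaches the Running state and hit the NodePort; the IP below assumes the node from our hosts file:
$ kubectl get pods -o wide
$ kubectl get svc nginx-service
$ curl http://192.168.11.2:30080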