Preparation:

Time synchronization
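
For example, a minimal sketch using ntpdate against Aliyun's public NTP server (my choice of server and tool; chrony works just as well):

yum -y install ntpdate
ntpdate ntp.aliyun.com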

systemctl stop iptables.service
systemctl stop firewalld.service
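
Stopping the services only lasts until the next reboot; disabling them as well is a reasonable extra step (my addition, not in the original notes):

systemctl disable iptables.service
systemctl disable firewalld.service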

Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.09*

systemctl start docker
systemctl enable docker

vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://3s01e0d2.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}

systemctl daemon-reload
systemctl restart docker
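
A quick verification that the systemd cgroup driver took effect (my addition; this matters for Error 1 below):

docker info | grep -i cgroup
# expected output: Cgroup Driver: systemd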

1. Configure the Kubernetes yum repository.
Aliyun mirror index: https://opsx.alibaba.com/mirror

Save the following as /etc/yum.repos.d/kubernetes.repo:
[kubernetes]
name=kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
rpm --import rpm-package-key.gpg

Master node
2. Install:
yum install kubelet-1.14.3  kubeadm-1.14.3  kubectl-1.14.3 docker-ce-18.09*

echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
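
These echo commands do not survive a reboot, and the keys only exist once the br_netfilter module is loaded. A persistent sketch using the standard sysctl mechanism:

modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system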

kubeadm initialization

Ignore swap:
vim /etc/sysconfig/kubelet
[root@heaven00 ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
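
Alternatively, disabling swap entirely avoids both the flag and the init warning (standard commands, not what these notes actually did):

swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab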

Initialization command:

kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.14.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
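
The control-plane images can also be pre-pulled before init; kubeadm itself suggests this in its output further below:

kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.14.3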

Alternative image registries (quay.io, or the mirrorgooglecontainers namespace on Docker Hub) carry the same images. For example, the pulls for a v1.12.2 cluster would be:

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.12.2
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.12.2
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.12.2
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.12.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
docker pull coredns/coredns:1.2.2

[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.250.0.89:6443 --token aji2ef.f103v7o45h7hjeld \
--discovery-token-ca-cert-hash sha256:fd01c8ced3c470d3d1bf35350c8cb8bbba82fa808f14573654c9ffaf4e29fca6

Check the status:
[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   18m   v1.14.3
[root@k8s-master ~]#

Install the flannel network component:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-8686dcc4fd-27wg7             1/1     Running   0          26m
coredns-8686dcc4fd-ks8hc             1/1     Running   0          26m
etcd-k8s-master                      1/1     Running   0          25m
kube-apiserver-k8s-master            1/1     Running   0          25m
kube-controller-manager-k8s-master   1/1     Running   0          26m
kube-flannel-ds-amd64-kgfwd          1/1     Running   0          2m15s
kube-proxy-js82w                     1/1     Running   0          26m
kube-scheduler-k8s-master            1/1     Running   0          25m
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   27m   v1.14.3

===================================================================================
Worker node:

yum install kubelet-1.14.3  kubeadm-1.14.3  kubectl-1.14.3 docker-ce-18.09*
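
After the install, run the kubeadm join command that kubeadm init printed on the master; for example, with the token from the init output above:

kubeadm join 10.250.0.89:6443 --token aji2ef.f103v7o45h7hjeld \
    --discovery-token-ca-cert-hash sha256:fd01c8ced3c470d3d1bf35350c8cb8bbba82fa808f14573654c9ffaf4e29fca6

Once the join completes, the node shows up on the master: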

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   73m     v1.14.3
k8s-node1    Ready    <none>   2m25s   v1.14.3

==================================================
Pulling images during deployment is very slow inside China.

Someone wrote a handy tool for this; I tested it and it works well:
https://github.com/xuxinkun/littleTools#azk8spull
https://www.cnblogs.com/xuxinkun/p/11025020.html

=================================
Error 1:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

Fix:
Edit /etc/docker/daemon.json and add:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart Docker:

systemctl daemon-reload
systemctl restart docker

Error 2: when k8s.gcr.io is unreachable, kubeadm init fails with errors like the following:

[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.14.3: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: i/o timeout
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.14.3: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: i/o timeout
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.14.3: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: i/o timeout
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.14.3: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: i/o timeout
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: i/o timeout
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout
, error: exit status 1

Fix:
Pull the images from an alternative registry first, then re-tag them with the names k8s.gcr.io expects.

Pull the images:
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.14.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.14.3
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.14.3
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1

Re-tag them:
docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.14.3 k8s.gcr.io/kube-apiserver:v1.14.3
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.3 k8s.gcr.io/kube-controller-manager:v1.14.3
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.14.3 k8s.gcr.io/kube-scheduler:v1.14.3
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.14.3 k8s.gcr.io/kube-proxy:v1.14.3
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
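
A short loop can do the pull and re-tag in one pass; it is just a convenience wrapper around the exact commands above:

for img in kube-apiserver-amd64:v1.14.3 kube-controller-manager-amd64:v1.14.3 \
           kube-scheduler-amd64:v1.14.3 kube-proxy-amd64:v1.14.3 \
           pause:3.1 etcd:3.3.10; do
    docker pull mirrorgooglecontainers/$img
    # dropping the -amd64 suffix yields the k8s.gcr.io name
    docker tag mirrorgooglecontainers/$img k8s.gcr.io/${img/-amd64/}
done
# coredns lives under its own namespace, not mirrorgooglecontainers
docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1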

=====================================
Problem 3:
[root@heaven00 lib]# kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.14.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.14.3
[preflight] Running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@heaven00 lib]# rm -rf /etc/kubernetes /var/lib/etcd -rf
[root@heaven00 lib]# kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.14.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.14.3
[preflight] Running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@heaven00 lib]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@heaven00 lib]# netstat -tulnp | grep 10250
tcp6 0 0 :::10250 :::* LISTEN 17007/kubelet
[root@heaven00 lib]# systemctl stop kubelet
[root@heaven00 lib]# netstat -tulnp | grep 10250
[root@heaven00 lib]# kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.14.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.14.3
[preflight] Running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [heaven00 localhost] and IPs [10.139.165.32 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [heaven00 localhost] and IPs [10.139.165.32 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [heaven00 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.139.165.32]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.502305 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-check] Initial timeout of 40s passed.
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition

From the messages log (/var/log/messages):

Jun 20 12:04:33 heaven00 kubelet: E0620 12:04:33.878745 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:33 heaven00 kubelet: E0620 12:04:33.978932 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.035393 18155 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.079094 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.179282 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.235901 18155 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Unauthorized
Jun 20 12:04:34 heaven00 kubelet: W0620 12:04:34.274044 18155 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.279465 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.379615 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.435968 18155 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Unauthorized
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.479820 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.580004 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.601351 18155 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.636902 18155 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Unauthorized
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.680184 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.780347 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.836914 18155 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.880525 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.980697 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:35 heaven00 kubelet: E0620 12:04:35.036733 18155 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized
Jun 20 12:04:35 heaven00 kubelet: E0620 12:04:35.080854 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:35 heaven00 kubelet: E0620 12:04:35.181033 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:35 heaven00 kubelet: E0620 12:04:35.237132 18155 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Unauthorized
Jun 20 12:04:35 heaven00 kubelet: E0620 12:04:35.281221 18155 kubelet.go:2244] node "heaven00" not found
Jun 20 12:04:37 heaven00 telegraf: 2019-06-20T04:04:37Z E! [inputs.ping]: Error in plugin: host www.google.com: signal: killed
Jun 20 12:04:39 heaven00 systemd: Created slice libcontainer_25606_systemd_test_default.slice.
Jun 20 12:04:39 heaven00 systemd: Removed slice libcontainer_25606_systemd_test_default.slice.
Jun 20 12:04:39 heaven00 systemd: Created slice libcontainer_25606_systemd_test_default.slice.
Jun 20 12:04:39 heaven00 systemd: Removed slice libcontainer_25606_systemd_test_default.slice.
Jun 20 12:04:39 heaven00 systemd: Created slice libcontainer_25611_systemd_test_default.slice.
Jun 20 12:04:39 heaven00 systemd: Removed slice libcontainer_25611_systemd_test_default.slice.
Jun 20 12:04:39 heaven00 systemd: Created slice libcontainer_25611_systemd_test_default.slice.
Jun 20 12:04:39 heaven00 systemd: Removed slice libcontainer_25611_systemd_test_default.slice.
Jun 20 12:04:39 heaven00 systemd: Created slice libcontainer_25629_systemd_test_default.slice.

kubeadm also reported the kubelet health endpoint as unreachable:
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Fix:
Redeployed on a fresh machine, where the problem did not recur.
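
Before falling back to a fresh machine, wiping the previous kubeadm state is worth a try (standard cleanup steps, not what was done here):

kubeadm reset
rm -rf /etc/cni/net.d $HOME/.kube/config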

============================================================================================================
After kubeadm init succeeds, it prints the command for nodes to join the master, for example:

kubeadm join 10.239.44.68:6443 --token 8jxvj4.5lop20zjbu48h6kl \
--discovery-token-ca-cert-hash sha256:1ca8f0a098601b94d7c2a9b4a3758ff0880a0213db813336dec0e9272ed55a78
Note: the token generated by kubeadm init is only valid for 24 hours. If your node hits the following error when running kubeadm join:

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Unauthorized
check on the master whether the token you are using is still valid: kubeadm token list

TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                 EXTRA GROUPS
49y4v3.jxq5w76jj5hh028u   <invalid>   2019-04-13T15:00:47-04:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
8jxvj4.5lop20zjbu48h6kl   23h         2019-04-25T10:21:41-04:00   authentication,signing   <none>                                                      system:bootstrappers:kubeadm:default-node-token
Generate a token that never expires:

kubeadm token create --ttl 0 --print-join-command
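
If the CA cert hash for the join command is ever needed on its own, it can be recomputed from the cluster CA (the standard openssl pipeline documented for kubeadm):

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'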
============================================================================================================
