Kubernetes Architecture
Nodes
What is a node?
A node is a worker machine in Kubernetes, previously known as a minion.
A node may be a VM or physical machine, depending on the cluster.
Each node has the services necessary to run pods and is managed by the master components.
The services on a node include Docker, kubelet and kube-proxy.
See The Kubernetes Node section in the architecture design doc for more details.
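As a quick orientation, the node objects in a running cluster can be listed with kubectl. This is a generic sketch; the flags shown are standard kubectl options.
# List every node object registered with the apiserver
kubectl get nodes
# Include addresses, OS image and container runtime in the listing
kubectl get nodes -o wide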
Node Status
A node’s status contains the following information:
Addresses
The usage of these fields varies depending on your cloud provider or bare metal configuration.
- HostName: The hostname as reported by the node’s kernel. Can be overridden via the kubelet --hostname-override parameter.
- ExternalIP: Typically the IP address of the node that is externally routable (available from outside the cluster).
- InternalIP: Typically the IP address of the node that is routable only within the cluster.
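To inspect these address fields on a live node, a jsonpath query like the following can be used; this is only a sketch, and <node-name> is a placeholder.
# Print the type and address of each entry in .status.addresses
kubectl get node <node-name> -o jsonpath='{range .status.addresses[*]}{.type}{"\t"}{.address}{"\n"}{end}'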
Phase
Deprecated: node phase is no longer used.
Condition
The conditions field describes the status of all Running nodes.
| Node Condition | Description |
| --- | --- |
| OutOfDisk | True if there is insufficient free space on the node for adding new pods, otherwise False |
| Ready | True if the node is healthy and ready to accept pods, False if the node is not healthy and is not accepting pods, and Unknown if the node controller has not heard from the node in the last 40 seconds |
| MemoryPressure | True if pressure exists on the node memory – that is, if the node memory is low; otherwise False |
| DiskPressure | True if pressure exists on the disk size – that is, if the disk capacity is low; otherwise False |
| NetworkUnavailable | True if the network for the node is not correctly configured, otherwise False |
The node condition is represented as a JSON object.
For example, the following response describes a healthy node.
"conditions": [
{
"kind": "Ready",
"status": "True"
}
]
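The same information can be read from a live cluster. The jsonpath filter below is a sketch; <node-name> is a placeholder.
# Show the status of the Ready condition for a node
kubectl get node <node-name> -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'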
If the Status of the Ready condition is “Unknown” or “False” for longer than the pod-eviction-timeout (an argument passed to the kube-controller-manager), all of the Pods on the node are scheduled for deletion by the Node Controller.
The default eviction timeout duration is five minutes.
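The timeout is set on the kube-controller-manager. The invocation below is only a sketch showing the flag with its five-minute default; other required flags are omitted.
# Eviction timeout shown at its default value for illustration
kube-controller-manager --pod-eviction-timeout=5m0s ...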
In some cases when the node is unreachable, the apiserver is unable to communicate with the kubelet on it. The decision to delete the pods cannot be communicated to the kubelet until it re-establishes communication with the apiserver.
In the meantime, the pods which are scheduled for deletion may continue to run on the partitioned node.
In versions of Kubernetes prior to 1.5, the node controller would force delete these unreachable pods from the apiserver.
However, in 1.5 and higher, the node controller does not force delete pods until it is confirmed that they have stopped running in the cluster.
One can see these pods which may be running on an unreachable node as being in the “Terminating” or “Unknown” states. In cases where Kubernetes cannot deduce from the underlying infrastructure if a node has permanently left a cluster, the cluster administrator may need to delete the node object by hand.
Deleting the node object from Kubernetes causes all the Pod objects running on it to be deleted from the apiserver, freeing up their names.
Version 1.8 introduces an alpha feature that automatically creates taints that represent conditions.
To enable this behavior, pass an additional feature gate flag --feature-gates=...,TaintNodesByCondition=true to the API server, controller manager, and scheduler.
When TaintNodesByCondition is enabled, the scheduler ignores conditions when considering a Node; instead it looks at the Node’s taints and a Pod’s tolerations.
Now users can choose between the old scheduling model and a new, more flexible scheduling model.
A Pod that does not have any tolerations gets scheduled according to the old model. But a Pod that tolerates the taints of a particular Node can be scheduled on that Node.
Note that because of a small delay, usually less than one second, between the time when a condition is observed and a taint is created, it’s possible that enabling this feature will slightly increase the number of Pods that are successfully scheduled but rejected by the kubelet.
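With the feature gate enabled, the condition-derived taints can be inspected directly on the Node object. The commands below are a sketch; the exact taint keys created by the alpha feature may differ by release, and <node-name> is a placeholder.
# Illustrative: the gate must be passed to the API server, controller manager and scheduler
kube-scheduler --feature-gates=...,TaintNodesByCondition=true
# Inspect whichever taints the node controller has added for the node's conditions
kubectl describe node <node-name> | grep -A3 Taints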
Capacity
Describes the resources available on the node: CPU, memory and the maximum number of pods that can be scheduled onto the node.
Info
General information about the node,
such as kernel version, Kubernetes version (kubelet and kube-proxy version), Docker version (if used), OS name.
The information is gathered by Kubelet from the node.
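kubectl describe node surfaces both the capacity and the general node info in one view; the exact output fields are version-dependent, and <node-name> is a placeholder.
# Shows Capacity (cpu, memory, pods) and System Info (kernel, kubelet, runtime versions)
kubectl describe node <node-name>
# Or pull just the capacity block as JSON
kubectl get node <node-name> -o jsonpath='{.status.capacity}'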
Management
Unlike pods and services, a node is not inherently created by Kubernetes:
it is created externally by cloud providers like Google Compute Engine, or exists in your pool of physical or virtual machines.
What this means is that when Kubernetes creates a node, it is really just creating an object that represents the node. After creation, Kubernetes will check whether the node is valid or not.
For example, if you try to create a node from the following content:
{
"kind": "Node",
"apiVersion": "v1",
"metadata": {
"name": "10.240.79.157",
"labels": {
"name": "my-first-k8s-node"
}
}
}
Kubernetes will create a node object internally (the representation), and validate the node by health checking based on the metadata.name field (we assume metadata.name can be resolved).
If the node is valid, i.e. all necessary services are running, it is eligible to run a pod;
otherwise, it will be ignored for any cluster activity until it becomes valid.
Note that Kubernetes will keep the object for the invalid node unless it is explicitly deleted by the client, and it will keep checking to see if it becomes valid.
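If the JSON above were saved to a file, the object could be submitted to the apiserver as sketched below; the file name is an assumption, and the node name is taken from the example.
# Submit the Node object (the representation); Kubernetes then health-checks it
kubectl create -f my-first-k8s-node.json
# Watch whether the node becomes Ready (valid) or remains NotReady (ignored)
kubectl get node 10.240.79.157 -w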
Currently, there are three components that interact with the Kubernetes node interface: node controller, kubelet, and kubectl.
Node Controller
The node controller is a Kubernetes master component which manages various aspects of nodes.
The node controller has multiple roles in a node’s life.
1. Assigning a CIDR block to the node when it is registered (if CIDR assignment is turned on).
2. Keeping the node controller’s internal list of nodes up to date with the cloud provider’s list of available machines. When running in a cloud environment, whenever a node is unhealthy, the node controller asks the cloud provider if the VM for that node is still available. If not, the node controller deletes the node from its list of nodes.
3. Monitoring the nodes’ health.
The node controller is responsible for updating the NodeReady condition of NodeStatus to Condition Unknown when a node becomes unreachable (i.e. the node controller stops receiving heartbeats for some reason, e.g. due to the node being down),
and then later evicting all the pods from the node (using graceful termination) if the node continues to be unreachable. (The default timeouts are 40s to start reporting Condition Unknown and 5m after that to start evicting pods.)
The node controller checks the state of each node every --node-monitor-period seconds.
In Kubernetes 1.4, we updated the logic of the node controller to better handle cases when a large number of nodes have problems with reaching the master (e.g. because the master has networking problem).
Starting with 1.4, the node controller will look at the state of all nodes in the cluster when making a decision about pod eviction.
In most cases, node controller limits the eviction rate to --node-eviction-rate
(default 0.1) per second, meaning it won’t evict pods from more than 1 node per 10 seconds.
The node eviction behavior changes when a node in a given availability zone becomes unhealthy.
The node controller checks what percentage of nodes in the zone are unhealthy (NodeReady condition is ConditionUnknown or ConditionFalse) at the same time. If the fraction of unhealthy nodes is at least --unhealthy-zone-threshold
(default 0.55) then the eviction rate is reduced:
if the cluster is small (i.e. has less than or equal to --large-cluster-size-threshold
nodes - default 50) then evictions are stopped, otherwise the eviction rate is reduced to --secondary-node-eviction-rate
(default 0.01) per second.
The reason these policies are implemented per availability zone is because one availability zone might become partitioned from the master while the others remain connected.
If your cluster does not span multiple cloud provider availability zones, then there is only one availability zone (the whole cluster).
A key reason for spreading your nodes across availability zones is so that the workload can be shifted to healthy zones when one entire zone goes down.
Therefore, if all nodes in a zone are unhealthy, the node controller evicts at the normal --node-eviction-rate.
The corner case is when all zones are completely unhealthy (i.e. there are no healthy nodes in the cluster). In such case, the node controller assumes that there’s some problem with master connectivity and stops all evictions until some connectivity is restored.
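All of the thresholds discussed above are kube-controller-manager flags. The invocation below is a sketch showing them at the defaults stated in the text; other required flags are omitted.
# Defaults shown explicitly for illustration
kube-controller-manager \
  --node-eviction-rate=0.1 \
  --secondary-node-eviction-rate=0.01 \
  --unhealthy-zone-threshold=0.55 \
  --large-cluster-size-threshold=50 \
  ...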
Starting in Kubernetes 1.6, the NodeController is also responsible for evicting pods that are running on nodes with NoExecute
taints, when the pods do not tolerate the taints.
Additionally, as an alpha feature that is disabled by default, the NodeController is responsible for adding taints corresponding to node problems like node unreachable or not ready.
See this documentation for details about NoExecute
taints and the alpha feature.
Starting in version 1.8, the node controller can be made responsible for creating taints that represent Node conditions. This is an alpha feature of version 1.8.
Self-Registration of Nodes
When the kubelet flag --register-node
is true (the default), the kubelet will attempt to register itself with the API server.
This is the preferred pattern, used by most distros.
For self-registration, the kubelet is started with the following options:
- --kubeconfig - Path to credentials to authenticate itself to the apiserver.
- --cloud-provider - How to talk to a cloud provider to read metadata about itself.
- --register-node - Automatically register with the API server.
- --register-with-taints - Register the node with the given list of taints (comma separated <key>=<value>:<effect>). No-op if register-node is false.
- --node-ip - IP address of the node.
- --node-labels - Labels to add when registering the node in the cluster.
- --node-status-update-frequency - Specifies how often kubelet posts node status to master.
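A self-registering kubelet might therefore be started roughly as follows. This is a sketch only: the paths, provider, taint, IP, label, and frequency are illustrative placeholders, and a real invocation needs additional flags for your environment.
kubelet \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --cloud-provider=gce \
  --register-node=true \
  --register-with-taints=dedicated=experimental:NoSchedule \
  --node-ip=10.240.79.157 \
  --node-labels=env=staging \
  --node-status-update-frequency=10s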
Currently, any kubelet is authorized to create/modify any node resource, but in practice it only creates/modifies its own.
(In the future, we plan to only allow a kubelet to modify its own node resource.)
Manual Node Administration
A cluster administrator can create and modify node objects.
If the administrator wishes to create node objects manually, set the kubelet flag --register-node=false.
The administrator can modify node resources (regardless of the setting of --register-node).
Modifications include setting labels on the node and marking it unschedulable.
Labels on nodes can be used in conjunction with node selectors on pods to control scheduling, e.g. to constrain a pod to only be eligible to run on a subset of the nodes.
Marking a node as unschedulable will prevent new pods from being scheduled to that node, but will not affect any existing pods on the node.
This is useful as a preparatory step before a node reboot, etc.
For example, to mark a node unschedulable, run this command:
kubectl cordon $NODENAME
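The related day-to-day commands are sketched below; the label key and value are illustrative.
# Mark the node schedulable again after maintenance
kubectl uncordon $NODENAME
# Add a label that pod nodeSelectors can match against
kubectl label nodes $NODENAME disktype=ssd
# Confirm the scheduling state (look for SchedulingDisabled while cordoned)
kubectl get nodes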
Note that pods which are created by a DaemonSet controller bypass the Kubernetes scheduler, and do not respect the unschedulable attribute on a node.
The assumption is that daemons belong on the machine even if it is being drained of applications in preparation for a reboot.
Node capacity
The capacity of the node (number of cpus and amount of memory) is part of the node object.
Normally, nodes register themselves and report their capacity when creating the node object.
If you are doing manual node administration, then you need to set node capacity when adding a node.
The Kubernetes scheduler ensures that there are enough resources for all the pods on a node.
It checks that the sum of the requests of containers on the node is no greater than the node capacity.
This check includes all containers started by the kubelet, but not containers started directly by Docker, nor processes not running in containers.
If you want to explicitly reserve resources for non-pod processes, you can create a placeholder pod.
Use the following template:
apiVersion: v1
kind: Pod
metadata:
  name: resource-reserver
spec:
  containers:
  - name: sleep-forever
    image: k8s.gcr.io/pause:0.8.0
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
Set the cpu and memory values to the amount of resources you want to reserve.
Place the file in the manifest directory (--config=DIR flag of kubelet).
Do this on each kubelet where you want to reserve resources.
API Object
Node is a top-level resource in the Kubernetes REST API. More details about the API object can be found at: Node API object.
Master-Node communication
This document catalogs the communication paths between the master (really the apiserver) and the Kubernetes cluster. The intent is to allow users to customize their installation to harden the network configuration such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud provider).
Cluster -> Master
All communication paths from the cluster to the master terminate at the apiserver (none of the other master components are designed to expose remote services).
In a typical deployment, the apiserver is configured to listen for remote connections on a secure HTTPS port (443) with one or more forms of client authentication enabled.
One or more forms of authorization should be enabled, especially if anonymous requests or service account tokens are allowed.
Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the apiserver along with valid client credentials.
For example, on a default GCE deployment, the client credentials provided to the kubelet are in the form of a client certificate.
See kubelet TLS bootstrapping for automated provisioning of kubelet client certificates.
Pods that wish to connect to the apiserver can do so securely by leveraging a service account so that Kubernetes will automatically inject the public root certificate and a valid bearer token into the pod when it is instantiated.
The kubernetes
service (in all namespaces) is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver.
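From inside a pod, that injected material can be used directly. The sketch below assumes the default service account token mount path, which is standard but worth verifying in your cluster.
# Run inside a pod: the service account material is mounted at a well-known path
SA=/var/run/secrets/kubernetes.io/serviceaccount
curl --cacert $SA/ca.crt \
  -H "Authorization: Bearer $(cat $SA/token)" \
  https://kubernetes.default.svc/api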
The master components communicate with the cluster apiserver over the insecure (not encrypted or authenticated) port.
This port is typically only exposed on the localhost interface of the master machine, so that the master components, all running on the same machine, can communicate with the cluster apiserver.
Over time, the master components will be migrated to use the secure port with authentication and authorization (see #13598).
As a result, the default operating mode for connections from the cluster (nodes and pods running on the nodes) to the master is secured by default and can run over untrusted and/or public networks.
Master -> Cluster
There are two primary communication paths from the master (apiserver) to the cluster.
1. The first is from the apiserver to the kubelet process which runs on each node in the cluster.
2. The second is from the apiserver to any node, pod, or service through the apiserver’s proxy functionality.
apiserver -> kubelet
The connections from the apiserver to the kubelet are used for:
- Fetching logs for pods.
- Attaching (through kubectl) to running pods.
- Providing the kubelet’s port-forwarding functionality.
These connections terminate at the kubelet’s HTTPS endpoint.
By default, the apiserver does not verify the kubelet’s serving certificate, which makes the connection subject to man-in-the-middle attacks, and unsafe to run over untrusted and/or public networks.
To verify this connection, use the --kubelet-certificate-authority
flag to provide the apiserver with a root certificate bundle to use to verify the kubelet’s serving certificate.
If that is not possible, use SSH tunneling between the apiserver and kubelet if required to avoid connecting over an untrusted or public network.
Finally, Kubelet authentication and/or authorization should be enabled to secure the kubelet API.
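Concretely, the verification is enabled with an apiserver flag; the certificate path below is a placeholder.
# Give the apiserver a CA bundle for verifying kubelet serving certificates
kube-apiserver --kubelet-certificate-authority=/etc/kubernetes/pki/kubelet-ca.crt ...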
apiserver -> nodes, pods, and services
The connections from the apiserver to a node, pod, or service default to plain HTTP connections and are therefore neither authenticated nor encrypted.
They can be run over a secure HTTPS connection by prefixing https:
to the node, pod, or service name in the API URL, but they will not validate the certificate provided by the HTTPS endpoint nor provide client credentials so while the connection will be encrypted, it will not provide any guarantees of integrity.
These connections are not currently safe to run over untrusted and/or public networks.
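For example, the https: prefix is placed in front of the name segment of the proxy URL. The service name, namespace, port, and path below are illustrative.
# Proxy to a service over HTTPS; encrypted, but the endpoint's certificate is not verified
kubectl get --raw /api/v1/namespaces/default/services/https:my-service:443/proxy/healthz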
SSH Tunnels
Google Kubernetes Engine uses SSH tunnels to protect the Master -> Cluster communication paths.
In this configuration, the apiserver initiates an SSH tunnel to each node in the cluster (connecting to the ssh server listening on port 22) and passes all traffic destined for a kubelet, node, pod, or service through the tunnel.
This tunnel ensures that the traffic is not exposed outside of the private GCE network in which the cluster is running.
Concepts Underlying the Cloud Controller Manager
Cloud Controller Manager
The cloud controller manager (CCM) concept (not to be confused with the binary) was originally created to allow cloud specific vendor code and the Kubernetes core to evolve independent of one another.
The cloud controller manager runs alongside other master components such as the Kubernetes controller manager, the API server, and scheduler. It can also be started as a Kubernetes addon, in which case, it runs on top of Kubernetes.
The cloud controller manager’s design is based on a plugin mechanism that allows new cloud providers to integrate with Kubernetes easily by using plugins.
There are plans in place for on-boarding new cloud providers on Kubernetes, and for migrating cloud providers from the old model to the new CCM model.
This document discusses the concepts behind the cloud controller manager and gives details about its associated functions.
Design
Without the cloud controller manager, Kubernetes and the cloud provider are integrated through several different components:
- Kubelet
- Kubernetes controller manager
- Kubernetes API server
The CCM consolidates all of the cloud-dependent logic from the preceding three components to create a single point of integration with the cloud.
Components of the CCM
The CCM breaks away some of the functionality of Kubernetes controller manager (KCM) and runs it as a separate process. Specifically, it breaks away those controllers in the KCM that are cloud dependent. The KCM has the following cloud dependent controller loops:
- Node controller
- Volume controller
- Route controller
- Service controller
In version 1.8, the CCM currently runs the following controllers from the preceding list:
- Node controller
- Route controller
- Service controller
Additionally, it runs another controller called the PersistentVolumeLabels controller. This controller is responsible for setting the zone and region labels on PersistentVolumes created in GCP and AWS clouds.
Volume controller was deliberately chosen to not be a part of CCM.
Due to the complexity involved and due to the existing efforts to abstract away vendor specific volume logic, it was decided that volume controller will not be moved to CCM.
The original plan to support volumes using CCM was to use Flex volumes to support pluggable volumes. However, a competing effort known as CSI is being planned to replace Flex.
Considering these dynamics, we decided to have an intermediate stopgap measure until CSI becomes ready.
Work is in progress by the cloud provider working group (wg-cloud-provider) to enable PersistentVolume support using CCM. See kubernetes/kubernetes#52371.
Functions of the CCM
The CCM inherits its functions from components of Kubernetes that are dependent on a cloud provider. This section is structured based on the components from which CCM inherits its functions.
1. Kubernetes controller manager
The majority of the CCM’s functions are derived from the KCM. As mentioned in the previous section, the CCM runs the following control loops:
- Node controller
- Route controller
- Service controller
- PersistentVolumeLabels controller
Node controller
The Node controller is responsible for initializing a node by obtaining information about the nodes running in the cluster from the cloud provider.
The node controller performs the following functions:
- Initialize a node with cloud specific zone/region labels.
- Initialize a node with cloud specific instance details, for example, type and size.
- Obtain the node’s network addresses and hostname.
- In case a node becomes unresponsive, check the cloud to see if the node has been deleted from the cloud. If the node has been deleted from the cloud, delete the Kubernetes Node object.
Route controller
The Route controller is responsible for configuring routes in the cloud appropriately so that containers on different nodes in the Kubernetes cluster can communicate with each other.
The route controller is only applicable for Google Compute Engine clusters.
Service Controller
The Service controller is responsible for listening to service create, update, and delete events.
Based on the current state of the services in Kubernetes, it configures cloud load balancers (such as ELB, or Google LB) to reflect the state of the services in Kubernetes.
Additionally, it ensures that service backends for cloud load balancers are up to date.
PersistentVolumeLabels controller
The PersistentVolumeLabels controller applies labels on AWS EBS, GCE PD volumes when they are created.
This removes the need for users to manually set the labels on these volumes.
These labels are essential for the scheduling of pods, as these volumes are constrained to work only within the region/zone that they are in, and therefore any Pod using these volumes needs to be scheduled in the same region/zone.
The PersistentVolumeLabels controller was created specifically for the CCM; that is, it did not exist before the CCM was created.
This was done to move the PV labelling logic from the Kubernetes API server (where it was an admission controller) to the CCM.
It does not run on the KCM.
2. Kubelet
The Node controller contains the cloud-dependent functionality of the kubelet.
Prior to the introduction of the CCM, the kubelet was responsible for initializing a node with cloud-specific details such as IP addresses, region/zone labels and instance type information. The introduction of the CCM has moved this initialization operation from the kubelet into the CCM.
In this new model, the kubelet initializes a node without cloud-specific information.
However, it adds a taint to the newly created node that makes the node unschedulable until the CCM initializes the node with cloud-specific information, and then removes this taint.
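The taint in question can be observed on a freshly registered node before the CCM has processed it. The taint key shown in the comment is the one commonly used for this purpose, but treat it as an assumption for your version; <node-name> is a placeholder.
# A node awaiting cloud initialization typically carries a taint such as
#   node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
kubectl describe node <node-name> | grep -A2 Taints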
3. Kubernetes API server
The PersistentVolumeLabels controller moves the cloud-dependent functionality of the Kubernetes API server to the CCM as described in the preceding sections.
Plugin mechanism
The cloud controller manager uses Go interfaces to allow implementations from any cloud to be plugged in.
Specifically, it uses the CloudProvider Interface defined here
The implementation of the four shared controllers highlighted above, and some scaffolding along with the shared cloudprovider interface, will stay in the Kubernetes core, but implementations specific to cloud providers will be built outside of the core, and implement interfaces defined in the core.
For more information about developing plugins, see Developing Cloud Controller Manager.
Authorization
This section breaks down the access required on various API objects by the CCM to perform its operations.
Node Controller
The Node controller only works with Node objects. It requires full access to get, list, create, update, patch, watch, and delete Node objects.
v1/Node:
- Get
- List
- Create
- Update
- Patch
- Watch
Route controller
The route controller listens to Node object creation and configures routes appropriately. It requires get access to Node objects.
v1/Node:
- Get
Service controller
The service controller listens to Service object create, update and delete events and then configures endpoints for those Services appropriately.
To access Services, it requires list, and watch access.
To update Services, it requires patch and update access.
To set up endpoints for the Services, it requires access to create, list, get, watch, and update.
v1/Service:
- List
- Get
- Watch
- Patch
- Update
PersistentVolumeLabels controller
The PersistentVolumeLabels controller listens on PersistentVolume (PV) create events and then updates them.
This controller requires access to list, watch, get and update PVs.
v1/PersistentVolume:
- Get
- List
- Watch
- Update
Others
The implementation of the core of CCM requires access to create events, and to ensure secure operation, it requires access to create ServiceAccounts.
v1/Event:
- Create
- Patch
- Update
v1/ServiceAccount:
- Create
The RBAC ClusterRole for the CCM looks like this:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: cloud-controller-manager
rules:
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - serviceaccounts
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - persistentvolumes
  verbs:
  - get
  - list
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - create
  - get
  - list
  - watch
  - update
Vendor Implementations
The following cloud providers have implemented CCMs for their own clouds.