https://www.joyent.com/blog/triton-kubernetes-multicloud

While running an experimental Kubernetes cluster is fairly simple, operationalizing K8s environments for production is not for the faint of heart. Many of us are also looking to expand our K8s environments across multiple clouds, private or public, for scalability, workload distribution, disaster recovery, and more. However, managing cross-cloud K8s environments from a single control plane is quite challenging and lacks mature out-of-the-box solutions.

In this blog post we introduce Triton Kubernetes, the first truly multi-cloud Kubernetes solution, which we are working on here at Joyent, and walk through the steps to get up and running with a unified set of Kubernetes clusters across four different public clouds. We will also show how simple it is to deploy an app to environments managed by this multi-cloud K8s solution.

Please actively follow the project for updates on roadmap items like backup and DR, workload migration, federation, full-stack monitoring, host auto-scaling, alerting, and notifications.

The current solution version supports the following public clouds: Triton, AWS, Azure, and GCP. Triton running on-premises as a private or hybrid cloud is also supported. Bare metal support is limited at this time, but available through professional services engagements. In the near future, we plan to add automation to run the solution seamlessly on bare metal servers.

Getting started in three simple steps:

You can get up and running with production-grade Kubernetes across multiple clouds by following the steps listed below. Please see the Quickstart section below for details.

Step one: set up the pre-requisites

To start, you must create a Triton account and install the Triton CLI, Terraform, and the Kubernetes CLI. To install the solution across multiple clouds like AWS, Azure, or GCP, you must already have, or create, accounts with those cloud providers.

Triton is Joyent's hybrid and open source cloud, and Terraform is an open source tool that enables you to safely and predictably create, change, and improve production infrastructure. We use Terraform to provision virtual machines, set up root access, and install Python.

Step two: create a Global K8s Cluster Manager.

The next step will be to create a highly available Global Cluster Manager, which provides the cross-cloud control plane. This Manager provides scaling, push-button upgrades, role-based access control, CI/CD integration, and monitoring of cluster health. Triton Kubernetes automates the provisioning of virtual machine hosts, Docker Engines, the database server, and networks, and sets up the global cluster manager for you. By default, the installer deploys the control plane on Triton; however, you can modify the underlying Terraform templates to install on any cloud.

Step three: create K8s Environmental Clusters.

The final step will be to provision Kubernetes environments in any region/data center on any cloud (with a set of published API endpoints) and have them managed by the global cluster manager. Triton Kubernetes does this for you as well. Each environmental cluster is self-sustaining, with built-in features like high availability, auto-healing, clustering of etcd and orchestration services, and the ability to run specific services on dedicated hosts. The installer automates the provisioning of VMs, Docker, and networking based on the chosen cloud provider, and leverages Rancher as the underlying middleware to provision production-grade, supportable (and easily upgradable) Kubernetes environments.

Quickstart

Follow along as we walk through these three easy steps in detail below. Complete them on your own, leveraging our free trial offer to get started on Triton. You will have your very own, 100% open source, production-grade, multi-cloud Kubernetes stack.

NOTE: You may encounter an error if you try to run this demo immediately after signing up for the free trial, as we automatically set provisioning limits. Contact support to request a limit increase or to remove any existing instances.

Before you get going, you can also watch a brief demo:

Pre-requisites

Install the Triton CLI

In order to install the Triton CLI, you must have a Triton account. It's important that you have your billing information handy and add an SSH key to your account. If you need instructions on how to generate an SSH key, read our documentation.

  1. Install Node.js and run npm install -g triton to install the Triton CLI.
  2. triton uses profiles to store access information. You'll need to set up profiles for relevant data centers.
    • triton profile create will give a step-by-step walkthrough of how to create a profile.
    • Choose a profile to use for your Triton Kubernetes setup.
  3. Get into the Triton environment with eval $(triton env <profile name>).
  4. Run triton info to test your configuration.

Install Terraform

Triton supports all recent versions of Terraform. Download Terraform and unzip the package.

Terraform runs as a single binary named terraform. The final step is to make sure that the terraform binary is available on the PATH. See this page for instructions on setting the PATH on Linux and Mac.
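For example, a common way to put the binary on the PATH on Linux or macOS is sketched below. The ~/bin location and ~/.bash_profile are assumptions for this sketch; use your own extraction directory and shell profile:

```shell
# A minimal sketch, assuming the terraform binary was unzipped into ~/bin
# (adjust the path to wherever you extracted it).
mkdir -p "$HOME/bin"
export PATH="$PATH:$HOME/bin"                                  # current shell
echo 'export PATH="$PATH:$HOME/bin"' >> "$HOME/.bash_profile"  # future shells
```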

Test your installation by running terraform. You should see an output similar to:

$ terraform
Usage: terraform [--version] [--help] <command> [args]

The available commands for execution are listed below.
The most common, useful commands are shown first, followed by
less common or more advanced commands. If you're just getting
started with Terraform, stick with the common commands. For the
other commands, please read the help and docs before usage.

Common commands:
    apply      Builds or changes infrastructure
    console    Interactive console for Terraform interpolations
# ...

If terraform is not found in your $PATH, the Triton Kubernetes installer will download it automatically.

Install the Kubernetes CLI

This tool is not required for provisioning, but is needed to connect to a provisioned Kubernetes environment. There are different ways to install kubectl, but the simplest is via curl:

# OS X
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl

# Linux
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

# Windows
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/windows/amd64/kubectl.exe

Create a Global K8s Cluster Manager

Triton Kubernetes uses triton and terraform to set up a global cluster manager and Kubernetes environmental clusters. Once those have been installed, you can download the Triton Kubernetes package, run the script triton-kubernetes.sh -c, and answer the prompted questions to start a cluster manager.

triton-kubernetes.sh -c can be passed an optional conf file for a silent install.
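A silent-install conf file is just a set of pre-recorded answers to the same prompts shown below. The snippet here is a hypothetical sketch only; the variable names are illustrative, not the script's actual schema, so check the repository's examples for the real key names:

```
CLUSTER_MANAGER_NAME="gclustermanager"   # hypothetical key: name for the cluster manager
HA_MODE="yes"                            # hypothetical key: HA vs non-HA setup
TRITON_NETWORK="Joyent-SDC-Public"       # hypothetical key: Triton network to use
TRITON_PACKAGE="k4-highcpu-kvm-3.75G"    # hypothetical key: package for the nodes
```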

Default values will be shown in parentheses and if no input is provided, defaults will be used.

$ git clone https://github.com/joyent/triton-kubernetes.git -b v0.8
Cloning into 'triton-kubernetes'...
remote: Counting objects: 574, done.
remote: Compressing objects: 100% (67/67), done.
remote: Total 574 (delta 43), reused 50 (delta 18), pack-reused 487
Receiving objects: 100% (574/574), 6.57 MiB | 2.46 MiB/s, done.
Resolving deltas: 100% (260/260), done.
Note: checking out 'c77a983d85bfa10033a18e1ad2b77fa76692caab'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

$ cd triton-kubernetes
$ ./triton-kubernetes.sh -c
Downloading Terraform v0.10.8 ...
Extracting Terraform executable
Using /Users/<username>/work/triton-kubernetes/bin/terraform ...

If terraform is not found in $PATH, it will be downloaded.

If a triton profile has been set (eval "$(triton env)"), you will not be prompted for triton account details.

Your Triton account login name: <username> (example fayazg)
The Triton CloudAPI endpoint URL: <region name> (example https://us-east-1.api.joyent.com)
Your Triton account key id: <keyid> (example 2c:53:bc:63:97:9e:79:3f:91:35:5e:f4:c8:23:89:37)

Global cluster managers run on Triton so the first few questions prompted will be account related. This is the same information provided by running triton profile get.

Name your Global Cluster Manager: (global-cluster) gclustermanager

Provide a name for the Global Cluster Manager and press Enter.

Do you want to set up the Global Cluster Manager in HA mode? (yes | no) yes

The global cluster manager can run in an HA or non-HA configuration. In HA mode, there will be a two-node cluster manager with a database, as shown in the architectural diagram above.

Which Triton networks should be used for this environment: (Joyent-SDC-Public)

The Triton CLI is used here to pull all the active networks (public, private, fabric/SDN) for the current data center defined in the Triton profile. Choose which networks the global cluster manager should use. We are going to use the default, so press Enter.

Which Triton package should be used for Global Cluster Manager server(s): (k4-highcpu-kvm-1.75G) k4-highcpu-kvm-3.75G
Which Triton package should be used for Global Cluster Manager database server: (k4-highcpu-kvm-1.75G) k4-highcpu-kvm-3.75G

Since the global cluster manager is set up in HA mode, Triton Kubernetes will prompt for the package names to use for the two HA nodes and the database server. Here we are going to use the k4-highcpu-kvm-3.75G package for both. For a production install, pick instance sizes appropriate to the expected workload; you may need larger resource packages.

docker-engine install script: (https://releases.rancher.com/install-docker/1.12.sh)
############################################################
Cluster Manager gclustermanager will be created on Triton.
gclustermanager will be running in HA configuration and provision three Triton machines ...
gclustermanager-master-1 k4-highcpu-kvm-3.75G
gclustermanager-master-2 k4-highcpu-kvm-3.75G
gclustermanager-mysqldb  k4-highcpu-kvm-3.75G

Do you want to start the setup? (yes | no) yes

The last prompt before verification of inputs is the docker-engine installation script that should be used. Leave the default here and press Enter.

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Outputs:

masters = [
    <cluster manager node1 ip>,
    <cluster manager node2 ip>
]

Cluster Manager gclustermanager has been started.
This is an HA Active/Active setup so you can use either of the IP addresses.
http://<cluster manager node1 ip>:8080/settings/env

Next step is adding Kubernetes environments to be managed here.
To start your first environment, run:
./triton-kubernetes.sh -e

Once verification is finished, terraform will download the modules, initialize the provider plugins, provision three machines, and configure a two-node HA cluster manager. Next, we will create two Kubernetes clusters (one on Triton and another on GCP), which will be managed by this global cluster manager.

Create an HA, 5 node, K8s environmental cluster on Triton

Kubernetes environmental clusters in HA mode run their etcd and Kubernetes orchestration services (apiserver, scheduler, controller-manager, ...) on dedicated three-node clusters. Once the global cluster manager is up and running, from the same directory, run triton-kubernetes.sh -e and answer the prompted questions to start provisioning the first environment.

Default values will be shown in parentheses and if no input is provided, defaults will be used.

$ ./triton-kubernetes.sh -e
Using triton-kubernetes/bin/terraform ...

From clouds below:
1. Triton
2. AWS
3. Azure
4. GCP
Which cloud do you want to run your environment on: (1)

We want the first environmental cluster on Triton, so keep the default and press Enter.

Your Triton account login name: <username> (example fayazg)
The Triton CloudAPI endpoint URL: <region name> (example https://us-east-1.api.joyent.com)
Your Triton account key id: <key id> (example 2c:53:bc:63:97:9e:79:3f:91:35:5e:f4:c8:23:88:37)

This environment will be running on Triton Cloud so the first few questions prompted will be account related. This is the same information provided by running triton profile get.

Name your environment: (triton-test) devcluster
Do you want this environment to run in HA mode? (yes | no) yes

Provide an alphanumeric name for this environment and start it in HA mode.

Number of compute nodes for devcluster environment: (3) 5

Provide the number of worker nodes to create for this environment and press Enter.

Which Triton networks should be used for this environment: (Joyent-SDC-Public)

Leave the default value and press Enter.

Which Triton package should be used for devcluster environment etcd nodes: (k4-highcpu-kvm-1.75G) k4-highcpu-kvm-3.75G
Which Triton package should be used for devcluster environment orchestration nodes running apiserver/scheduler/controllermanager/...: (k4-highcpu-kvm-1.75G) k4-highcpu-kvm-3.75G
Which Triton package should be used for devcluster environment compute nodes: (k4-highcpu-kvm-1.75G) k4-highcpu-kvm-3.75G

Since this environment is going to run in HA mode, there will be three different types of VMs.

  • etcd nodes: dedicated nodes that will run an etcd cluster
  • orchestration nodes: dedicated nodes that will run Kubernetes services like the apiserver, scheduler, controller-manager ...
  • compute nodes: the Kubernetes worker nodes where the workloads (deployments) will run

Triton Kubernetes will prompt for package names to use for these nodes. Here we are going to use the k4-highcpu-kvm-3.75G package for all nodes.

docker-engine install script: (https://releases.rancher.com/install-docker/1.12.sh)
############################################################
Environment devcluster will be created on Triton.
devcluster will be running in HA configuration ...
6 dedicated hosts will be created ...
devcluster-etcd-[123]          k4-highcpu-kvm-3.75G
devcluster-orchestration-[123] k4-highcpu-kvm-3.75G
5 compute nodes will be created for this environment ...
devcluster-compute-#           k4-highcpu-kvm-3.75G

Do you want to start the setup? (yes | no) yes

The last prompt before verification of inputs is the docker-engine installation script that should be used. Leave the default here and press Enter.

Apply complete! Resources: 15 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Outputs:

masters = [
    <cluster manager node1 ip>,
    <cluster manager node2 ip>
]

Environment devcluster has been started.
This is an HA setup of Kubernetes cluster so there are 3 dedicated etcd and 3 orchestration nodes.
Cluster Manager URL:
http://<cluster manager node1 ip>:8080/settings/env
Kubernetes Hosts URL:
http://<cluster manager node1 ip>:8080/env/1a7/infra/hosts?mode=dot
Kubernetes Health:
http://<cluster manager node1 ip>:8080/env/1a7/apps/stacks?which=cattle

NOTE: Nodes might take a few minutes to connect and come up.

To start another environment, run:
./triton-kubernetes.sh -e

Once verification is finished, terraform will start provisioning and configuring a 5 node HA Kubernetes environment. Next, we are going to create a non-HA Kubernetes environment on GCP.

Create a non-HA, 3 node, K8s environment on GCP

Non-HA Kubernetes environments run their etcd and Kubernetes orchestration services (apiserver, scheduler, controller-manager, ...) on the compute nodes and don't provision dedicated hosts. Once the global cluster manager is up and running, from the same directory, run triton-kubernetes.sh -e and answer the prompted questions to start provisioning another Kubernetes environment.

Default values will be shown in parentheses and if no input is provided, defaults will be used.

$ ./triton-kubernetes.sh -e
Using /Users/fayaz.ghiasy/work/triton-kubernetes/bin/terraform ...

From clouds below:
1. Triton
2. AWS
3. Azure
4. GCP
Which cloud do you want to run your environment on: (1) 4

This environment will be started on GCP, so enter 4 and press Enter.

Path to GCP credentials file: /tmp/test-project-155122-credentials.json
GCP Project ID: test-project-155122

This environment will be running on GCP, so the first two questions prompted are account/project related. The GCP credentials file should be the absolute path to the JSON authentication credentials file for your GCP account. Also provide the project ID where the machines will be provisioned and press Enter.

Name your environment: (gcp-test)
Do you want this environment to run in HA mode? (yes | no) no

Provide an alphanumeric name for this environment and start it in non-HA mode.

Number of compute nodes for gcp-test environment: (3)

Provide the number of worker nodes to create for this kubernetes environment and press Enter.

Compute Region: (us-west1)
Instance Zone: (us-west1-a)

For this environment we can use the default Compute Region and Instance Zone, and press Enter.

What size hosts should be used for gcp-test environment compute nodes: (n1-standard-1)

We will use the default machine type here, press Enter.

docker-engine install script: (https://releases.rancher.com/install-docker/1.12.sh)
############################################################
Environment gcp-test will be created on GCP.
gcp-test will be running in non-HA configuration ...
3 compute nodes will be created for this environment ...
gcp-test-compute-# n1-standard-1

Do you want to start the setup? (yes | no) yes

The last prompt before verification of inputs is the docker-engine installation script that should be used. Leave the default here and press Enter.

Once verification is finished, terraform will start provisioning and configuring a 3 node non-HA Kubernetes environmental cluster. In this environment, the etcd and Kubernetes components all run on the compute (worker) nodes and share resources with deployments.

Deploying your first multi-cloud K8s application.

Now we are ready to deploy our first application on our multi-cloud Kubernetes environment using either the Kubernetes CLI kubectl or the Kubernetes Dashboard.

In this section we will walk through the deployment of a "Ghost blog" app using the Kubernetes Dashboard and the example Kubernetes "Guestbook" app using kubectl.

Deploy an app using the Kubernetes Dashboard (Web UI)

The Kubernetes Dashboard can be used to get an overview of applications running on your cluster, as well as to create or modify individual Kubernetes resources. The Kubernetes Dashboard also provides information on the state of Kubernetes resources in your cluster.

Now, let's deploy Ghost using the Kubernetes Dashboard.

First, get the URL for the Kubernetes Dashboard by going to the Kubernetes Hosts URL provided at the end of the devcluster setup, and clicking Dashboard under the Kubernetes menu at the top. Note that every environment has a unique Dashboard URL printed in the console output.

On the next page, click the Kubernetes UI button to open the dashboard. Once you are in the Kubernetes Dashboard, you should see a CREATE button at the top. Click it to begin the process of deploying an app on your Kubernetes environment.

Next, enter the details requested, using the inputs provided in the below image, and then click Deploy.

That's it! Kubernetes should now be starting up your Ghost app and you should see something that looks like this:

Your app is configured to be exposed externally on port 8080, so you should see the app URL on the Services screen. Once the deployment is complete and the pods are up, the app will be available.
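The same Ghost deployment can also be described declaratively. The manifest below is a sketch of roughly equivalent resources, assuming the official ghost image and its default port 2368; the names and image tag are illustrative, not the exact Dashboard inputs from the walkthrough:

```
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ghost          # illustrative name
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ghost
    spec:
      containers:
      - name: ghost
        image: ghost   # official image; tag is an assumption
        ports:
        - containerPort: 2368   # Ghost's default port
---
apiVersion: v1
kind: Service
metadata:
  name: ghost
spec:
  type: LoadBalancer
  ports:
  - port: 8080         # external port, matching the walkthrough
    targetPort: 2368
  selector:
    app: ghost
```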

Deploy an app using the Kubernetes CLI

Now, let's deploy the example Kubernetes Guestbook app using the Kubernetes CLI.

First, get the URL for the Kubernetes CLI config page (which generates a kubectl config file) that Triton Kubernetes printed at the end of the environment setup. Note that each K8s environment has its own unique kubectl config URL printed in the console output.

Go to the Kubernetes CLI config URL and click on Generate Config:

On the next screen, click Copy to Clipboard and paste the content into the ~/.kube/config file:
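The generated content follows the standard kubeconfig layout. As a rough sketch of what to expect in the file (the cluster name, server address, and token are placeholders, not values from this walkthrough):

```
apiVersion: v1
kind: Config
clusters:
- name: devcluster                 # placeholder cluster name
  cluster:
    server: https://<cluster manager ip>/<generated path>   # placeholder address
contexts:
- name: devcluster
  context:
    cluster: devcluster
    user: devcluster
current-context: devcluster
users:
- name: devcluster
  user:
    token: <bearer token>          # placeholder credential
```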

Now you should be able to use the kubectl command to deploy your app.

The app we will deploy is called Guestbook. Clone the repository to your local machine and navigate to the app's directory in your terminal. We'll make one minor change to the configuration file so that we can interact with the app using a public IP address for this demo:

git clone https://github.com/kubernetes/examples.git
cd examples/guestbook
vi all-in-one/guestbook-all-in-one.yaml

In that configuration file (all-in-one/guestbook-all-in-one.yaml), uncomment the frontend service type line, # type: LoadBalancer, so that the frontend service runs behind a load balancer:

spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  type: LoadBalancer
  ports:
  # the port that this service should serve on
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Be sure to save the file.

Now you should be able to use kubectl to deploy the app and get the external URL for the frontend service, which can be used to access the app once the pods are up:

# Deploy guestbook app
$ kubectl create -f all-in-one/guestbook-all-in-one.yaml
service "redis-master" created
deployment "redis-master" created
service "redis-slave" created
deployment "redis-slave" created
service "frontend" created
deployment "frontend" created

# Make sure that the pods are up and running
$ kubectl get deployments
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
frontend       3         3         3            3           2m
redis-master   1         1         1            1           2m
redis-slave    2         2         2            2           2m

$ kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
frontend-88237173-b23b9        1/1       Running   0          2m
frontend-88237173-cq5jz        1/1       Running   0          2m
frontend-88237173-sbkrb        1/1       Running   0          2m
redis-master-343230949-3ll61   1/1       Running   0          2m
redis-slave-132015689-p54lv    1/1       Running   0          2m
redis-slave-132015689-t6z7z    1/1       Running   0          2m

# Get the external service IP/URL
$ kubectl get services
NAME           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
frontend       10.43.80.176    165.225.175.75   80:30896/TCP   14s
kubernetes     10.43.0.1       <none>           443/TCP        7m
redis-master   10.43.0.176     <none>           6379/TCP       15s
redis-slave    10.43.141.195   <none>           6379/TCP       15s

We can see above that, for this demo, all pods are running and the only service exposed externally is the frontend service, at 165.225.175.75:80.

The deployment status of all the pods and services can also be viewed using the Kubernetes Dashboard; to check, go to the Web UI URL.

For more information on Kubernetes itself, dig into the official Kubernetes user guide or the kubectl cheatsheet.
