Nginx + Keepalived + Tomcat + Memcached: Dual-VIP Load Balancing with Session Persistence

IP information:

Name         IP                  Software
-------------------------------------------------
VIP1         192.168.200.254
VIP2         192.168.200.253
nginx-1      192.168.200.101     nginx, keepalived
nginx-2      192.168.200.102     nginx, keepalived
tomcat-1     192.168.200.103     tomcat, memcached
tomcat-2     192.168.200.104     tomcat, memcached

Disable the firewall and SELinux on all machines:
[root@localhost ~]# service iptables stop
[root@localhost ~]# setenforce 0

Install and configure the JDK and Tomcat servers:
=================================================================================================================
Install and configure the JDK:
Unpack jdk-7u65-linux-x64.tar.gz
[root@tomcat-1 ~]# rm -rf /usr/bin/java    #remove the stock java symlink first
[root@tomcat-1 ~]# tar xf jdk-7u65-linux-x64.tar.gz

Unpacking creates a jdk1.7.0_65 directory; move it to /usr/local and rename it java
[root@tomcat-1 ~]# mv jdk1.7.0_65/ /usr/local/java

Create a java.sh script under /etc/profile.d/
[root@tomcat-1 ~]# vim /etc/profile.d/java.sh
export JAVA_HOME=/usr/local/java    #set the Java root directory
export PATH=$PATH:$JAVA_HOME/bin    #append the bin subdirectory under the Java root to PATH

Source the profile so the java.sh script takes effect
[root@tomcat-1 ~]# source /etc/profile

Run java -version or javac -version to check the Java version
[root@tomcat-1 ~]# java -version
java version "1.7.0_65"
OpenJDK Runtime Environment (rhel-2.5.1.2.el6_5-x86_64 u65-b17)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)
[root@tomcat-1 ~]# javac -version
javac 1.7.0_65

Install and configure Tomcat:
Unpack the tarball
[root@tomcat-1 ~]# tar xf apache-tomcat-7.0.54.tar.gz

Unpacking creates an apache-tomcat-7.0.54 directory; move it to /usr/local and rename it tomcat7
[root@tomcat-1 ~]# mv apache-tomcat-7.0.54 /usr/local/tomcat7

Start Tomcat
[root@tomcat-1 ~]# /usr/local/tomcat7/bin/startup.sh
Using CATALINA_BASE:   /usr/local/tomcat7
Using CATALINA_HOME:   /usr/local/tomcat7
Using CATALINA_TMPDIR: /usr/local/tomcat7/temp
Using JRE_HOME:        /usr/local/java
Using CLASSPATH:       /usr/local/tomcat7/bin/bootstrap.jar:/usr/local/tomcat7/bin/tomcat-juli.jar
Tomcat started.

Tomcat listens on port 8080 by default
[root@tomcat-1 ~]# netstat -anpt |grep :8080
tcp        0      0 :::8080                     :::*                        LISTEN      55349/java

Stop Tomcat
[root@tomcat-1 ~]# /usr/local/tomcat7/bin/shutdown.sh

Test in a browser: http://192.168.200.103:8080

Create the Java web site:
First create a /webapp directory in the filesystem root to hold the site files
[root@tomcat-1 ~]# mkdir /webapp

Create an index.jsp test page under /webapp
[root@tomcat-1 ~]# vim /webapp/index.jsp
Server Info:   
SessionID:<%=session.getId()%>
<br>
SessionIP:<%=request.getServerName()%>  
<br>
SessionPort:<%=request.getServerPort()%>
<br>
<%
  out.println("server one");
%>

Edit Tomcat's server.xml

Define a virtual host whose document path points at the /webapp directory just created, by adding a Context element inside the Host element
[root@tomcat-1 ~]# cp /usr/local/tomcat7/conf/server.xml{,.bak}
[root@tomcat-1 ~]# vim /usr/local/tomcat7/conf/server.xml

<Host name="localhost"  appBase="webapps"
      unpackWARs="true" autoDeploy="true">
        <Context docBase="/webapp" path="" reloadable="false">
        </Context>

docBase="/webapp"         #document base directory of the web application
path=""                   #URL context path; the empty string makes this the default (root) application
reloadable="false"        #whether Tomcat monitors class files for changes and reloads automatically

Stop Tomcat, then start it again
[root@tomcat-1 ~]# /usr/local/tomcat7/bin/shutdown.sh
Using CATALINA_BASE:   /usr/local/tomcat7
Using CATALINA_HOME:   /usr/local/tomcat7
Using CATALINA_TMPDIR: /usr/local/tomcat7/temp
Using JRE_HOME:        /usr/local/java
Using CLASSPATH:       /usr/local/tomcat7/bin/bootstrap.jar:/usr/local/tomcat7/bin/tomcat-juli.jar

[root@tomcat-1 ~]# /usr/local/tomcat7/bin/startup.sh
Using CATALINA_BASE:   /usr/local/tomcat7
Using CATALINA_HOME:   /usr/local/tomcat7
Using CATALINA_TMPDIR: /usr/local/tomcat7/temp
Using JRE_HOME:        /usr/local/java
Using CLASSPATH:       /usr/local/tomcat7/bin/bootstrap.jar:/usr/local/tomcat7/bin/tomcat-juli.jar
Tomcat started.

Test in a browser: http://192.168.200.103:8080

=================================================================================================================

Tomcat 2 is configured almost identically to Tomcat 1:
Install the JDK and configure the Java environment, keeping the same version as on Tomcat 1
Install Tomcat, keeping the same version as on Tomcat 1

[root@tomcat-2 ~]# vim /webapp/index.jsp
Server Info:   
SessionID:<%=session.getId()%>
<br>
SessionIP:<%=request.getServerName()%>  
<br>
SessionPort:<%=request.getServerPort()%>
<br>
<%
  out.println("server two");
%>

[root@tomcat-2 ~]# cp /usr/local/tomcat7/conf/server.xml{,.bak}
[root@tomcat-2 ~]# vim /usr/local/tomcat7/conf/server.xml

<Host name="localhost"  appBase="webapps"
      unpackWARs="true" autoDeploy="true">
      <Context docBase="/webapp" path="" reloadable="false">
      </Context>

[root@tomcat-2 ~]# /usr/local/tomcat7/bin/shutdown.sh
[root@tomcat-2 ~]# /usr/local/tomcat7/bin/startup.sh

Test in a browser: http://192.168.200.104:8080

=================================================================================================================

Notes on the Tomcat layout
/usr/local/tomcat7         #installation root
bin                        #scripts for starting and stopping Tomcat on Windows and Linux
conf                       #global configuration files, chiefly server.xml and web.xml
lib                        #library files (JARs) Tomcat needs at runtime
logs                       #Tomcat log files
webapps                    #Tomcat's main web deployment directory (including sample applications)
work                       #class files produced by compiling JSPs

[root@tomcat-1 ~]# ls /usr/local/tomcat7/conf/
catalina.policy          #security policy (permissions) configuration
catalina.properties      #Tomcat property configuration
context.xml              #default Context configuration
logging.properties       #logging configuration
server.xml               #main configuration file
tomcat-users.xml         #users for the manager-gui web console (generated at install time; edit this file to enable access)
web.xml                  #servlet, servlet-mapping, filter, MIME-type and related configuration

server.xml is the main configuration file; it is where you change the listening port, set the site root, define virtual hosts, enable HTTPS, and so on.

Structure of server.xml:
<Server>
    <Service>
        <Connector />
        <Engine>
            <Host>
                <Context> </Context>
            </Host>
        </Engine>
    </Service>
</Server>

Content inside <!-- --> is a comment.

Server
The Server element represents the whole Catalina servlet container.

Service
A Service is a grouping: one or more Connectors plus a single Engine, which handles all client requests received by those Connectors.

Connector
A Connector listens for client requests on a given port, hands each request to the Engine for processing, receives the Engine's response, and returns it to the client.

Tomcat typically ships with two Connectors: the Coyote HTTP/1.1 Connector listens on port 8080 for HTTP requests coming directly from browsers, while the Coyote AJP Connector listens on port 8009 for servlet/JSP proxy requests from other web servers such as Apache.

Engine
An Engine can contain multiple virtual hosts (Host elements), each with its own domain name.
When the Engine receives a request, it matches the request to one of its Hosts and hands the request to that Host for processing.
The Engine has a default virtual host; requests that match no Host are handed to the default one.

Host
A Host represents a virtual host, matched against a network domain name.
Each virtual host can have one or more web applications deployed under it; each web app corresponds to a Context with its own context path.

When a Host receives a request, it matches the request to one of its Contexts and hands the request to that Context for processing. Matching is longest-prefix, so a Context with path="" becomes the Host's default Context.

Context
A Context corresponds to one web application, which consists of one or more servlets.
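The longest-prefix matching described above can be sketched in shell. The context paths here ("" and /shop) are hypothetical examples, not taken from this document's configuration:

```shell
#!/bin/sh
# Sketch of Host -> Context matching by longest path prefix.
# Arguments: request URI, then the candidate context paths.
match_context() {
    uri=$1; shift
    best=""
    for path in "$@"; do
        case $uri in
            "$path"/*|"$path")
                # keep the longest matching prefix seen so far
                if [ ${#path} -ge ${#best} ]; then best=$path; fi ;;
        esac
    done
    # an empty result means the path="" (default) Context handles it
    echo "${best:-(default)}"
}

match_context /shop/cart "" /shop    # -> /shop
match_context /about     "" /shop    # -> (default)
```

The empty-string path matches every URI, which is exactly why a path="" Context acts as the Host's fallback.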

=================================================================================================================

Configure the nginx-1 server:

[root@nginx-1 ~]# yum -y install gcc pcre-devel zlib-devel openssl-devel
[root@nginx-1 ~]# useradd -M -s /sbin/nologin nginx
[root@nginx-1 ~]# tar xf nginx-1.6.2.tar.gz
[root@nginx-1 ~]# cd nginx-1.6.2
[root@nginx-1 nginx-1.6.2]# ./configure --prefix=/usr/local/nginx --user=nginx --group=nginx --with-file-aio --with-http_stub_status_module --with-http_ssl_module --with-http_flv_module --with-http_gzip_static_module && make && make install

--prefix=/usr/local/nginx              #installation directory
--user=nginx --group=nginx             #user and group to run as
--with-file-aio                        #enable asynchronous file I/O (AIO) support
--with-http_stub_status_module         #enable the status-statistics module
--with-http_ssl_module                 #enable the SSL module
--with-http_flv_module                 #enable the FLV module (time-based offset seeking within FLV files)
--with-http_gzip_static_module         #enable serving precompressed (gzip) static files

Configure nginx.conf
[root@nginx-1 nginx-1.6.2]# cp /usr/local/nginx/conf/nginx.conf{,.bak}
[root@nginx-1 nginx-1.6.2]# vim /usr/local/nginx/conf/nginx.conf
=================================================================================================================
user  nginx;
worker_processes  1;
error_log  logs/error.log;
pid        logs/nginx.pid;

events {
    use epoll;
    worker_connections  10240;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  logs/access.log  main;

    sendfile        on;
    keepalive_timeout  65;

    upstream tomcat_server {
        server 192.168.200.103:8080 weight=1;
        server 192.168.200.104:8080 weight=1;
    }

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
            proxy_pass http://tomcat_server;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
=================================================================================================================
[root@nginx-1 nginx-1.6.2]# /usr/local/nginx/sbin/nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful

[root@nginx-1 nginx-1.6.2]# /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
[root@nginx-1 nginx-1.6.2]# netstat -anpt |grep :80
tcp        0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN      7184/nginx

[root@nginx-1 nginx-1.6.2]# ps aux |grep nginx
root      7184  0.0  0.2  45000  1052 ?        Ss   01:18   0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nginx     7185  0.0  1.1  49256  5452 ?        S    01:18   0:00 nginx: worker process
root      7193  0.0  0.1 103256   848 pts/1    S+   01:18   0:00 grep nginx

Client test:
Open a browser to http://192.168.200.101    #refresh repeatedly; because the weights are equal, the page alternates between the two backends
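The alternation comes from nginx's round-robin balancing; with weight=1 on both backends it reduces to taking them in turn. A minimal illustration of that selection rule (this is a sketch, not nginx's actual code):

```shell
#!/bin/sh
# Sketch of equal-weight round-robin: requests simply alternate
# across the two backends from the upstream block above.
pick_backend() {
    # $1 = zero-based request number
    case $(( $1 % 2 )) in
        0) echo 192.168.200.103:8080 ;;
        1) echo 192.168.200.104:8080 ;;
    esac
}

for i in 0 1 2 3; do
    echo "request $i -> $(pick_backend $i)"
done
# request 0 -> 192.168.200.103:8080
# request 1 -> 192.168.200.104:8080
# ...
```

Unequal weights would skew this rotation toward the heavier backend instead of strictly alternating.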

Configure the nginx-2 server:
Its configuration is identical to nginx-1's

Client test:
Open a browser to http://192.168.200.102    #refresh repeatedly; because the weights are equal, the page alternates between the two backends
=================================================================================================================

How it works: the two Nginx machines run two VRRP instances under Keepalived, so each machine's VIP is backed up by the other. If either Nginx machine suffers a hardware failure, Keepalived automatically moves its VIP address to the other machine, and client access is unaffected.
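Within each VRRP instance, the node advertising the highest priority becomes MASTER and holds that instance's VIP. A sketch of the election rule, using the priorities from the configurations below (real VRRP also breaks ties by IP address; that is omitted here):

```shell
#!/bin/sh
# Sketch of VRRP master election: per instance, the router with the
# highest advertised priority becomes MASTER and holds the VIP.
elect_master() {
    best_name="" ; best_prio=-1
    for pair in "$@"; do                 # each argument is "name:priority"
        name=${pair%%:*} ; prio=${pair##*:}
        if [ "$prio" -gt "$best_prio" ]; then
            best_name=$name ; best_prio=$prio
        fi
    done
    echo "$best_name"
}

# Instance VI_1: nginx-1 advertises 50, nginx-2 advertises 100
elect_master nginx-1:50 nginx-2:100    # -> nginx-2
```

Crossing the priorities between the two instances (50/100 on one box, 100/50 on the other) is what makes each machine master of one VIP and backup for the other.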

Compile and install keepalived on both nginx-1 and nginx-2:
[root@nginx-1 ~]# yum -y install kernel-devel openssl-devel

[root@nginx-1 ~]# tar xf keepalived-1.2.13.tar.gz
[root@nginx-1 ~]# cd keepalived-1.2.13
[root@nginx-1 keepalived-1.2.13]# ./configure --prefix=/ --with-kernel-dir=/usr/src/kernels/2.6.32-504.el6.x86_64/ && make && make install
[root@nginx-1 ~]# chkconfig --add keepalived
[root@nginx-1 ~]# chkconfig keepalived on
[root@nginx-1 ~]# chkconfig --list keepalived

Edit the keepalived configuration file
[root@nginx-1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
        crushlinux@163.com
   }
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123
    }
    virtual_ipaddress {
        192.168.200.254
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123
    }
    virtual_ipaddress {
        192.168.200.253
    }
}
=================================================================================================================
[root@nginx-2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
        crushlinux@163.com
   }
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123
    }
    virtual_ipaddress {
        192.168.200.254
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123
    }
    virtual_ipaddress {
        192.168.200.253
    }
}

[root@nginx-1 ~]# service keepalived start
[root@nginx-1 ~]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:2d:3d:97 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.101/24 brd 192.168.200.255 scope global eth0
    inet 192.168.200.253/32 scope global eth0
    inet6 fe80::20c:29ff:fe2d:3d97/64 scope link
       valid_lft forever preferred_lft forever

[root@nginx-2 ~]# service keepalived start
[root@nginx-2 ~]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:6f:7d:87 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.102/24 brd 192.168.200.255 scope global eth0
    inet 192.168.200.254/32 scope global eth0
    inet6 fe80::20c:29ff:fe6f:7d87/64 scope link
       valid_lft forever preferred_lft forever
       
Client test:
Open a browser to http://192.168.200.253    #refresh repeatedly; because the weights are equal, the page alternates between the two backends
Client test:
Open a browser to http://192.168.200.254    #refresh repeatedly; because the weights are equal, the page alternates between the two backends

Run the following Nginx process-watchdog script on both nginx-1 and nginx-2
[root@nginx-1 ~]# cat nginx_pidcheck
#!/bin/bash
# Watchdog: restart nginx if it dies; if the restart fails,
# stop keepalived so the VIP fails over to the peer machine.
while :
do
        nginxpid=$(ps -C nginx --no-header | wc -l)
        if [ $nginxpid -eq 0 ]
        then
                /usr/local/nginx/sbin/nginx
                keeppid=$(ps -C keepalived --no-header | wc -l)
                if [ $keeppid -eq 0 ]
                then
                        /etc/init.d/keepalived start
                fi
                sleep 5
                nginxpid=$(ps -C nginx --no-header | wc -l)
                if [ $nginxpid -eq 0 ]
                then
                        /etc/init.d/keepalived stop
                fi
        fi
        sleep 5
done

[root@nginx-1 ~]# sh nginx_pidcheck &
[root@nginx-1 ~]# vim /etc/rc.local
sh nginx_pidcheck &

This script runs an endless loop on both Nginx machines. Every 5 seconds it uses ps -C to count nginx processes. If the count is 0, nginx has died and the script tries to start it; if the count is still 0 five seconds later, the restart has failed, so the script stops the local Keepalived service. The VIP address is then taken over by the standby machine, and from that point the standby's Nginx serves the whole site. This keeps the Nginx service highly available.
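The watchdog's decision rule can be isolated for testing. A sketch, with the process counts passed in as arguments instead of read live from `ps -C nginx` (the function name and return labels are illustrative, not part of the script above):

```shell
#!/bin/sh
# Sketch of the watchdog's decision rule.
watchdog_action() {
    # $1 = nginx process count; $2 = count 5s after a restart attempt
    if [ "$1" -ne 0 ]; then
        echo noop               # nginx alive: nothing to do
    elif [ "$2" -ne 0 ]; then
        echo restarted          # restart attempt brought nginx back
    else
        echo stop-keepalived    # restart failed: release the VIP to the peer
    fi
}

watchdog_action 1 1    # -> noop
watchdog_action 0 2    # -> restarted
watchdog_action 0 0    # -> stop-keepalived
```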

Test the script (note nginx comes back under a new PID):
[root@nginx-1 ~]# netstat -anpt |grep nginx
tcp        0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN      4321/nginx          
[root@nginx-1 ~]# killall -s QUIT nginx
[root@nginx-1 ~]# netstat -anpt |grep nginx
tcp        0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN      59418/nginx

VIP failover test:
[root@nginx-1 ~]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:2d:3d:97 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.101/24 brd 192.168.200.255 scope global eth0
    inet 192.168.200.253/32 scope global eth0
    inet6 fe80::20c:29ff:fe2d:3d97/64 scope link
       valid_lft forever preferred_lft forever
       
[root@nginx-2 ~]# service keepalived stop
Stopping keepalived:                                       [  OK  ]

[root@nginx-1 ~]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:2d:3d:97 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.101/24 brd 192.168.200.255 scope global eth0
    inet 192.168.200.253/32 scope global eth0
    inet 192.168.200.254/32 scope global eth0
    inet6 fe80::20c:29ff:fe2d:3d97/64 scope link
       valid_lft forever preferred_lft forever

Client test:
Open a browser to http://192.168.200.253    #refresh repeatedly; because the weights are equal, the page alternates between the two backends
Client test:
Open a browser to http://192.168.200.254    #refresh repeatedly; because the weights are equal, the page alternates between the two backends

=================================================================================================================
[root@tomcat-1 ~]# yum -y install gcc openssl-devel pcre-devel zlib-devel
[root@tomcat-1 ~]# tar xf libevent-2.0.15-stable.tar.gz
[root@tomcat-1 ~]# cd libevent-2.0.15-stable
[root@tomcat-1 libevent-2.0.15-stable]# ./configure --prefix=/usr/local/libevent && make && make install

[root@tomcat-1 ~]# tar xf memcached-1.4.5.tar.gz
[root@tomcat-1 ~]# cd memcached-1.4.5
[root@tomcat-1 memcached-1.4.5]# ./configure --prefix=/usr/local/memcached --with-libevent=/usr/local/libevent/ && make && make install

[root@tomcat-1 memcached-1.4.5]# ldconfig -v |grep libevent
    libevent_pthreads-2.0.so.5 -> libevent_pthreads.so
    libevent-2.0.so.5 -> libevent.so
    libevent_extra-2.0.so.5 -> libevent_extra.so
    libevent_core-2.0.so.5 -> libevent_core.so
    libevent_openssl-2.0.so.5 -> libevent_openssl.so
    libevent_extra-1.4.so.2 -> libevent_extra-1.4.so.2.1.3
    libevent_core-1.4.so.2 -> libevent_core-1.4.so.2.1.3
    libevent-1.4.so.2 -> libevent-1.4.so.2.1.3

[root@tomcat-1 memcached-1.4.5]# /usr/local/memcached/bin/memcached -u root -m 512M -n 10 -f 2 -d -vvv -c 512
/usr/local/memcached/bin/memcached: error while loading shared libraries: libevent-2.0.so.5: cannot open shared object file: No such file or directory

[root@localhost memcached-1.4.5]# vim /etc/ld.so.conf
include ld.so.conf.d/*.conf
/usr/local/libevent/lib/
[root@localhost memcached-1.4.5]# ldconfig
[root@localhost memcached-1.4.5]# /usr/local/memcached/bin/memcached -u root -m 512M -n 10 -f 2 -d -vvv -c 512

Options:
    -h      #show help
    -p      #port to listen on (default 11211)
    -l      #IP address to bind to
    -u      #user to run memcached as (required when starting as root)
    -m      #amount of memory to use for item storage, in MB (default 64)
    -c      #maximum number of simultaneous connections
    -vvv    #extremely verbose output
    -n      #minimum space allocated per item (chunk size), in bytes
    -f      #chunk size growth factor (default 1.25)
    -d      #run as a daemon
    
[root@tomcat-1 ~]# netstat  -antp| grep :11211    #(check that memcached is alive; it listens on port 11211)
tcp        0      0 0.0.0.0:11211               0.0.0.0:*                   LISTEN      71559/memcached     
tcp        0      0 :::11211                    :::*                        LISTEN      71559/memcached

Test that memcached can store and retrieve data
[root@tomcat-1 ~]# yum -y install telnet
[root@localhost ~]# telnet 192.168.200.103 11211
set username 0 0 8     
zhangsan
STORED
get username
VALUE username 0 8
zhangsan
END
quit
Connection closed by foreign host.
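The trailing number on the `set` line is the value's length in bytes ("zhangsan" is 8 bytes); if it does not match, the server replies CLIENT_ERROR. For ASCII values, `${#value}` in shell gives exactly that count. A sketch that builds the two protocol lines typed into telnet above (the function name is illustrative):

```shell
#!/bin/sh
# Build a memcached text-protocol command: set <key> <flags> <exptime> <bytes>
# followed by the value itself, each line terminated with CRLF.
build_set() {
    key=$1 ; value=$2
    printf 'set %s 0 0 %s\r\n' "$key" "${#value}"
    printf '%s\r\n' "$value"
}

build_set username zhangsan
# set username 0 0 8
# zhangsan
```

In practice you could pipe this output to `nc 192.168.200.103 11211` instead of typing into telnet.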

Finally, connect Tomcat-1 and Tomcat-2 to Memcached via memcached-session-manager (msm)

Copy the *.jar files from the session package into /usr/local/tomcat7/lib/
[root@tomcat-1 ~]# cp session/* /usr/local/tomcat7/lib/

Edit the Tomcat configuration to point at the memcached servers.
The tomcat-1 and tomcat-2 configuration files are identical; write both exactly as in the example below.

[root@tomcat-1 ~]# vim /usr/local/tomcat7/conf/context.xml
<Context>
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="memA:192.168.200.103:11211 memB:192.168.200.104:11211"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
/>
</Context>

[root@tomcat-2 ~]# vim /usr/local/tomcat7/conf/context.xml
<Context>
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="memA:192.168.200.103:11211 memB:192.168.200.104:11211"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
/>
</Context>
[root@tomcat-1 ~]# /usr/local/tomcat7/bin/shutdown.sh
[root@tomcat-1 ~]# /usr/local/tomcat7/bin/startup.sh
If this works, Tomcat holds established connections to the Memcached port; compare the netstat output before and after the restart.
Output from Tomcat-1 and Tomcat-2 is shown below
[root@tomcat-1 ~]# netstat -antp|grep java
tcp        0      0 ::ffff:127.0.0.1:8005       :::*                        LISTEN      62496/java          
tcp        0      0 :::8009                     :::*                        LISTEN      62496/java          
tcp        0      0 :::8080                     :::*                        LISTEN      62496/java          
tcp        0      0 ::ffff:192.168.200.10:28232 ::ffff:192.168.200.10:11211 ESTABLISHED 62496/java          
tcp        0      0 ::ffff:192.168.200.10:28231 ::ffff:192.168.200.10:11211 ESTABLISHED 62496/java          
tcp        0      0 ::ffff:192.168.200.10:28230 ::ffff:192.168.200.10:11211 ESTABLISHED 62496/java          
tcp        0      0 ::ffff:192.168.200.10:28228 ::ffff:192.168.200.10:11211 ESTABLISHED 62496/java          
tcp        0      0 ::ffff:192.168.200.10:28229 ::ffff:192.168.200.10:11211 ESTABLISHED 62496/java          
[root@tomcat-1 ~]# netstat -antp|grep memcached
tcp        0      0 0.0.0.0:11211               0.0.0.0:*                   LISTEN      62402/memcached     
tcp        0      0 192.168.200.103:11211       192.168.200.103:28230       ESTABLISHED 62402/memcached     
tcp       45      0 192.168.200.103:11211       192.168.200.103:28228       ESTABLISHED 62402/memcached     
tcp        0      0 192.168.200.103:11211       192.168.200.103:28232       ESTABLISHED 62402/memcached     
tcp        0      0 192.168.200.103:11211       192.168.200.103:28229       ESTABLISHED 62402/memcached     
tcp        0      0 192.168.200.103:11211       192.168.200.103:28231       ESTABLISHED 62402/memcached     
tcp        0      0 :::11211                    :::*                        LISTEN      62402/memcached
