Running a Coherence Cache Cluster on Kubernetes
Oracle has published an official guide to running Coherence under Docker, which is worth reading:
https://github.com/oracle/docker-images/tree/master/OracleCoherence
For anyone who knows Coherence well, though, a plain setup is only a starting point: customer environments always carry their own features and custom configuration. This article therefore looks at how to run an already customized Coherence architecture on top of Kubernetes.
Background architecture
Without further ado, here is a typical customer Coherence architecture.
Architecture notes:
- Coherence nodes play different roles: storage nodes, proxy nodes, management nodes, and clients.
- Each role needs its own cache configuration file.
- The overall design is a Coherence*Extend setup: clients connect into the cluster over long-lived TCP connections.
- The storage nodes hold the data; the proxy nodes, management nodes, and clients are all storage-disabled. The proxy nodes are mainly responsible for dispatching client requests into the cluster.
The architecture on Kubernetes
The differences are:
- The WebLogic Server clients are deployed as Pods managed by a Replication Controller.
- The back-end CacheServer-role nodes and the proxy nodes are deployed as two separate Replication Controllers.
- Each cacheserver Pod is bound to its own IP via flanneld; the proxy Pods likewise get distinct IPs but all listen on the same port, 9099.
The problem now is that each proxy server's IP is not fixed, so the WebLogic side would have to be pointed at a set of back-end IPs that keeps changing. The initial idea is to put a Service in front of the proxy Pods and connect through the service name.
So the first thing to configure is DNS, so that every WebLogic Pod can resolve the service name and reach a concrete proxy server address.
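Once cluster DNS is in place, a client only ever needs the Service name. As a minimal sketch (assuming the default cluster domain cluster.local and the default namespace; the service name coherenceproxysvc is the one created later in this article), this is the fully qualified name kube-dns actually answers for:

```shell
# Build the in-cluster DNS name kube-dns publishes for a Service.
# Assumes the default cluster domain "cluster.local".
svc_fqdn() {
  svc="$1"
  ns="${2:-default}"
  echo "${svc}.${ns}.svc.cluster.local"
}

svc_fqdn coherenceproxysvc   # coherenceproxysvc.default.svc.cluster.local
```

A WebLogic Pod can usually resolve the short name coherenceproxysvc directly, because the Pod's resolv.conf search path appends the namespace and cluster-domain suffixes shown above.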
Building the Coherence Proxy image
The cacheserver and proxyserver could in fact share a single image, switching configuration files via startup parameters, but here we take the simplest route and build a separate image per role.
In the /home/weblogic/docker/OracleCoherence/dockerfiles/12.2.1.0.0 directory, add a file proxy-cache-config.xml to serve as the proxy server's startup configuration.
[root@k8s-node- 12.2.1.0.]# cat proxy-cache-config.xml
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>distributed-scheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!-- Distributed caching scheme. -->
    <distributed-scheme>
      <scheme-name>distributed-scheme</scheme-name>
      <service-name>DistributedCache</service-name>
      <thread-count></thread-count>
      <backup-count></backup-count>
      <backing-map-scheme>
        <local-scheme>
          <scheme-name>LocalSizeLimited</scheme-name>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
      <local-storage>false</local-storage>
    </distributed-scheme>

    <local-scheme>
      <scheme-name>LocalSizeLimited</scheme-name>
      <eviction-policy>LRU</eviction-policy>
      <high-units></high-units>
      <unit-calculator>BINARY</unit-calculator>
      <unit-factor></unit-factor>
      <expiry-delay>48h</expiry-delay>
    </local-scheme>

    <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <thread-count></thread-count>
      <acceptor-config>
        <tcp-acceptor>
          <local-address>
            <address>0.0.0.0</address>
            <port></port>
          </local-address>
        </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
    </proxy-scheme>
  </caching-schemes>
</cache-config>
Note the address element here: 0.0.0.0 means the proxy will bind to whatever IP the Pod is assigned.
Modify the Dockerfile; it is cleanest to create a new Dockerfile.proxy.
[root@k8s-node- 12.2.1.0.]# cat Dockerfile.proxy
# LICENSE CDDL 1.0 + GPL 2.0
#
# ORACLE DOCKERFILES PROJECT
# --------------------------
# This is the Dockerfile for Coherence 12.2. Standalone Distribution
#
# REQUIRED BASE IMAGE TO BUILD THIS IMAGE
# ---------------------------------------
# This Dockerfile requires the base image oracle/serverjre:
# (see https://github.com/oracle/docker-images/tree/master/OracleJava)
#
# REQUIRED FILES TO BUILD THIS IMAGE
# ----------------------------------
# () fmw_12.2.1..0_coherence_Disk1_1of1.zip
#
# Download the Standalone installer from http://www.oracle.com/technetwork/middleware/coherence/downloads/index.html
#
# HOW TO BUILD THIS IMAGE
# -----------------------
# Put all downloaded files in the same directory as this Dockerfile
# Run:
#      $ sh buildDockerImage.sh -s
#
# or if your Docker client requires root access you can run:
#      $ sudo sh buildDockerImage.sh -s
#

# Pull base image
# ---------------
FROM oracle/serverjre:

# Maintainer
# ----------
MAINTAINER Jonathan Knight

# Environment variables required for this build (do NOT change)
ENV FMW_PKG=fmw_12.2.1..0_coherence_Disk1_1of1.zip \
    FMW_JAR=fmw_12.2.1..0_coherence.jar \
    ORACLE_HOME=/u01/oracle/oracle_home \
    PATH=$PATH:/usr/java/default/bin:/u01/oracle/oracle_home/oracle_common/common/bin \
    CONFIG_JVM_ARGS="-Djava.security.egd=file:/dev/./urandom"

ENV COHERENCE_HOME=$ORACLE_HOME/coherence

# Copy files required to build this image
COPY $FMW_PKG install.file oraInst.loc /u01/
COPY start.sh /start.sh
COPY proxy-cache-config.xml $COHERENCE_HOME/conf/proxy-cache-config.xml

RUN useradd -b /u01 -m -s /bin/bash oracle && \
    echo oracle:oracle | chpasswd && \
    chmod +x /start.sh && \
    chmod a+xr /u01 && \
    chown -R oracle:oracle /u01

USER oracle

# Install and configure Oracle JDK
# Setup required packages (unzip), filesystem, and oracle user
# ------------------------------------------------------------
RUN cd /u01 && $JAVA_HOME/bin/jar xf /u01/$FMW_PKG && cd - && \
    $JAVA_HOME/bin/java -jar /u01/$FMW_JAR -silent -responseFile /u01/install.file -invPtrLoc /u01/oraInst.loc -jreLoc $JAVA_HOME -ignoreSysPrereqs -force -novalidation ORACLE_HOME=$ORACLE_HOME && \
    rm /u01/$FMW_JAR /u01/$FMW_PKG /u01/oraInst.loc /u01/install.file

ENTRYPOINT ["/start.sh"]
The only difference from the stock Dockerfile is that the configuration file created above is copied into the container.
Then modify start.sh.
[root@k8s-node- 12.2.1.0.]# cat start.sh
#!/usr/bin/env sh

trap "echo TRAPed signal" HUP INT QUIT KILL TERM

main() {
    COMMAND=server
    SCRIPT_NAME=$(basename "${0}")
    MAIN_CLASS="com.tangosol.net.DefaultCacheServer"

    case "${1}" in
        server)    COMMAND=${1}; shift ;;
        console)   COMMAND=${1}; shift ;;
        queryplus) COMMAND=queryPlus; shift ;;
        help)      COMMAND=${1}; shift ;;
    esac

    case ${COMMAND} in
        server)    server ;;
        console)   console ;;
        queryPlus) queryPlus ;;
        help)      usage; exit ;;
        *)         server ;;
    esac
}

# ---------------------------------------------------------------------------
# Display the help text for this script
# ---------------------------------------------------------------------------
usage() {
    echo "Usage: ${SCRIPT_NAME} [type] [args]"
    echo ""
    echo "type: - the type of process to run, must be one of:"
    echo "    server  - runs a storage enabled DefaultCacheServer"
    echo "              (server is the default if type is omitted)"
    echo "    console - runs a storage disabled Coherence console"
    echo "    query   - runs a storage disabled QueryPlus session"
    echo "    help    - displays this usage text"
    echo ""
    echo "args: - any subsequent arguments are passed as program args to the main class"
    echo ""
    echo "Environment Variables: The following environment variables affect the script operation"
    echo ""
    echo "JAVA_OPTS - this environment variable adds Java options to the start command,"
    echo "            for example memory and other system properties"
    echo ""
    echo "COH_WKA - Sets the WKA address to use to discover a Coherence cluster."
    echo ""
    echo "COH_EXTEND_PORT - If set the Extend Proxy Service will listen on this port instead"
    echo "                  of the default ephemeral port."
    echo ""
    echo "Any jar files added to the /lib folder will be pre-pended to the classpath."
    echo "The /conf folder is on the classpath so any files in this folder can be loaded by the process."
    echo ""
}

server() {
    PROPS=""
    CLASSPATH=""
    MAIN_CLASS="com.tangosol.net.DefaultCacheServer"
    start
}

console() {
    PROPS="-Dcoherence.localstorage=false"
    CLASSPATH=""
    MAIN_CLASS="com.tangosol.net.CacheFactory"
    start
}

queryPlus() {
    PROPS="-Dcoherence.localstorage=false"
    CLASSPATH="${COHERENCE_HOME}/lib/jline.jar"
    MAIN_CLASS="com.tangosol.coherence.dslquery.QueryPlus"
    start
}

start() {
    if [ "${COH_WKA}" != "" ]
    then
        PROPS="${PROPS} -Dcoherence.wka=${COH_WKA}"
    fi

    if [ "${COH_EXTEND_PORT}" != "" ]
    then
        PROPS="${PROPS} -Dcoherence.cacheconfig=extend-cache-config.xml -Dcoherence.extend.port=${COH_EXTEND_PORT}"
    fi

    CLASSPATH="/conf:/lib/*:${CLASSPATH}:${COHERENCE_HOME}/conf:${COHERENCE_HOME}/lib/coherence.jar"

    CMD="${JAVA_HOME}/bin/java -cp ${CLASSPATH} ${PROPS} -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=proxy-cache-config.xml ${JAVA_OPTS} ${MAIN_CLASS} ${COH_MAIN_ARGS}"

    echo "Starting Coherence ${COMMAND} using ${CMD}"

    exec ${CMD}
}

main "$@"
The key change is in the final java -cp command; adjust the corresponding build script to match.
Finally, build the image (-s selects the standalone distribution):
sh buildProxyServer.sh -v 12.2.1.0. -s
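The role dispatch at the top of start.sh can be sanity-checked in isolation. The sketch below re-creates just that case logic (names taken from the script above, simplified to drop the usage/help path):

```shell
# Stand-alone re-creation of the start.sh role dispatch: map the first
# argument to the mode that will be launched, defaulting to "server".
resolve_command() {
  case "${1:-}" in
    server|console|help) echo "${1}" ;;
    queryplus)           echo "queryPlus" ;;
    *)                   echo "server" ;;
  esac
}

resolve_command queryplus   # queryPlus
resolve_command             # server (the default when no type is given)
```

This mirrors why `docker run <image>` with no arguments starts a plain DefaultCacheServer: an empty argument falls through to the `*` branch.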
Building the Coherence CacheServer image
Following the same approach, first create a storage-cache-config.xml.
[root@k8s-node- 12.2.1.0.]# cat storage-cache-config.xml
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>distributed-pof</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <distributed-scheme>
      <scheme-name>distributed-pof</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <listener/>
      <autostart>true</autostart>
      <local-storage>true</local-storage>
    </distributed-scheme>
  </caching-schemes>
</cache-config>
Then create a Dockerfile.cacheserver.
[root@k8s-node- 12.2.1.0.]# cat Dockerfile.cacheserver
# LICENSE CDDL 1.0 + GPL 2.0
#
# ORACLE DOCKERFILES PROJECT
# --------------------------
# This is the Dockerfile for Coherence 12.2. Standalone Distribution
#
# REQUIRED BASE IMAGE TO BUILD THIS IMAGE
# ---------------------------------------
# This Dockerfile requires the base image oracle/serverjre:
# (see https://github.com/oracle/docker-images/tree/master/OracleJava)
#
# REQUIRED FILES TO BUILD THIS IMAGE
# ----------------------------------
# () fmw_12.2.1..0_coherence_Disk1_1of1.zip
#
# Download the Standalone installer from http://www.oracle.com/technetwork/middleware/coherence/downloads/index.html
#
# HOW TO BUILD THIS IMAGE
# -----------------------
# Put all downloaded files in the same directory as this Dockerfile
# Run:
#      $ sh buildDockerImage.sh -s
#
# or if your Docker client requires root access you can run:
#      $ sudo sh buildDockerImage.sh -s
#

# Pull base image
# ---------------
FROM oracle/serverjre:

# Maintainer
# ----------
MAINTAINER Jonathan Knight

# Environment variables required for this build (do NOT change)
ENV FMW_PKG=fmw_12.2.1..0_coherence_Disk1_1of1.zip \
    FMW_JAR=fmw_12.2.1..0_coherence.jar \
    ORACLE_HOME=/u01/oracle/oracle_home \
    PATH=$PATH:/usr/java/default/bin:/u01/oracle/oracle_home/oracle_common/common/bin \
    CONFIG_JVM_ARGS="-Djava.security.egd=file:/dev/./urandom"

ENV COHERENCE_HOME=$ORACLE_HOME/coherence

# Copy files required to build this image
COPY $FMW_PKG install.file oraInst.loc /u01/
COPY start.sh /start.sh
COPY storage-cache-config.xml $COHERENCE_HOME/conf/storage-cache-config.xml

RUN useradd -b /u01 -m -s /bin/bash oracle && \
    echo oracle:oracle | chpasswd && \
    chmod +x /start.sh && \
    chmod a+xr /u01 && \
    chown -R oracle:oracle /u01

USER oracle

# Install and configure Oracle JDK
# Setup required packages (unzip), filesystem, and oracle user
# ------------------------------------------------------------
RUN cd /u01 && $JAVA_HOME/bin/jar xf /u01/$FMW_PKG && cd - && \
    $JAVA_HOME/bin/java -jar /u01/$FMW_JAR -silent -responseFile /u01/install.file -invPtrLoc /u01/oraInst.loc -jreLoc $JAVA_HOME -ignoreSysPrereqs -force -novalidation ORACLE_HOME=$ORACLE_HOME && \
    rm /u01/$FMW_JAR /u01/$FMW_PKG /u01/oraInst.loc /u01/install.file

ENTRYPOINT ["/start.sh"]
Finally, modify start.sh; the key line is:
CMD="${JAVA_HOME}/bin/java -cp ${CLASSPATH} ${PROPS} -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.cacheconfig=storage-cache-config.xml ${JAVA_OPTS} ${MAIN_CLASS} ${COH_MAIN_ARGS}"
Then build the image:
sh buildCacheServer.sh -v 12.2.1.0. -s
docker images now shows:
[root@k8s-node- 12.2.1.0.]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
-domain v2 326bf14bb29f About an hour ago 2.055 GB
oracle/coherence 12.2.1.0.-cacheserver 57a90e86e1d2 hours ago MB
oracle/coherence 12.2.1.0.-proxy 238c85d61468 hours ago MB
Creating the ReplicationControllers on the master node
coherence-proxy.yaml
[root@k8s-master ~]# cat coherence-proxy.yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: coherence-proxy
spec:
replicas:
template:
metadata:
labels:
coherencecluster: "proxy"
version: "0.1"
spec:
containers:
- name: coherenceproxy
image: oracle/coherence:12.2.1.0.-proxy
ports:
- containerPort:
---
apiVersion: v1
kind: Service
metadata:
name: coherenceproxysvc
labels:
coherencecluster: proxy
spec:
type: NodePort
ports:
- port:
protocol: TCP
targetPort:
nodePort:
selector:
coherencecluster: proxy
coherence-cacheserver.yaml
[root@k8s-master ~]# cat coherence-cacheserver.yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: coherence-cacheserver
spec:
replicas:
template:
metadata:
labels:
coherencecluster: "mycluster"
version: "0.1"
spec:
containers:
- name: coherencecacheserver
image: oracle/coherence:12.2.1.0.-cacheserver
kubectl create -f coherence-proxy.yaml
kubectl create -f coherence-cacheserver.yaml
Then check whether the Pods started successfully:
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
coherence-cacheserver-96kz7 / Running 1h 192.168.33.4 k8s-node-
coherence-cacheserver-z67ht / Running 1h 192.168.33.3 k8s-node-
coherence-proxy-j7r0w / Running 1h 192.168.33.5 k8s-node-
coherence-proxy-tg8n8 / Running 1h 192.168.33.6 k8s-node-
Log into one of the Pods and check the Coherence cluster membership; the member count confirms that all nodes have joined the cluster.
MasterMemberSet(
ThisMember=Member(Id=, Timestamp=-- ::04.244, Address=192.168.33.6:, MachineId=, Location=machine:coherence-proxy-tg8n8,process:, Role=CoherenceServer)
OldestMember=Member(Id=, Timestamp=-- ::16.941, Address=192.168.33.4:, MachineId=, Location=machine:coherence-cacheserver-96kz7,process:, Role=CoherenceServer)
ActualMemberSet=MemberSet(Size=
Member(Id=, Timestamp=-- ::16.941, Address=192.168.33.4:, MachineId=, Location=machine:coherence-cacheserver-96kz7,process:, Role=CoherenceServer)
Member(Id=, Timestamp=-- ::20.836, Address=192.168.33.3:, MachineId=, Location=machine:coherence-cacheserver-z67ht,process:, Role=CoherenceServer)
Member(Id=, Timestamp=-- ::02.144, Address=192.168.33.5:, MachineId=, Location=machine:coherence-proxy-j7r0w,process:, Role=CoherenceServer)
Member(Id=, Timestamp=-- ::04.244, Address=192.168.33.6:, MachineId=, Location=machine:coherence-proxy-tg8n8,process:, Role=CoherenceServer)
)
MemberId|ServiceJoined|MemberState
|-- ::16.941|JOINED,
|-- ::20.836|JOINED,
|-- ::02.144|JOINED,
|-- ::04.244|JOINED
RecycleMillis=
RecycleSet=MemberSet(Size=
Member(Id=, Timestamp=-- ::25.741, Address=192.168.33.6:, MachineId=)
Member(Id=, Timestamp=-- ::25.74, Address=192.168.33.5:, MachineId=)
Member(Id=, Timestamp=-- ::45.413, Address=192.168.33.5:, MachineId=)
Member(Id=, Timestamp=-- ::45.378, Address=192.168.33.6:, MachineId=)
)
) TcpRing{Connections=[]}
IpMonitor{Addresses=, Timeout=15s}
Confirm the Service status:
[root@k8s-master ~]# kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coherenceproxysvc 10.254.22.102 <nodes> :/TCP 1h
kubernetes 10.254.0.1 <none> /TCP 26d
coherenceproxysvc is up.
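As a quick sanity check, the Service's cluster IP can be pulled out of the kubectl output mechanically. A sketch, with the sample output inlined (it mirrors the listing above; the port column stays elided, as in the capture):

```shell
# Extract the CLUSTER-IP for a named service from `kubectl get services`
# output (columns: NAME, CLUSTER-IP, EXTERNAL-IP, PORT(S), AGE).
sample='NAME                CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
coherenceproxysvc   10.254.22.102   <nodes>       :/TCP     1h
kubernetes          10.254.0.1      <none>        /TCP      26d'

cluster_ip() {
  printf '%s\n' "$sample" | awk -v svc="$1" '$1 == svc { print $2 }'
}

cluster_ip coherenceproxysvc   # 10.254.22.102
```

In a live cluster you would feed `kubectl get services` into the same awk filter instead of the inlined sample.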
Configuring the WebLogic Pod as the Coherence client
Because setDomainEnv.sh must be modified to inject the client-side Coherence configuration, switch to the WebLogic directory.
[root@k8s-node- -domain]# pwd
/home/weblogic/docker/OracleWebLogic/samples/-domain
Create a new proxy-client.xml.
[root@k8s-node- -domain]# cat proxy-client.xml
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>extend-dist</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>coherenceproxysvc</address>
              <port></port>
            </socket-address>
          </remote-addresses>
          <connect-timeout>10s</connect-timeout>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>
Note that the address must point at the Service name, which DNS will resolve.
Modify the Dockerfile; the core change is adding JAVA_OPTIONS and CLASSPATH.
FROM oracle/weblogic:12.1.-generic

# Maintainer
# ----------
MAINTAINER Bruno Borges <bruno.borges@oracle.com>

# WLS Configuration
# -------------------------------
ARG ADMIN_PASSWORD
ARG PRODUCTION_MODE

ENV DOMAIN_NAME="base_domain" \
    DOMAIN_HOME="/u01/oracle/user_projects/domains/base_domain" \
    ADMIN_PORT="" \
    ADMIN_HOST="wlsadmin" \
    NM_PORT="" \
    MS_PORT="" \
    PRODUCTION_MODE="${PRODUCTION_MODE:-prod}" \
    JAVA_OPTIONS="-Dweblogic.security.SSL.ignoreHostnameVerification=true -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=/u01/oracle/proxy-client.xml" \
    CLASSPATH="/u01/oracle/coherence.jar" \
    PATH=$PATH:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin:/u01/oracle/user_projects/domains/base_domain/bin:/u01/oracle

# Add files required to build this image
USER oracle
COPY container-scripts/* /u01/oracle/
COPY coherence.jar /u01/oracle/
COPY proxy-client.xml /u01/oracle/

# Configuration of WLS Domain
WORKDIR /u01/oracle
RUN /u01/oracle/wlst /u01/oracle/create-wls-domain.py && \
    mkdir -p /u01/oracle/user_projects/domains/base_domain/servers/AdminServer/security && \
    echo "username=weblogic" > /u01/oracle/user_projects/domains/base_domain/servers/AdminServer/security/boot.properties && \
    echo "password=$ADMIN_PASSWORD" >> /u01/oracle/user_projects/domains/base_domain/servers/AdminServer/security/boot.properties && \
    echo ". /u01/oracle/user_projects/domains/base_domain/bin/setDomainEnv.sh" >> /u01/oracle/.bashrc && \
    echo "export PATH=$PATH:/u01/oracle/wlserver/common/bin:/u01/oracle/user_projects/domains/base_domain/bin" >> /u01/oracle/.bashrc && \
    cp /u01/oracle/commEnv.sh /u01/oracle/wlserver/common/bin/commEnv.sh && \
    rm /u01/oracle/create-wls-domain.py /u01/oracle/jaxrs2-template.jar

# Expose Node Manager default port, and also default http/https ports for admin console
EXPOSE $NM_PORT $ADMIN_PORT $MS_PORT

WORKDIR $DOMAIN_HOME

# Define default command to start bash.
CMD ["startWebLogic.sh"]
Then build the image:
docker build -t -domain:v2 --build-arg ADMIN_PASSWORD=welcome1 .
Create a WebLogic Pod to act as the client.
[root@k8s-master ~]# cat weblogic-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: weblogic
spec:
containers:
- name: weblogic
image: -domain:v2
ports:
- containerPort:
Once it starts, the log confirms that WebLogic picked up our customized parameters:
[root@k8s-master ~]# kubectl logs weblogic
.
.
JAVA Memory arguments: -Djava.security.egd=file:/dev/./urandom
.
CLASSPATH=/u01/oracle/wlserver/../oracle_common/modules/javax.persistence_2..jar:/u01/oracle/wlserver/../wlserver/modules/com.oracle.weblogic.jpa21support_1.0.0.0_2-.jar:/usr/java/jdk1..0_101/lib/tools.jar:/u01/oracle/wlserver/server/lib/weblogic_sp.jar:/u01/oracle/wlserver/server/lib/weblogic.jar:/u01/oracle/wlserver/../oracle_common/modules/net.sf.antcontrib_1.1.0.0_1-0b3/lib/ant-contrib.jar:/u01/oracle/wlserver/modules/features/oracle.wls.common.nodemanager_2.0.0..jar:/u01/oracle/wlserver/../oracle_common/modules/com.oracle.cie.config-wls-online_8.1.0..jar:/u01/oracle/wlserver/common/derby/lib/derbyclient.jar:/u01/oracle/wlserver/common/derby/lib/derby.jar:/u01/oracle/wlserver/server/lib/xqrl.jar:/u01/oracle/coherence.jar
.
PATH=/u01/oracle/wlserver/server/bin:/u01/oracle/wlserver/../oracle_common/modules/org.apache.ant_1.9.2/bin:/usr/java/jdk1..0_101/jre/bin:/usr/java/jdk1..0_101/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/java/default/bin:/u01/oracle/oracle_common/common/bin:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin:/u01/oracle/user_projects/domains/base_domain/bin:/u01/oracle
.
***************************************************
* To start WebLogic Server, use a username and *
* password assigned to an admin-level user. For *
* server administration, use the WebLogic Server *
* console at http://hostname:port/console *
***************************************************
starting weblogic with Java version:
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) -Bit Server VM (build 25.101-b13, mixed mode)
Starting WLS with line:
/usr/java/jdk1..0_101/bin/java -server -Djava.security.egd=file:/dev/./urandom -Dweblogic.Name=AdminServer -Djava.security.policy=/u01/oracle/wlserver/server/lib/weblogic.policy -Dweblogic.ProductionModeEnabled=true -Dweblogic.security.SSL.ignoreHostnameVerification=true -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=/u01/oracle/proxy-client.xml -Djava.endorsed.dirs=/usr/java/jdk1..0_101/jre/lib/endorsed:/u01/oracle/wlserver/../oracle_common/modules/endorsed -da -Dwls.home=/u01/oracle/wlserver/server -Dweblogic.home=/u01/oracle/wlserver/server -Dweblogic.utils.cmm.lowertier.ServiceDisabled=true weblogic.Server
Deploy a HelloWorld.war; the core of its index.jsp is:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
    "http://www.w3.org/TR/html4/loose.dtd">
<%@page import="java.util.*"%>
<%@page import="com.tangosol.net.*"%>
<%@ page contentType="text/html;charset=windows-1252"%>
<html>
<body>
This is a Helloworld test
<h3>
<%
    String mysession;
    NamedCache cache;
    cache = CacheFactory.getCache("demoCache");
    cache.put("eric","eric.nie@oracle.com");
%>
Get Eric Email: <%=cache.get("eric").toString()%>
</h3>
</body>
</html>
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
coherence-cacheserver-96kz7 / Running 1h 192.168.33.4 k8s-node-
coherence-cacheserver-z67ht / Running 1h 192.168.33.3 k8s-node-
coherence-proxy-j7r0w / Running 1h 192.168.33.5 k8s-node-
coherence-proxy-tg8n8 / Running 1h 192.168.33.6 k8s-node-
weblogic / Running 1h 192.168.33.7 k8s-node-
After deploying, access the application, then check the WebLogic log:
<Jun , :: AM GMT> <Notice> <WebLogicServer> <BEA-> <Started the WebLogic Server Administration Server "AdminServer" for domain "base_domain" running in production mode.>
<Jun , :: AM GMT> <Notice> <WebLogicServer> <BEA-> <The server started in RUNNING mode.>
<Jun , :: AM GMT> <Warning> <Server> <BEA-> <The hostname "localhost", maps to multiple IP addresses: 127.0.0.1, :::::::.>
<Jun , :: AM GMT> <Notice> <WebLogicServer> <BEA-> <Server state changed to RUNNING.>
-- ::29.193/280.892 Oracle Coherence 12.1.3.0. <Info> (thread=[ACTIVE] ExecuteThread: '' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Loaded operational configuration from "jar:file:/u01/oracle/coherence/lib/coherence.jar!/tangosol-coherence.xml"
-- ::29.451/281.086 Oracle Coherence 12.1.3.0. <Info> (thread=[ACTIVE] ExecuteThread: '' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Loaded operational overrides from "jar:file:/u01/oracle/coherence/lib/coherence.jar!/tangosol-coherence-override-dev.xml"
-- ::29.479/281.114 Oracle Coherence 12.1.3.0. <D5> (thread=[ACTIVE] ExecuteThread: '' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Optional configuration override "/tangosol-coherence-override.xml" is not specified
-- ::29.511/281.146 Oracle Coherence 12.1.3.0. <D5> (thread=[ACTIVE] ExecuteThread: '' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Optional configuration override "cache-factory-config.xml" is not specified
-- ::29.525/281.159 Oracle Coherence 12.1.3.0. <D5> (thread=[ACTIVE] ExecuteThread: '' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Optional configuration override "cache-factory-builder-config.xml" is not specified
-- ::29.526/281.161 Oracle Coherence 12.1.3.0. <D5> (thread=[ACTIVE] ExecuteThread: '' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified Oracle Coherence Version 12.1.3.0. Build
Grid Edition: Development mode
Copyright (c) , , Oracle and/or its affiliates. All rights reserved. -- ::29.672/281.306 Oracle Coherence GE 12.1.3.0. <Info> (thread=[ACTIVE] ExecuteThread: '' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Loaded cache configuration from "file:/u01/oracle/proxy-client.xml"; this document does not refer to any schema definition and has not been validated.
-- ::30.225/281.860 Oracle Coherence GE 12.1.3.0. <Info> (thread=[ACTIVE] ExecuteThread: '' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Created cache factory com.tangosol.net.ExtensibleConfigurableCacheFactory
-- ::30.507/282.142 Oracle Coherence GE 12.1.3.0. <D5> (thread=[ACTIVE] ExecuteThread: '' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Connecting Socket to 10.254.203.94:
-- ::30.534/282.169 Oracle Coherence GE 12.1.3.0. <Info> (thread=[ACTIVE] ExecuteThread: '' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Error connecting Socket to 10.254.203.94:: java.net.ConnectException: Connection refused
-- ::11.056/862.693 Oracle Coherence GE 12.1.3.0. <Info> (thread=[ACTIVE] ExecuteThread: '' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Restarting Service: ExtendTcpCacheService
-- ::11.156/862.791 Oracle Coherence GE 12.1.3.0. <D5> (thread=[ACTIVE] ExecuteThread: '' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Connecting Socket to 10.254.22.102:
-- ::11.188/862.837 Oracle Coherence GE 12.1.3.0. <Info> (thread=[ACTIVE] ExecuteThread: '' for queue: 'weblogic.kernel.Default (self-tuning)', member=n/a): Connected Socket to 10.254.22.102:
Troubleshooting
To verify that DNS resolution and service routing are correct, run:
iptables -L -v -n -t nat
and check that the coherenceproxysvc rules route to the right Pods and port.
[root@k8s-node- -domain]# iptables -L -v -n -t nat
Chain PREROUTING (policy ACCEPT packets, bytes)
 pkts bytes target prot opt in out source destination
1898K  KUBE-SERVICES all -- * * 0.0.0.0/ 0.0.0.0/ /* kubernetes service portals */
 DOCKER all -- * * 0.0.0.0/ 0.0.0.0/ ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT packets, bytes)
 pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT packets, bytes)
 pkts bytes target prot opt in out source destination
 126K KUBE-SERVICES all -- * * 0.0.0.0/ 0.0.0.0/ /* kubernetes service portals */
 DOCKER all -- * * 0.0.0.0/ !127.0.0.0/ ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT packets, bytes)
 pkts bytes target prot opt in out source destination
 283K MASQUERADE all -- * !docker0 192.168.33.0/ 0.0.0.0/
1716K KUBE-POSTROUTING all -- * * 0.0.0.0/ 0.0.0.0/ /* kubernetes postrouting rules */
 RETURN all -- * * 192.168.122.0/ 224.0.0.0/
 RETURN all -- * * 192.168.122.0/ 255.255.255.255
 MASQUERADE tcp -- * * 192.168.122.0/ !192.168.122.0/ masq ports: -
 MASQUERADE udp -- * * 192.168.122.0/ !192.168.122.0/ masq ports: -
 MASQUERADE all -- * * 192.168.122.0/ !192.168.122.0/

Chain DOCKER ( references)
 pkts bytes target prot opt in out source destination
 RETURN all -- docker0 * 0.0.0.0/ 0.0.0.0/

Chain KUBE-MARK-DROP ( references)
 pkts bytes target prot opt in out source destination
 MARK all -- * * 0.0.0.0/ 0.0.0.0/ MARK or 0x8000

Chain KUBE-MARK-MASQ ( references)
 pkts bytes target prot opt in out source destination
 MARK all -- * * 0.0.0.0/ 0.0.0.0/ MARK or 0x4000

Chain KUBE-NODEPORTS ( references)
 pkts bytes target prot opt in out source destination
 KUBE-MARK-MASQ tcp -- * * 0.0.0.0/ 0.0.0.0/ /* default/coherenceproxysvc: */ tcp dpt:
 KUBE-SVC-BQXHRGVXFCEH2BHH tcp -- * * 0.0.0.0/ 0.0.0.0/ /* default/coherenceproxysvc: */ tcp dpt:

Chain KUBE-POSTROUTING ( references)
 pkts bytes target prot opt in out source destination
 MASQUERADE all -- * * 0.0.0.0/ 0.0.0.0/ /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000

Chain KUBE-SEP-67FRRWLKQK2OD4HZ ( references)
 pkts bytes target prot opt in out source destination
 KUBE-MARK-MASQ all -- * * 192.168.33.2 0.0.0.0/ /* kube-system/kube-dns:dns-tcp */
 DNAT tcp -- * * 0.0.0.0/ 0.0.0.0/ /* kube-system/kube-dns:dns-tcp */ tcp to:192.168.33.2:

Chain KUBE-SEP-GIM2MHZZZBZJL55J ( references)
 pkts bytes target prot opt in out source destination
 KUBE-MARK-MASQ all -- * * 192.168.0.105 0.0.0.0/ /* default/kubernetes:https */
 DNAT tcp -- * * 0.0.0.0/ 0.0.0.0/ /* default/kubernetes:https */ recent: SET name: KUBE-SEP-GIM2MHZZZBZJL55J side: source mask: 255.255.255.255 tcp to:192.168.0.105:

Chain KUBE-SEP-IM4M52WKVEC4AZF3 ( references)
 pkts bytes target prot opt in out source destination
 KUBE-MARK-MASQ all -- * * 192.168.33.6 0.0.0.0/ /* default/coherenceproxysvc: */
 DNAT tcp -- * * 0.0.0.0/ 0.0.0.0/ /* default/coherenceproxysvc: */ tcp to:192.168.33.6:

Chain KUBE-SEP-LUF3R3GRCSK6KKRS ( references)
 pkts bytes target prot opt in out source destination
 KUBE-MARK-MASQ all -- * * 192.168.33.2 0.0.0.0/ /* kube-system/kube-dns:dns */
 DNAT udp -- * * 0.0.0.0/ 0.0.0.0/ /* kube-system/kube-dns:dns */ udp to:192.168.33.2:

Chain KUBE-SEP-ZZECWQBQCJPODCBC ( references)
 pkts bytes target prot opt in out source destination
 KUBE-MARK-MASQ all -- * * 192.168.33.5 0.0.0.0/ /* default/coherenceproxysvc: */
 DNAT tcp -- * * 0.0.0.0/ 0.0.0.0/ /* default/coherenceproxysvc: */ tcp to:192.168.33.5:

Chain KUBE-SERVICES ( references)
 pkts bytes target prot opt in out source destination
 KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- * * 0.0.0.0/ 10.254.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:
 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/ 10.254.254.254 /* kube-system/kube-dns:dns cluster IP */ udp dpt:
 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/ 10.254.254.254 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:
 KUBE-SVC-BQXHRGVXFCEH2BHH tcp -- * * 0.0.0.0/ 10.254.22.102 /* default/coherenceproxysvc: cluster IP */ tcp dpt:
 KUBE-NODEPORTS all -- * * 0.0.0.0/ 0.0.0.0/ /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

Chain KUBE-SVC-BQXHRGVXFCEH2BHH ( references)
 pkts bytes target prot opt in out source destination
 KUBE-SEP-ZZECWQBQCJPODCBC all -- * * 0.0.0.0/ 0.0.0.0/ /* default/coherenceproxysvc: */ statistic mode random probability 0.50000000000
 KUBE-SEP-IM4M52WKVEC4AZF3 all -- * * 0.0.0.0/ 0.0.0.0/ /* default/coherenceproxysvc: */

Chain KUBE-SVC-ERIFXISQEP7F7OF4 ( references)
 pkts bytes target prot opt in out source destination
 KUBE-SEP-67FRRWLKQK2OD4HZ all -- * * 0.0.0.0/ 0.0.0.0/ /* kube-system/kube-dns:dns-tcp */

Chain KUBE-SVC-NPX46M4PTMTKRN6Y ( references)
 pkts bytes target prot opt in out source destination
 KUBE-SEP-GIM2MHZZZBZJL55J all -- * * 0.0.0.0/ 0.0.0.0/ /* default/kubernetes:https */ recent: CHECK seconds: reap name: KUBE-SEP-GIM2MHZZZBZJL55J side: source mask: 255.255.255.255
 KUBE-SEP-GIM2MHZZZBZJL55J all -- * * 0.0.0.0/ 0.0.0.0/ /* default/kubernetes:https */

Chain KUBE-SVC-TCOU7JCQXEZGVUNU ( references)
 pkts bytes target prot opt in out source destination
 KUBE-SEP-LUF3R3GRCSK6KKRS all -- * * 0.0.0.0/ 0.0.0.0/ /* kube-system/kube-dns:dns */
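The DNAT rules are what actually spread traffic across the proxy Pods. As a sketch, the endpoint IPs behind a Service can be extracted mechanically from such a dump (a few sample lines mirroring the capture above are inlined; the port numbers were lost in the paste, so only the target IPs are recovered):

```shell
# List the pod IPs a Service DNATs to, given iptables nat-table rule lines.
rules='DNAT tcp -- * * 0.0.0.0/ 0.0.0.0/ /* default/coherenceproxysvc: */ tcp to:192.168.33.6:
DNAT tcp -- * * 0.0.0.0/ 0.0.0.0/ /* default/coherenceproxysvc: */ tcp to:192.168.33.5:
DNAT udp -- * * 0.0.0.0/ 0.0.0.0/ /* kube-system/kube-dns:dns */ udp to:192.168.33.2:'

svc_endpoints() {
  printf '%s\n' "$rules" |
    grep "default/$1:" |
    sed -n 's/.*to:\([0-9.]*\):.*/\1/p'
}

svc_endpoints coherenceproxysvc
```

The two IPs printed should match the proxy Pod IPs from kubectl get pods; if they do not, the Service selector or endpoints are wrong.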
Possible improvements:
- The image build scripts were adapted from the official ones; much that is unnecessary could be stripped out.
- The cacheserver and proxy could share a single image, with different startup parameters selecting which role to run.
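That second point can be sketched as a small dispatch in start.sh: pick the cache configuration file from a role argument instead of baking one file into each image (the file names are the ones used earlier in this article; the ROLE convention itself is hypothetical):

```shell
# Sketch of the "one image, many roles" idea: map a role argument to the
# cache configuration file the JVM should be started with.
role_cacheconfig() {
  case "${1:-storage}" in
    proxy)   echo "proxy-cache-config.xml" ;;
    storage) echo "storage-cache-config.xml" ;;
    *)       echo "storage-cache-config.xml" ;;
  esac
}

role_cacheconfig proxy   # proxy-cache-config.xml
role_cacheconfig         # storage-cache-config.xml (the default)
```

The Pod spec would then pass the role via container args or an environment variable, and start.sh would splice the returned file name into -Dtangosol.coherence.cacheconfig.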