This post verifies the third approach. As before, the data to be loaded is spread across all of the storage nodes; the difference is that the loading is driven through the InvocationService provided by the storage members rather than through PreloadRequest.

The principle is shown in the figure below.

The prerequisites are:

  • All of the keys to be loaded must be known in advance.
  • The keys must be split across the storage members according to how many members there are. Here this is done by Map<Member, List<String>> divideWork(Set members), which takes the set of Coherence storage members and returns a map keyed by Member, whose value is the list of entry keys that member should load.
  • The loading itself is driven by the run() method of MyLoadInvocable, which loads each member's share of the data on that member. MyLoadInvocable must extend AbstractInvocable and implement PortableObject; I am not sure why, but when I tried implementing Serializable instead it failed.
  • While splitting the keys I found that the List<String> kept being overwritten by later values; creating a new List each time before putting it into the map fixed this (a simpler round-robin sketch follows this list).
  • There is no need to implement CacheLoader or CacheStore.
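As a side note on the key-splitting step, the sketch below (not the author's code; the class KeySplitter and its divide() method are hypothetical names) shows an alternative, round-robin division that avoids the overwrite problem by giving every member its own List from the start.

package dataload;

import com.tangosol.net.Member;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical helper, not part of the original code: spreads the keys
// round-robin across the storage members, one dedicated List per member.
public class KeySplitter {

    public static Map<Member, List<String>> divide(Set<Member> members, List<String> keys) {
        List<Member> ordered = new ArrayList<Member>(members);
        Map<Member, List<String>> mapWork = new HashMap<Member, List<String>>();

        // a fresh List per member, so no slice is ever reused or overwritten
        for (Member m : ordered) {
            mapWork.put(m, new ArrayList<String>());
        }

        // deal the keys out round-robin
        for (int i = 0; i < keys.size(); i++) {
            mapWork.get(ordered.get(i % ordered.size())).add(keys.get(i));
        }
        return mapWork;
    }
}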

Person.java

package dataload;

import java.io.Serializable;

public class Person implements Serializable {

    private String Id;
    private String Firstname;
    private String Lastname;
    private String Address;

    public Person() {
        super();
    }

    public Person(String sId, String sFirstname, String sLastname, String sAddress) {
        Id = sId;
        Firstname = sFirstname;
        Lastname = sLastname;
        Address = sAddress;
    }

    public void setId(String Id) {
        this.Id = Id;
    }

    public String getId() {
        return Id;
    }

    public void setFirstname(String Firstname) {
        this.Firstname = Firstname;
    }

    public String getFirstname() {
        return Firstname;
    }

    public void setLastname(String Lastname) {
        this.Lastname = Lastname;
    }

    public String getLastname() {
        return Lastname;
    }

    public void setAddress(String Address) {
        this.Address = Address;
    }

    public String getAddress() {
        return Address;
    }
}

MyLoadInvocable.java

The loading task: its run() method is invoked on each storage member and loads that member's share of the data into the cache.

package dataload;

import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
import com.tangosol.io.pof.PortableObject;
import com.tangosol.net.AbstractInvocable;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

import java.io.IOException;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import java.util.Hashtable;
import java.util.List;

import javax.naming.Context;
import javax.naming.InitialContext;

public class MyLoadInvocable extends AbstractInvocable implements PortableObject {

    // keys this member is responsible for loading
    private List<String> m_memberKeys;
    // name of the target cache
    private String m_cache;

    public MyLoadInvocable() {
        super();
    }

    public MyLoadInvocable(List<String> memberKeys, String cache) {
        m_memberKeys = memberKeys;
        m_cache = cache;
    }

    public Connection getConnection() {
        Connection con = null;
        try {
            Hashtable<String, String> ht = new Hashtable<String, String>();
            ht.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            ht.put(Context.PROVIDER_URL, "t3://localhost:7001");
            Context ctx = new InitialContext(ht);
            javax.sql.DataSource ds = (javax.sql.DataSource) ctx.lookup("ds");

            con = ds.getConnection();
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
        return con;
    }

    public void run() {
        System.out.println("Enter MyLoadInvocable run....");
        NamedCache cache = CacheFactory.getCache(m_cache);
        Connection con = getConnection();
        String sSQL = "SELECT id, firstname, lastname, address FROM persons WHERE id = ?";
        System.out.println("Enter load= " + sSQL);

        try {
            PreparedStatement stmt = con.prepareStatement(sSQL);

            // load each key assigned to this member and put the result into the cache
            for (int i = 0; i < m_memberKeys.size(); i++) {
                String id = m_memberKeys.get(i);
                System.out.println("===========" + id);

                stmt.setString(1, id);
                ResultSet rslt = stmt.executeQuery();
                if (rslt.next()) {
                    Person person = new Person(rslt.getString("id"),
                                               rslt.getString("firstname"),
                                               rslt.getString("lastname"),
                                               rslt.getString("address"));
                    cache.put(person.getId(), person);
                }
                rslt.close();
            }

            stmt.close();
            con.close();
        } catch (Exception e) {
            System.out.println("===" + e.getMessage());
        }
    }

    public void readExternal(PofReader in)
            throws IOException {
        m_memberKeys = (List<String>) in.readObject(0);
        m_cache = (String) in.readObject(1);
    }

    /**
     * {@inheritDoc}
     */
    public void writeExternal(PofWriter out)
            throws IOException {
        out.writeObject(0, m_memberKeys);
        out.writeObject(1, m_cache);
    }
}
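Why implementing Serializable failed is not clear from the post; one common cause is that the invocation service is configured to use POF, in which case every class it transports must implement PortableObject and be registered in a pof-config.xml visible to the client and to every storage member. A minimal registration sketch under that assumption (the type-id 1001 is arbitrary; ids below 1000 are reserved by Coherence):

<?xml version="1.0"?>
<pof-config>
  <user-type-list>
    <!-- the Coherence built-in POF types -->
    <include>coherence-pof-config.xml</include>
    <!-- register the invocable so both sides can serialize and deserialize it -->
    <user-type>
      <type-id>1001</type-id>
      <class-name>dataload.MyLoadInvocable</class-name>
    </user-type>
  </user-type-list>
</pof-config>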

LoaderUsingEP.java

The loading client: it splits the keys into slices, looks up the InvocationService, and dispatches the tasks.

package dataload;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.InvocationService;
import com.tangosol.net.Member;
import com.tangosol.net.NamedCache;
import com.tangosol.net.PartitionedService;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;

import javax.naming.Context;
import javax.naming.InitialContext;

public class LoaderUsingEP {

    private Connection m_con;

    public Connection getConnection() {
        try {
            Hashtable<String, String> ht = new Hashtable<String, String>();
            ht.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            ht.put(Context.PROVIDER_URL, "t3://localhost:7001");
            Context ctx = new InitialContext(ht);
            javax.sql.DataSource ds = (javax.sql.DataSource) ctx.lookup("ds");

            m_con = ds.getConnection();
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
        return m_con;
    }

    // Only the storage-enabled members of the cache service should receive work.
    protected Set getStorageMembers(NamedCache cache) {
        return ((PartitionedService) cache.getCacheService())
                .getOwnershipEnabledMembers();
    }

    // Splits the ids in the persons table into one List per storage member.
    protected Map<Member, List<String>> divideWork(Set members) {
        Iterator i = members.iterator();
        Map<Member, List<String>> mapWork = new HashMap<Member, List<String>>(members.size());

        try {
            String sql = "select count(*) from persons";
            int totalcount = 0;
            int membercount = members.size();
            Connection con = getConnection();
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery(sql);
            while (rs.next())
                totalcount = Integer.parseInt(rs.getString(1));

            int onecount = totalcount / membercount;   // keys per member
            int lastcount = totalcount % membercount;  // remainder goes to the last member

            sql = "select id from persons";

            ResultSet rs1 = st.executeQuery(sql);
            int count = 0;
            int currentworker = 0;
            ArrayList<String> list = new ArrayList<String>();

            while (rs1.next()) {
                if (count < onecount) {
                    list.add(rs1.getString("id"));
                    count++;
                } else {
                    // the current member's slice is full: hand it over and start the next slice
                    Member member = (Member) i.next();

                    // copy into a new List; reusing "list" directly caused earlier
                    // entries in the map to be overwritten
                    ArrayList<String> list2 = new ArrayList<String>();
                    list2.addAll(list);
                    mapWork.put(member, list2);

                    list.clear();
                    count = 0;
                    list.add(rs1.getString("id"));
                    count++;

                    currentworker++;
                    if (currentworker == membercount - 1) {
                        // the last member also takes the remainder
                        onecount = onecount + lastcount;
                    }
                }
            }

            // the remaining keys go to the last member
            Member member = (Member) i.next();
            mapWork.put(member, list);

            st.close();
            con.close();
        } catch (Exception e) {
            System.out.println("Exception...." + e.getMessage());
        }

        for (Map.Entry<Member, List<String>> entry : mapWork.entrySet()) {
            System.out.println("final=" + entry.getKey());
            List<String> memberKeys = entry.getValue();
            for (int j = 0; j < memberKeys.size(); j++) {
                System.out.println(memberKeys.get(j));
            }
        }
        return mapWork;
    }

    public void load() {
        NamedCache cache = CacheFactory.getCache("SampleCache");

        Set members = getStorageMembers(cache);
        System.out.println("members" + members.size());

        Map<Member, List<String>> mapWork = divideWork(members);

        InvocationService service = (InvocationService)
                CacheFactory.getService("LocalInvocationService");

        // send each member its own slice of keys; execute() is asynchronous
        for (Map.Entry<Member, List<String>> entry : mapWork.entrySet()) {
            Member member = entry.getKey();
            List<String> memberKeys = entry.getValue();
            System.out.println(memberKeys.size());

            MyLoadInvocable task = new MyLoadInvocable(memberKeys, cache.getCacheName());
            service.execute(task, Collections.singleton(member), null);
        }
    }

    public static void main(String[] args) {
        LoaderUsingEP ep = new LoaderUsingEP();
        ep.load();
    }
}
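One design note on load(): execute() with a null InvocationObserver is fire-and-forget, so the client returns before the members have finished loading their slices. If the client needs to block until every slice is loaded, InvocationService.query() can be used instead. A rough sketch of that variant of the dispatch loop (same variables as in load(); not part of the original code):

for (Map.Entry<Member, List<String>> entry : mapWork.entrySet()) {
    MyLoadInvocable task = new MyLoadInvocable(entry.getValue(), cache.getCacheName());
    // query() waits for run() to finish on the target member and returns its result map
    Map results = service.query(task, Collections.singleton(entry.getKey()));
    System.out.println("loaded on " + entry.getKey() + " -> " + results);
}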

The client-side configuration schema that needs to be set up:

storage-override-client.xml

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <!--
    SampleCache is mapped to the distributed, DB-backed scheme.
    -->
    <cache-mapping>
      <cache-name>SampleCache</cache-name>
      <scheme-name>distributed-pof</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!--
    DB-backed distributed caching scheme.
    -->
    <distributed-scheme>
      <scheme-name>distributed-pof</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme/>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>dataload.DBCacheStore</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>persons</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
        </read-write-backing-map-scheme>
      </backing-map-scheme>

      <listener/>
      <autostart>true</autostart>
      <local-storage>false</local-storage>
    </distributed-scheme>

    <invocation-scheme>
      <scheme-name>my-invocation</scheme-name>
      <service-name>LocalInvocationService</service-name>
      <thread-count>5</thread-count>
      <autostart>true</autostart>
    </invocation-scheme>

  </caching-schemes>
</cache-config>

The storage-node schema:

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <!--
    SampleCache is mapped to the distributed, DB-backed scheme.
    -->
    <cache-mapping>
      <cache-name>SampleCache</cache-name>
      <scheme-name>distributed-pof</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!--
    DB-backed distributed caching scheme.
    -->
    <distributed-scheme>
      <scheme-name>distributed-pof</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme/>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>dataload.DBCacheStore</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>persons</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
        </read-write-backing-map-scheme>
      </backing-map-scheme>

      <listener/>
      <autostart>true</autostart>
      <local-storage>true</local-storage>
    </distributed-scheme>

    <invocation-scheme>
      <scheme-name>my-invocation</scheme-name>
      <service-name>LocalInvocationService</service-name>
      <thread-count>5</thread-count>
      <autostart>true</autostart>
    </invocation-scheme>

  </caching-schemes>
</cache-config>
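How the JVMs are started is not shown in the post. Assuming the standard Coherence cache-config system property is used, the client would be pointed at the client schema and each cache server at the storage schema, roughly like this (the storage file name storage-override-server.xml is made up here for illustration):

java -Dtangosol.coherence.cacheconfig=storage-override-client.xml dataload.LoaderUsingEP
java -Dtangosol.coherence.cacheconfig=storage-override-server.xml com.tangosol.net.DefaultCacheServer

Whichever file names are used, the service-name of the invocation-scheme (LocalInvocationService) must match the name passed to CacheFactory.getService() in LoaderUsingEP.load(), and the scheme must appear in both configurations, as it does above.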

Output

The output shows the data being loaded in slices, with each storage member loading its own share.
