1, java.lang.ClassNotFoundException: Unknown pair

1. Please try to turn on isStoreKeepBinary in cache settings - like this; please note the last line:

if (persistence) {
    // Configuring Cassandra's persistence
    DataSource dataSource = new DataSource();

    // ...here go the rest of your settings as they appear now...

    configuration.setWriteBehindEnabled(true);
    configuration.setStoreKeepBinary(true);
}

This setting forces Ignite to avoid binary deserialization when working with the underlying cache store.

2. I can reproduce it when, in loadCache(), I put something that isn't exactly the expected Item into the cache:

private void loadCache(IgniteCache<Integer, Item> cache, /* Ignite.binary() */ IgniteBinary binary) {
    // Note the absence of the package name here:
    BinaryObjectBuilder builder = binary.builder("Item");

    builder.setField("name", "a");
    builder.setField("brand", "B");
    builder.setField("type", "c");
    builder.setField("manufacturer", "D");
    builder.setField("description", "e");
    builder.setField("itemId", 1);

    // Putting this mistyped binary object is what later triggers "Unknown pair":
    cache.<Integer, BinaryObject>withKeepBinary().put(1, builder.build());
}

References:

http://apache-ignite-users.70518.x6.nabble.com/ClassNotFoundException-with-affinity-run-td5359.html

https://stackoverflow.com/questions/44781672/apache-ignite-java-lang-classnotfoundexception-unknown-pair#

https://stackoverflow.com/questions/47502111/apache-ignite-ignitecheckedexception-unknown-pair#

2, java.lang.IndexOutOfBoundsException + Failed to wait for completion of partition map exchange

Exception description:

2018-06-06 14:24:02.932 ERROR 17364 --- [ange-worker-#42] .c.d.d.p.GridDhtPartitionsExchangeFuture : Failed to reinitialize local partitions (preloading will be stopped):
...
java.lang.IndexOutOfBoundsException: index 678
... org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2279) [ignite-core-2.3.0.jar:2.3.0]
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) [ignite-core-2.3.0.jar:2.3.0]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73]
2018-06-06 14:24:02.932 INFO 17364 --- [ange-worker-#42] .c.d.d.p.GridDhtPartitionsExchangeFuture : Finish exchange future [startVer=AffinityTopologyVersion [topVer=1, minorTopVer=1], resVer=null, err=java.lang.IndexOutOfBoundsException: index 678]
2018-06-06 14:24:02.941 ERROR 17364 --- [ange-worker-#42] .i.p.c.GridCachePartitionExchangeManager : Failed to wait for completion of partition map exchange (preloading will not start): GridDhtPartitionsExchangeFuture
...
org.apache.ignite.IgniteCheckedException: index 678
at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7252) ~[ignite-core-2.3.0.jar:2.3.0]
....
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2279) ~[ignite-core-2.3.0.jar:2.3.0]
... 2 common frames omitted

The cause is as follows:

If a cache is defined in REPLICATED mode with persistence enabled, and it is later changed to PARTITIONED mode and data is imported, this error is thrown on the next restart.

For example, in the following scenario:

default-config.xml

        <property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
...
<property name="name" value="Test"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="cacheMode" value="REPLICATED"/>
...
</bean>
</list>
</property>
        ignite.destroyCache("Test");
IgniteCache<Long, CommRate> cache = ignite.getOrCreateCache("Test");

On restart, the configuration in default-config.xml takes effect first, which is why the problem appears.

The solution is to never change the cache mode while persistence is enabled, or to avoid predefining the cache in the configuration file.

I can't reproduce your case, but the issue could occur if you had a REPLICATED cache, after some time changed it to PARTITIONED, and then, for example, called getOrCreateCache with the old cache name.
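If the mode really must change, one safe path is to destroy the persisted cache first and recreate it explicitly in the new mode. A minimal sketch, assuming the cache is recreated programmatically rather than predefined in default-config.xml:

    // Destroy the old REPLICATED cache (including its persisted data) first,
    // then recreate it explicitly in the new mode.
    ignite.destroyCache("Test");
    CacheConfiguration<Long, CommRate> cfg = new CacheConfiguration<>("Test");
    cfg.setCacheMode(CacheMode.PARTITIONED);
    IgniteCache<Long, CommRate> cache = ignite.getOrCreateCache(cfg);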

References:

http://apache-ignite-users.70518.x6.nabble.com/Weird-index-out-bound-Exception-td14905.html

3, Failed to find SQL table for type xxxx

The imported data is invalid; destroy the cache and re-import the data.
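A minimal sketch (the cache name is illustrative):

    // Drop the corrupted cache, then re-run the import.
    ignite.destroyCache("Item");
    IgniteCache<Integer, Item> cache = ignite.getOrCreateCache("Item");
    // ...re-import the data...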

4, Ignite messaging delivers duplicate messages, multiplying with each invocation

Duplicate messages that multiply with each invocation are caused by registering the listener multiple times. remoteListen and localListen for a given topic should each be executed only once; every extra call registers another listener, and the observed behavior is that each message appears to be re-sent once per invocation.

private AtomicBoolean rmtMsgInit = new AtomicBoolean(false);
private AtomicBoolean localMsgInit = new AtomicBoolean(false);

@RequestMapping("/msgTest")
public @ResponseBody
String orderedMsg(HttpServletRequest request, HttpServletResponse response) {
    /*************************** remote message ****************************/
    IgniteMessaging rmtMsg = ignite.message(ignite.cluster().forRemotes());
    // A listener for the same topic must be registered only once; otherwise
    // duplicate messages are received, multiplying with each call.
    if (!rmtMsgInit.get()) {
        rmtMsg.remoteListen("MyOrderdTopic", (nodeId, msg) -> {
            System.out.println("Received ordered message [msg=" + msg + ", from=" + nodeId + "]");
            return true;
        });
        rmtMsgInit.set(true);
    }
    rmtMsg.send("MyOrderdTopic", UUID.randomUUID().toString());
    // for (int i = 0; i < 10; i++) {
    //     rmtMsg.sendOrdered("MyOrderdTopic", Integer.toString(i), 0);
    //     rmtMsg.send("MyOrderdTopic", Integer.toString(i));
    // }

    /*************************** local message ****************************/
    IgniteMessaging localMsg = ignite.message(ignite.cluster().forLocal());
    // Same rule: register the local listener only once.
    if (!localMsgInit.get()) {
        localMsg.localListen("localTopic", (nodeId, msg) -> {
            System.out.println(String.format("Received local message [msg=%s, from=%s]", msg, nodeId));
            return true;
        });
        localMsgInit.set(true);
    }
    localMsg.send("localTopic", UUID.randomUUID().toString());
    return "executed!";
}

5, No console output from Ignite remote (forRemotes) operations

When operations are executed with ignite.cluster().forRemotes(), the code may run on other nodes, so the logs and printed output appear on those nodes; the local terminal will not necessarily show anything.

For example:

IgniteMessaging rmtMsg = ignite.message(ignite.cluster().forRemotes());

rmtMsg.remoteListen("MyOrderdTopic", (nodeId, msg) -> {
    System.out.println("Received ordered message [msg=" + msg + ", from=" + nodeId + "]");
    return true;
});

To see the output in the local program, use the local variants:

IgniteMessaging.localListen

ignite.events().localListen
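A minimal sketch of the local variant, mirroring the remote example above:

    // Listen and send on the local node only, so the output shows up in this JVM.
    IgniteMessaging localMsg = ignite.message(ignite.cluster().forLocal());
    localMsg.localListen("localTopic", (nodeId, msg) -> {
        System.out.println("Received local message [msg=" + msg + ", from=" + nodeId + "]");
        return true;
    });
    localMsg.send("localTopic", "hello");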

6, Ignite persistence takes up too much disk space

This comes from the WAL (write-ahead log) mechanism.

Add the following configuration to adjust the checkpoint frequency and the WAL history size:

<!-- Redefining the data storage settings for the cluster node. -->
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        ...
        <!-- Checkpointing frequency: the minimal interval at which dirty pages are written to the persistent store. -->
        <property name="checkpointFrequency" value="180000"/>
        <!-- Number of threads for checkpointing. -->
        <property name="checkpointThreads" value="4"/>
        <!-- Number of checkpoints to be kept in the WAL after a checkpoint is finished. -->
        <property name="walHistorySize" value="20"/>
        ...
    </bean>
</property>
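The same settings can also be applied programmatically; a minimal sketch:

    DataStorageConfiguration dsCfg = new DataStorageConfiguration();
    dsCfg.setCheckpointFrequency(180_000); // checkpoint every 3 minutes
    dsCfg.setCheckpointThreads(4);
    dsCfg.setWalHistorySize(20);           // checkpoints kept in WAL history
    igniteCfg.setDataStorageConfiguration(dsCfg);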

7, java.lang.ClassCastException: org.cord.xxx cannot be cast to org.cord.xxx

java.lang.ClassCastException org.cord.ignite.data.domain.Student cannot be cast to org.cord.ignite.data.domain.Student

This exception occurred when reading a cached object from Ignite: the two classes are apparently identical, yet the retrieved object cannot be assigned:

IgniteCache<Long, Student> cache = ignite.cache(CacheKeyConstant.STUDENT);
Student student = cache.get(1L);

Checking with instanceof:

cache.get(1L) instanceof Student returns false

So the object returned from Ignite is not an instance of Student, even though debugging shows the fields are identical. The only remaining possibility is that the Student class used by Ignite and the Student class receiving the result were loaded by different class loaders.

Comparing the two class loaders:

cache.get(1L).getClass().getClassLoader() => AppClassLoader
Student.class.getClassLoader()            => RestartClassLoader

Indeed, the class loaders differ. A quick search shows that RestartClassLoader is the class loader used by the spring-boot-devtools hot-restart plugin. With the cause found, the fix is simple: remove the spring-boot-devtools dependency.

8, Garbled Chinese characters when querying with SqlFieldsQuery under persistence

In plain in-memory mode everything is fine, but with persistence enabled, SqlQuery results (which are deserialized objects) are not garbled, while SqlFieldsQuery results are. Since persistence writes in-memory data to disk, this pointed to a file-encoding issue, so I printed each node's encoding with System.getProperty("file.encoding") and found that the persistent node's encoding was gb18030. After setting file.encoding=UTF-8 and re-importing the data, the query results were no longer garbled.

This can be done by setting the environment variable JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF-8.
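To check the encoding on every node in one shot, a minimal sketch using a compute broadcast (the output appears on each node's console, see problem 5):

    // Prints the file encoding on every node of the cluster.
    ignite.compute().broadcast(() ->
            System.out.println("file.encoding = " + System.getProperty("file.encoding")));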

9, [Ignite 2.7] java.lang.IllegalAccessError: tried to access field org.h2.util.LocalDateTimeUtils.LOCAL_DATE from class org.apache.ignite.internal.processors.query.h2.H2DatabaseType

This is caused by an H2 version incompatibility; exclude the conflicting H2 dependency and use the version matching Ignite (1.4.197 here):

<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-indexing</artifactId>
    <version>${ignite.version}</version>
    <exclusions>
        <exclusion>
            <groupId>com.h2database</groupId>
            <artifactId>h2</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.197</version>
</dependency>

10, Failed to serialize object... Failed to write field... Failed to marshal object with optimized marshaller (the distributed computation cannot propagate to other nodes)

The error looks like this:

o.a.i.i.m.d.GridDeploymentLocalStore     : Class locally deployed: class org.cord.ignite.controller.ComputeTestController
2018-12-20 21:13:05.398 ERROR 16668 --- [nio-8080-exec-1] o.a.i.internal.binary.BinaryContext : Failed to serialize object [typeName=o.a.i.i.worker.WorkersRegistry]
org.apache.ignite.binary.BinaryObjectException: Failed to write field [name=registeredWorkers] at org.apache.ignite.internal.binary.BinaryFieldAccessor.write(BinaryFieldAccessor.java:164) [ignite-core-2.7.0.jar:2.7.0]
...
Caused by: org.apache.ignite.binary.BinaryObjectException: Failed to marshal object with optimized marshaller: {...}
Caused by: org.apache.ignite.IgniteCheckedException: Failed to serialize object: {...}
Caused by: java.io.IOException: Failed to serialize object [typeName=java.util.concurrent.ConcurrentHashMap]
Caused by: java.io.IOException: java.io.IOException: Failed to serialize object
...
Caused by: java.io.IOException: Failed to serialize object [typeName=java.util.ArrayDeque]
Caused by: java.io.IOException: java.lang.NullPointerException
...

If the class that launches the distributed computation contains injected beans, propagating the computation fails, for example:

...
@Autowired
private IgniteConfiguration igniteCfg;

String broadcastTest() {
    IgniteCompute compute = ignite.compute();
    compute.broadcast(() -> System.out.println("Hello Node: " + ignite.cluster().localNode().id()));
    return "all executed.";
}

Such beans cannot be propagated, so apart from the injected Ignite instance, it is best not to inject other beans into classes used for distributed computing; for more complex scenarios, consider the service grid instead.
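One way to keep the closure serializable is to avoid capturing the enclosing Spring bean at all, for example by resolving the Ignite instance on the executing node via Ignition.localIgnite(). A minimal sketch:

    // The lambda references no instance fields, so it does not drag the enclosing
    // controller (and its injected beans) into serialization.
    ignite.compute().broadcast(() -> System.out.println(
            "Hello Node: " + Ignition.localIgnite().cluster().localNode().id()));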

11, WARNING: Exception during batch send on streamed connection close; java.sql.BatchUpdateException: class org.apache.ignite.IgniteCheckedException: Data streamer has been closed

When doing batch inserts over Ignite JDBC, this error tends to occur if the stream is opened repeatedly or is not in ordered mode. Solution: enable streaming when creating the JDBC connection, and put the stream into ordered mode: SET STREAMING ON ORDERED.

String url = "jdbc:ignite:thin://127.0.0.1/";
String[] sqls = new String[]{};
Properties properties = new Properties();
properties.setProperty(IgniteJdbcDriver.PROP_STREAMING, "true");
properties.setProperty(IgniteJdbcDriver.PROP_STREAMING_ALLOW_OVERWRITE, "true");
try (Connection conn = DriverManager.getConnection(url, properties)) {
    Statement statement = conn.createStatement();
    for (String sql : sqls) {
        statement.addBatch(sql);
    }
    statement.executeBatch();
}
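With the thin driver, the streaming command can also be issued directly on the connection; a minimal sketch:

    try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
         Statement stmt = conn.createStatement()) {
        // Open the stream in ordered mode before the inserts.
        stmt.executeUpdate("SET STREAMING ON ORDERED");
        for (String sql : sqls) {
            stmt.executeUpdate(sql);
        }
        // Turning streaming off flushes the remaining buffered rows.
        stmt.executeUpdate("SET STREAMING OFF");
    }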

References:

https://issues.apache.org/jira/browse/IGNITE-10991

http://apache-ignite-users.70518.x6.nabble.com/Data-streamer-has-been-closed-td26521.html


12, java.lang.IllegalArgumentException: Ouch! Argument is invalid: timeout cannot be negative: -2

If a timeout parameter is set so large that it overflows, this exception is thrown at startup. For example:

igniteCfg.setFailureDetectionTimeout(Integer.MAX_VALUE);
igniteCfg.setNetworkTimeout(Long.MAX_VALUE);
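Use finite values with headroom instead; a minimal sketch (the values are illustrative):

    igniteCfg.setFailureDetectionTimeout(60_000); // 60 s
    igniteCfg.setNetworkTimeout(10_000);          // 10 s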

13, How to assign tables created via DDL to a cluster group

The WITH clause has a TEMPLATE parameter. It can simply specify REPLICATED or PARTITIONED, but it can also reference a CacheConfiguration instance, so a DDL table can be associated with a cache defined in the XML and thereby placed in a cluster group. However, a CacheConfiguration entry normally creates a cache by default; appending an asterisk to the cache name suppresses creation of that cache, while the DDL can still reference the configuration as a template. Example:

	    <property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="student*"/>
<property name="cacheMode" value="REPLICATED"/>
<property name="nodeFilter"> <!--配置节点过滤器-->
<bean class="org.cord.ignite.initial.DataNodeFilter"/>
</property>
</bean>
</list>
</property>
CREATE TABLE IF NOT EXISTS PUBLIC.STUDENT (
STUDID INTEGER,
NAME VARCHAR,
EMAIL VARCHAR,
dob Date,
PRIMARY KEY (STUDID, NAME))
WITH "template=student,atomicity=ATOMIC,cache_name=student";

14, Failed to communicate with Ignite cluster

The thin JDBC client (IgniteJdbcThinDriver) is not thread-safe. To execute SQL queries concurrently through the thin client, create a separate Connection for each thread.
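A minimal sketch of a per-thread connection (the URL is illustrative):

    private static final ThreadLocal<Connection> CONN = ThreadLocal.withInitial(() -> {
        try {
            // One thin-client connection per thread.
            return DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        }
    });

    // Each thread then queries through its own connection:
    ResultSet rs = CONN.get().createStatement().executeQuery("SELECT 1");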

References: https://stackoverflow.com/questions/49792329/failed-to-communicate-with-ignite-cluster-while-trying-to-execute-multiple-queri


15, DBeaver join queries miss some rows

DBeaver connects through the thin client. If any cache involved in a join is in PARTITIONED mode, the join requires distributed joins to be enabled, which is done by adding distributedJoins=true to the connection URL, for example:

jdbc:ignite:thin://127.0.0.1:10800;distributedJoins=true
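The equivalent for the native API is the distributed-joins flag on the query; a minimal sketch (the table names are illustrative):

    SqlFieldsQuery qry = new SqlFieldsQuery(
            "SELECT s.NAME, c.TITLE FROM STUDENT s JOIN COURSE c ON s.COURSEID = c.ID");
    qry.setDistributedJoins(true); // enable non-colocated joins
    cache.query(qry).getAll();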

16, WARN [H2TreeIndex] Indexed columns of a row cannot be fully inlined into index what may lead to slowdown due to additional data page reads, increase index inline size if needed

How is the inlineSize of the primary key (and other indexes) specified?

The inline size is computed in H2TreeIndex.computeInlineSize(List<InlineIndexHelper> inlineIdxs, int cfgInlineSize), where the configured size comes from:

int confSize = cctx.config().getSqlIndexMaxInlineSize();

whose default is

private int sqlIdxMaxInlineSize = DFLT_SQL_INDEX_MAX_INLINE_SIZE; // = -1

in which case the system default IGNITE_MAX_INDEX_PAYLOAD_SIZE_DEFAULT = 10 applies.

In other words, if no inline size is specified when the index is created, the default is 10.
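To raise it, the inline size can be set when the index is created; a minimal sketch (the index and column names are illustrative):

    // Via DDL:
    cache.query(new SqlFieldsQuery(
            "CREATE INDEX IF NOT EXISTS IDX_STUDENT_NAME ON STUDENT (NAME) INLINE_SIZE 32"));

    // Or via the configuration API:
    QueryIndex idx = new QueryIndex("NAME");
    idx.setInlineSize(32);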

How the recommended inline size (recommendedInlineSize) is computed, as a call chain:

H2Tree.inlineSizeRecomendation(SearchRow row)

InlineIndexHelper.inlineSizeOf(Value val)

InlineIndexHelper.InlineIndexHelper(String colName, int type, int colIdx, int sortType, CompareMode compareMode)

Computing the inlineSize with Python (for indexes defined on an Oracle source table):

import os
import cx_Oracle as oracle

os.environ["NLS_LANG"] = ".UTF8"

db = oracle.connect('cord/123456@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=1520)))(CONNECT_DATA=(SID=orcl)))')
cursor = db.cursor()

query_index_name = "select index_name from ALL_INDEXES where table_name='%s' and index_type='NORMAL' and uniqueness='NONUNIQUE'"
query_index_column = "select column_name from all_ind_columns where table_name='%s' and index_name='%s'"
query_index_column_type = "select data_type,data_length from all_tab_columns where table_name='%s' and column_name='%s'"

def inlineSizeOf(data_type, data_length):
    if data_type == 'VARCHAR2':
        return data_length + 3
    if data_type == 'DATE':
        return 16 + 1
    if data_type == 'NUMBER':
        return 8 + 1
    return -1

def computeInlineSize(tableName):
    table = tableName.upper()
    retmap = {}
    # Query the index names
    ret = cursor.execute(query_index_name % table).fetchall()
    if len(ret) == 0:
        print("table[%s] not find any normal index" % table)
        return
    # Get the indexed column names for each index
    for indexNames in ret:
        # print(indexNames[0])
        indexName = indexNames[0]
        result = cursor.execute(query_index_column % (table, indexName)).fetchall()
        if len(result) == 0:
            print("table[%s] index[%s] not find any column" % (table, indexName))
            continue
        inlineSize = 0
        # Get each column's type and accumulate the inlineSize
        for columns in result:
            column = columns[0]
            type_ret = cursor.execute(query_index_column_type % (table, column)).fetchall()
            if len(type_ret) == 0:
                print("table[%s] index[%s] column[%s] not find any info" % (table, indexName, column))
                continue
            data_type = type_ret[0][0]
            data_length = type_ret[0][1]
            temp = inlineSizeOf(data_type, data_length)
            if temp == -1:
                print("table[%s] index[%s] column[%s] type[%s] unknown" % (table, indexName, column, data_type))
            inlineSize += inlineSizeOf(data_type, data_length)
        retmap[indexName] = inlineSize
    print(retmap)

if __name__ == '__main__':
    computeInlineSize('PERSON')
