It turns out that when kafka-connect-hdfs connects to Hadoop HDFS, it talks to a single namenode — a scary single point of failure — so I promptly switched it to HA. The symptom when the configured namenode goes standby:
2017-08-16 11:57:28,237 WARN [org.apache.hadoop.hdfs.LeaseRenewer][458] - <Failed to renew lease for [DFSClient_NONMAPREDUCE_-1756242047_26] for 30 seconds. Will retry shortly ...>
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category WRITE is not supported in state standby. Visit https://s.apache.org/sbnn-error
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1826)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1404)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewLease(FSNamesystem.java:4968)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.renewLease(NameNodeRpcServer.java:875)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.renewLease(AuthorizationProviderProxyClientProtocol.java:357)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:633)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy50.renewLease(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:571)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy51.renewLease(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:879)
at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
at java.lang.Thread.run(Thread.java:745)
The URL in the error message, https://s.apache.org/sbnn-error, explains this clearly:
3.17. What does the message "Operation category READ/WRITE is not supported in state standby" mean?

In an HA-enabled cluster, DFS clients cannot know in advance which namenode is active at a given time. So when a client contacts a namenode and it happens to be the standby, the READ or WRITE operation will be refused and this message is logged. The client will then automatically contact the other namenode and try the operation again. As long as there is one active and one standby namenode in the cluster, this message can be safely ignored.

If an application is configured to contact only one namenode always, this message indicates that the application is failing to perform any read/write operation. In such situations, the application would need to be modified to use the HA configuration for the cluster.

The jira HDFS-3447 deals with lowering the severity of this message (and similar ones) to DEBUG so as to reduce noise in the logs, but is unresolved as of July 2015.
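In other words, the client itself must know the logical nameservice and both namenodes so it can fail over on its own. These settings normally come from hdfs-site.xml on the classpath; as a minimal sketch, here is the equivalent client-side Configuration in Java (the nameservice name nameservice and the hosts namenode1/namenode2 are placeholders — substitute your cluster's values):

import org.apache.hadoop.conf.Configuration;

public class HaClientConf {
  // Minimal HA client settings; in practice these are supplied by
  // hdfs-site.xml on the classpath rather than set in code.
  public static Configuration create() {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://nameservice");
    conf.set("dfs.nameservices", "nameservice");
    conf.set("dfs.ha.namenodes.nameservice", "nn1,nn2");
    conf.set("dfs.namenode.rpc-address.nameservice.nn1", "namenode1:8020");
    conf.set("dfs.namenode.rpc-address.nameservice.nn2", "namenode2:8020");
    // Tells the DFS client how to discover the active namenode and fail over.
    conf.set("dfs.client.failover.proxy.provider.nameservice",
        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
    return conf;
  }
}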
This requires a change to HdfsStorage.class, the class in kafka-connect-hdfs that performs the HDFS operations:
/**
* Copyright 2015 Confluent Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
 **/

package io.confluent.connect.hdfs.storage;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.kafka.common.TopicPartition;

import java.io.IOException;
import java.net.URI;

import io.confluent.connect.hdfs.wal.FSWAL;
import io.confluent.connect.hdfs.wal.WAL;

public class HdfsStorage implements Storage {

  private final FileSystem fs;
  private final Configuration conf;
  private final String url;

  public HdfsStorage(Configuration conf, String url) throws IOException {
    // fs = FileSystem.newInstance(URI.create(url), conf); // original: pinned to the single namenode in url
    fs = FileSystem.newInstance(conf); // modified: resolved from the HA-aware Configuration
    this.conf = conf;
    this.url = url;
  }

  @Override
  public FileStatus[] listStatus(String path, PathFilter filter) throws IOException {
    return fs.listStatus(new Path(path), filter);
  }

  @Override
  public FileStatus[] listStatus(String path) throws IOException {
    return fs.listStatus(new Path(path));
  }

  @Override
  public void append(String filename, Object object) throws IOException {}

  @Override
  public boolean mkdirs(String filename) throws IOException {
    return fs.mkdirs(new Path(filename));
  }

  @Override
  public boolean exists(String filename) throws IOException {
    return fs.exists(new Path(filename));
  }

  @Override
  public void commit(String tempFile, String committedFile) throws IOException {
    renameFile(tempFile, committedFile);
  }

  @Override
  public void delete(String filename) throws IOException {
    fs.delete(new Path(filename), true);
  }

  @Override
  public void close() throws IOException {
    if (fs != null) {
      fs.close();
    }
  }

  @Override
  public WAL wal(String topicsDir, TopicPartition topicPart) {
    return new FSWAL(topicsDir, topicPart, this);
  }

  @Override
  public Configuration conf() {
    return conf;
  }

  @Override
  public String url() {
    return url;
  }

  private void renameFile(String sourcePath, String targetPath) throws IOException {
    if (sourcePath.equals(targetPath)) {
      return;
    }
    final Path srcPath = new Path(sourcePath);
    final Path dstPath = new Path(targetPath);
    if (fs.exists(srcPath)) {
      fs.rename(srcPath, dstPath);
    }
  }
}
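With this change, FileSystem.newInstance(conf) resolves the target file system from fs.defaultFS in the Configuration — now the logical nameservice — instead of from a hard-coded hdfs://host:port url. A quick smoke test of the idea (a sketch only; hdfs://nameservice stands in for your cluster's nameservice URI):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HaFsCheck {
  public static void main(String[] args) throws Exception {
    // Picks up core-site.xml/hdfs-site.xml (with the HA settings) from the classpath.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.newInstance(conf);
    // Should print the logical nameservice URI, e.g. hdfs://nameservice,
    // rather than a single namenode's host:port.
    System.out.println(fs.getUri());
    fs.close();
  }
}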
Of course, the corresponding url configuration then has to be changed to hdfs://nameservice/*, because we want HA. The original requirement below no longer applies:
// HDFS Group
public static final String HDFS_URL_CONFIG = "hdfs.url";
private static final String HDFS_URL_DOC =
"The HDFS connection URL. This configuration has the format of hdfs:://hostname:port and "
+ "specifies the HDFS to export data to.";
private static final String HDFS_URL_DISPLAY = "HDFS URL";
Although the url is no longer used when instantiating the storage, it is still needed for loading into Hive:
url = connectorConfig.getString(HdfsSinkConnectorConfig.HDFS_URL_CONFIG);
topicsDir = connectorConfig.getString(HdfsSinkConnectorConfig.TOPICS_DIR_CONFIG);
String logsDir = connectorConfig.getString(HdfsSinkConnectorConfig.LOGS_DIR_CONFIG);

@SuppressWarnings("unchecked")
Class<? extends Storage> storageClass = (Class<? extends Storage>) Class
.forName(connectorConfig.getString(HdfsSinkConnectorConfig.STORAGE_CLASS_CONFIG));
storage = StorageFactory.createStorage(storageClass, conf, url);
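For context, StorageFactory.createStorage presumably just invokes the storage class's (Configuration, String) constructor via reflection — a minimal sketch under that assumption, not the verbatim Confluent implementation:

// A minimal sketch, assuming createStorage reflectively calls the
// (Configuration, String) constructor; Confluent's real error handling may differ.
public static <S extends Storage> S createStorage(
    Class<S> storageClass, Configuration conf, String url) {
  try {
    return storageClass.getConstructor(Configuration.class, String.class)
        .newInstance(conf, url);
  } catch (ReflectiveOperationException e) {
    throw new RuntimeException("Failed to create storage " + storageClass.getName(), e);
  }
}

This is also why the modified HdfsStorage keeps its (Configuration, String) constructor even though it no longer builds the FileSystem from url: the reflective factory call, and the Hive load path above, still expect it.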
首先, 在两天时间内安装数破百, 多谢支持. VS Code插件市场地址: 英汉词典 - Visual Studio Marketplace 开源库地址同前文: Visual Studio Code插 ...