2017-08-16 11:57:28,237 WARN [org.apache.hadoop.hdfs.LeaseRenewer][458] - <Failed to renew lease for [DFSClient_NONMAPREDUCE_-1756242047_26] for 30 seconds.  Will retry shortly ...>
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category WRITE is not supported in state standby. Visit https://s.apache.org/sbnn-error
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1826)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1404)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewLease(FSNamesystem.java:4968)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.renewLease(NameNodeRpcServer.java:875)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.renewLease(AuthorizationProviderProxyClientProtocol.java:357)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:633)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy50.renewLease(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:571)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy51.renewLease(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:879)
at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
at java.lang.Thread.run(Thread.java:745)
The URL in the error message explains this clearly: https://s.apache.org/sbnn-error
3.17. What does the message "Operation category READ/WRITE is not supported in state standby" mean?

In an HA-enabled cluster, DFS clients cannot know in advance which namenode is active at a given time. So when a client contacts a namenode and it happens to be the standby, the READ or WRITE operation will be refused and this message is logged. The client will then automatically contact the other namenode and try the operation again. As long as there is one active and one standby namenode in the cluster, this message can be safely ignored.

If an application is configured to contact only one namenode always, this message indicates that the application is failing to perform any read/write operation. In such situations, the application would need to be modified to use the HA configuration for the cluster. The jira HDFS-3447 deals with lowering the severity of this message (and similar ones) to DEBUG so as to reduce noise in the logs, but is unresolved as of July 2015.
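For reference, the client-side HA settings that FAQ answer refers to boil down to the keys below. They normally live in core-site.xml / hdfs-site.xml; the sketch sets them programmatically only to show the keys, and nameservice1, nn1/nn2 and the hostnames are made-up example values, not this cluster's actual configuration.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HaClientConfigSketch {

  public static Configuration haConf() {
    Configuration conf = new Configuration();
    // Logical nameservice instead of a single namenode address.
    conf.set("fs.defaultFS", "hdfs://nameservice1");
    conf.set("dfs.nameservices", "nameservice1");
    conf.set("dfs.ha.namenodes.nameservice1", "nn1,nn2");
    conf.set("dfs.namenode.rpc-address.nameservice1.nn1", "namenode1.example.com:8020");
    conf.set("dfs.namenode.rpc-address.nameservice1.nn2", "namenode2.example.com:8020");
    // The failover proxy provider is what retries against the other namenode
    // when the first one answers with a StandbyException.
    conf.set("dfs.client.failover.proxy.provider.nameservice1",
        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
    return conf;
  }

  public static void main(String[] args) throws Exception {
    // The client never names a concrete namenode; it talks to the logical nameservice.
    FileSystem fs = FileSystem.get(URI.create("hdfs://nameservice1"), haConf());
    System.out.println(fs.getUri()); // hdfs://nameservice1
  }
}

With this in place, the DFS client retries the other namenode on StandbyException (via the RetryInvocationHandler visible in the stack trace above) instead of failing outright.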

In kafka-connect-hdfs, HdfsStorage.class, which performs the HDFS operations, needs the following change (the commented-out constructor line is the original, the line after it is the modification):

/**
 * Copyright 2015 Confluent Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 **/

package io.confluent.connect.hdfs.storage;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.kafka.common.TopicPartition;

import java.io.IOException;
import java.net.URI;

import io.confluent.connect.hdfs.wal.FSWAL;
import io.confluent.connect.hdfs.wal.WAL;

public class HdfsStorage implements Storage {

  private final FileSystem fs;
  private final Configuration conf;
  private final String url;

  public HdfsStorage(Configuration conf, String url) throws IOException {
    // fs = FileSystem.newInstance(URI.create(url), conf);  // original: binds to the single namenode in the url
    fs = FileSystem.newInstance(conf);  // modified: let the HA-aware Configuration resolve the active namenode
    this.conf = conf;
    this.url = url;
  }

  @Override
  public FileStatus[] listStatus(String path, PathFilter filter) throws IOException {
    return fs.listStatus(new Path(path), filter);
  }

  @Override
  public FileStatus[] listStatus(String path) throws IOException {
    return fs.listStatus(new Path(path));
  }

  @Override
  public void append(String filename, Object object) throws IOException {}

  @Override
  public boolean mkdirs(String filename) throws IOException {
    return fs.mkdirs(new Path(filename));
  }

  @Override
  public boolean exists(String filename) throws IOException {
    return fs.exists(new Path(filename));
  }

  @Override
  public void commit(String tempFile, String committedFile) throws IOException {
    renameFile(tempFile, committedFile);
  }

  @Override
  public void delete(String filename) throws IOException {
    fs.delete(new Path(filename), true);
  }

  @Override
  public void close() throws IOException {
    if (fs != null) {
      fs.close();
    }
  }

  @Override
  public WAL wal(String topicsDir, TopicPartition topicPart) {
    return new FSWAL(topicsDir, topicPart, this);
  }

  @Override
  public Configuration conf() {
    return conf;
  }

  @Override
  public String url() {
    return url;
  }

  private void renameFile(String sourcePath, String targetPath) throws IOException {
    if (sourcePath.equals(targetPath)) {
      return;
    }
    final Path srcPath = new Path(sourcePath);
    final Path dstPath = new Path(targetPath);
    if (fs.exists(srcPath)) {
      fs.rename(srcPath, dstPath);
    }
  }
}
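A minimal usage sketch of the modified constructor, assuming the Configuration already carries the HA settings shown earlier (for example because the cluster's core-site.xml and hdfs-site.xml are on the classpath or added explicitly; the /etc/hadoop/conf paths and the hdfs://nameservice1 url below are assumptions, not values from this cluster):

Configuration conf = new Configuration();
conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));  // example path
conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));  // example path

// The url argument now only survives as metadata returned by url();
// FileSystem.newInstance(conf) resolves the active namenode through the failover proxy provider.
HdfsStorage storage = new HdfsStorage(conf, "hdfs://nameservice1");
FileStatus[] topicDirs = storage.listStatus("/topics");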

Of course, the corresponding url configuration then has to become hdfs://nameservice/*, because we want HA. The original requirement no longer applies; it was:

// HDFS Group
public static final String HDFS_URL_CONFIG = "hdfs.url";
private static final String HDFS_URL_DOC =
"The HDFS connection URL. This configuration has the format of hdfs:://hostname:port and "
+ "specifies the HDFS to export data to.";
private static final String HDFS_URL_DISPLAY = "HDFS URL";
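In other words, with HA the value of hdfs.url stops being a single host:port and becomes the logical nameservice URI, which the client failover configuration resolves. A quick illustration (nameservice1 is again an assumed name):

// Old, single-namenode style -- ties the connector to one namenode:
//   hdfs.url=hdfs://namenode1.example.com:8020
// HA style -- resolved by dfs.nameservices / the failover proxy provider:
//   hdfs.url=hdfs://nameservice1
String url = "hdfs://nameservice1";
System.out.println(URI.create(url).getAuthority());  // nameservice1, no port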

Although the url is no longer used to instantiate the storage, it is still needed when loading data into Hive.

url = connectorConfig.getString(HdfsSinkConnectorConfig.HDFS_URL_CONFIG);
topicsDir = connectorConfig.getString(HdfsSinkConnectorConfig.TOPICS_DIR_CONFIG);
String logsDir = connectorConfig.getString(HdfsSinkConnectorConfig.LOGS_DIR_CONFIG);

@SuppressWarnings("unchecked")
Class<? extends Storage> storageClass = (Class<? extends Storage>) Class
    .forName(connectorConfig.getString(HdfsSinkConnectorConfig.STORAGE_CLASS_CONFIG));
storage = StorageFactory.createStorage(storageClass, conf, url);
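For context, a StorageFactory.createStorage along these lines amounts to a reflective call of the (Configuration, String) constructor, which is why the modified HdfsStorage keeps that signature even though it ignores the url when opening the FileSystem. This is an illustrative sketch, not the exact Confluent implementation:

public static <T extends Storage> T createStorage(Class<T> storageClass, Configuration conf, String url) {
  try {
    // Look up and invoke the (Configuration, String) constructor that HdfsStorage exposes.
    return storageClass.getConstructor(Configuration.class, String.class).newInstance(conf, url);
  } catch (ReflectiveOperationException e) {
    throw new RuntimeException("Failed to instantiate storage class " + storageClass.getName(), e);
  }
}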

When kafka-connect-hdfs connects to Hadoop HDFS, it turns out to go through a single namenode, which is scary... so I switched it to HA without hesitation.
