It is finally here: you can configure the open-source log aggregator, Scribe, to log data directly into the Hadoop Distributed File System (HDFS).

Many Web 2.0 companies have to deploy a bank of costly filers to capture the web logs generated by their applications. Until now there has been no option other than a costly filer, because the write rate for this log stream is huge. The Hadoop-Scribe integration allows this write load to be distributed across a bunch of commodity machines, thus reducing the total cost of the infrastructure.

The challenge was to make HDFS near-real-time ("real-timeish") in behaviour. Scribe uses libhdfs, the C interface to the HDFS client. There were various bugs in libhdfs that needed to be fixed first. Then came the FileSystem API: one of the major issues was that it caches FileSystem handles and always returns the same handle when called from multiple threads, with no reference counting of the handle. This caused problems for Scribe, because Scribe is highly multi-threaded. A new API, FileSystem.newInstance(), was introduced to support Scribe.
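
With the new call, a multi-threaded client can give each thread its own private handle instead of sharing the process-wide cached handle returned by FileSystem.get(). A minimal Java sketch (the namenode URI and file path are illustrative, not from the original post):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PerThreadHandle {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // FileSystem.get() would return the shared, cached handle: closing
            // it in one thread invalidates it for every other thread in the
            // process. FileSystem.newInstance() returns a fresh, uncached one.
            FileSystem fs = FileSystem.newInstance(
                    URI.create("hdfs://namenode:9000"), conf);
            try {
                FSDataOutputStream out = fs.create(new Path("/scribedata/test.log"));
                out.write("hello hdfs\n".getBytes("UTF-8"));
                out.close();
            } finally {
                fs.close(); // safe: closes only this instance, not a shared handle
            }
        }
    }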

Making the HDFS write code path more real-time was painful. There are various timeouts/settings in HDFS that were hard-coded and needed to be changed to allow the application to fail fast. At the bottom of this blog post, I am attaching the settings that we have currently configured to make HDFS writes very real-timeish. The last of the JIRAs, HADOOP-2757, is in the pipeline to be committed to Hadoop trunk very soon.
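
The appendix with the exact settings did not survive this repost, but such overrides live in the client-side hadoop-site.xml that libhdfs reads. The sketch below shows the kind of fail-fast tuning involved; the keys are standard HDFS client settings of that era, while the values are illustrative and not the author's:

    <!-- hadoop-site.xml on the scribe (client) side: illustrative values -->
    <configuration>
      <property>
        <name>dfs.socket.timeout</name>       <!-- datanode read timeout, ms -->
        <value>10000</value>                  <!-- default is 60000 -->
      </property>
      <property>
        <name>dfs.datanode.socket.write.timeout</name> <!-- datanode write timeout, ms -->
        <value>10000</value>
      </property>
      <property>
        <name>dfs.client.block.write.retries</name> <!-- give up on a bad block sooner -->
        <value>1</value>
      </property>
      <property>
        <name>ipc.client.connect.max.retries</name> <!-- fewer namenode RPC connect retries -->
        <value>2</value>
      </property>
    </configuration>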

What about the namenode being a single point of failure? This is acceptable in a warehouse type of application, but cannot be tolerated by a realtime application: Scribe typically aggregates click logs from a bunch of webservers, and losing *all* click-log data of a website for 10 minutes or so (the minimum time for a namenode restart) is unacceptable. The solution is to configure two overlapping clusters on the same hardware. Run two separate namenodes, N1 and N2, on two different machines. Run one set of datanode daemons on all slave machines that report to N1, and a second set of datanode daemons on the same slave machines that report to N2. The two datanode instances on a single slave machine share the same data directories. This configuration allows HDFS to be highly available for writes!
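
Concretely, the second datanode instance on each slave gets its own config that points at N2 and binds to alternate ports so the two daemons can coexist. A sketch of that instance's overrides, with illustrative host names and port numbers (not from the original post):

    <!-- overrides for the second datanode instance (cluster N2) -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://n2.example.com:9000</value> <!-- report to namenode N2 -->
      </property>
      <property>
        <name>dfs.data.dir</name>
        <value>/data/dfs</value>         <!-- same directories as the N1 instance -->
      </property>
      <property>
        <name>dfs.datanode.address</name>
        <value>0.0.0.0:50110</value>     <!-- default 50010 is taken by the N1 instance -->
      </property>
      <property>
        <name>dfs.datanode.ipc.address</name>
        <value>0.0.0.0:50120</value>     <!-- default is 50020 -->
      </property>
      <property>
        <name>dfs.datanode.http.address</name>
        <value>0.0.0.0:50175</value>     <!-- default is 50075 -->
      </property>
    </configuration>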

This highly-available-for-writes HDFS configuration is also required for software upgrades on the cluster: we can shut down one of the overlapping HDFS clusters, upgrade it to new Hadoop software, and put it back online before starting the same process for the second HDFS cluster.

What were the main changes needed in Scribe? Scribe already had the ability to buffer data when it is unable to write to the configured storage, and its default behaviour is to replay this buffer back to the storage once the storage is back online. Scribe was extended so that it can be configured to skip this buffer replay when the primary storage comes back. Scribe-hdfs is configured to write data to cluster N1, and if N1 fails it writes the data to cluster N2. Scribe treats N1 and N2 as two equivalent primary stores.
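
A rough sketch of what such a store configuration might look like, following the buffer-store layout from the example configs shipped with open-source Scribe; the replay_buffer knob, host names, and paths here are my assumptions, not values from the original post:

    # sketch of a scribe store config; see the caveats above
    port=1463

    <store>
    category=default
    type=buffer

    retry_interval=30
    replay_buffer=no      # assumed knob: do not replay buffered data into the primary

    # primary sink: HDFS cluster N1
    <primary>
    type=file
    fs_type=hdfs
    file_path=hdfs://n1.example.com:9000/scribedata
    base_filename=clicklog
    max_size=1000000000
    </primary>

    # equivalent fallback sink: HDFS cluster N2
    <secondary>
    type=file
    fs_type=hdfs
    file_path=hdfs://n2.example.com:9000/scribedata
    base_filename=clicklog
    max_size=1000000000
    </secondary>
    </store>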

Reposted from: http://hadoopblog.blogspot.hk/2009/06/hdfs-scribe-integration.html
