Log4j – Configuring Log4j 2 - Apache Log4j 2 https://logging.apache.org/log4j/2.x/manual/configuration.html

log4j2 practical usage explained - CSDN Blog https://blog.csdn.net/vbirdbest/article/details/71751835

Apache log4j 1.2 - Frequently Asked Technical Questions http://logging.apache.org/log4j/1.2/faq.html#noconfig

log4j uses Thread.getContextClassLoader().getResource() to locate the default configuration file; it does not directly check the file system. In this project that means log4j.properties must be on the classpath: Maven copies src/main/resources/log4j.properties into target/classes and packages it into the jar, as the trees below show.
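If the configuration file lives outside the classpath, log4j 1.x also honors the log4j.configuration system property (documented in the FAQ linked above). A sketch, with a hypothetical file path:

java -Dlog4j.configuration=file:/path/to/log4j.properties -jar target/MyAid-1.0.0-jar-with-dependencies.jar com.mycom.Log4jTest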

target
├── archive-tmp
├── classes
│   ├── com
│   │   └── mycom
│   │       ├── ArrayListExample.class
│   │       ├── ArrayListLinkedListExample.class
│   │       ├── LinkedListExample.class
│   │       ├── log4jFlume.class
│   │       ├── Log4jTest.class
│   │       ├── MyMR.class
│   │       ├── SparkWC.class
│   │       ├── TestMy.class
│   │       ├── TTSmy.class
│   │       ├── WordCount.class
│   │       ├── WordCountImprove.class
│   │       ├── WordCountImprove$IntSumReducer.class
│   │       ├── WordCountImprove$TokenizerMapper.class
│   │       ├── WordCountImprove$TokenizerMapper$CountersEnum.class
│   │       ├── WordCount$IntSumReducer.class
│   │       └── WordCount$TokenizerMapper.class
│   └── log4j.properties
├── generated-sources
│   └── annotations
├── maven-archiver
│   └── pom.properties
├── MyAid-1.0.0.jar
├── MyAid-1.0.0-jar-with-dependencies.jar
└── surefire

8 directories, 20 files
[root@hadoop3 MyBgJavaLan]# java -jar target/MyAid-1.0.0-jar-with-dependencies.jar com.mycom.Log4jTest
123
[INFO ] 2018-07-15 16:32:44,599 method:com.mycom.Log4jTest.main(Log4jTest.java:12)
my-info
[DEBUG] 2018-07-15 16:32:44,601 method:com.mycom.Log4jTest.main(Log4jTest.java:13)
my-debug
[ERROR] 2018-07-15 16:32:44,601 method:com.mycom.Log4jTest.main(Log4jTest.java:14)
my-error
[root@hadoop3 MyBgJavaLan]# ll -as
total 40
0 drwxr-xr-x   4 root root    87 Jul 15 16:31 .
4 drwxr-xr-x. 12 root root  4096 Jul 15 15:40 ..
12 -rw-r--r--  1 root root 10452 Jul 15 15:26 mynote.txt
12 -rw-r--r--  1 root root 10445 Jul 15 15:49 pom.xml
12 -rw-r--r--  1 root root  9025 Jul 10 19:58 pom.xml.BAK.txt
0 drwxr-xr-x   3 root root    18 Jul 15 16:31 src
0 drwxr-xr-x   7 root root   171 Jul 15 16:32 target
[root@hadoop3 MyBgJavaLan]# ll -as /home/jLog/
total 12
0 drwxr-xr-x   2 root root   36 Jul 15 16:32 .
4 drwxr-xr-x. 12 root root 4096 Jul 15 15:40 ..
4 -rw-r--r--   1 root root  160 Jul 15 16:32 D.log
4 -rw-r--r--   1 root root   54 Jul 15 16:32 error.log
[root@hadoop3 MyBgJavaLan]# tree src
src
└── main
    ├── java
    │   └── com
    │       └── mycom
    │           ├── ArrayListExample.java
    │           ├── ArrayListLinkedListExample.java
    │           ├── LinkedListExample.java
    │           ├── log4jFlume.java
    │           ├── Log4jTest.java
    │           ├── MyMR.java
    │           ├── SparkWC.java
    │           ├── TestMy.java
    │           ├── TTSmy.java
    │           ├── WordCountImprove.java
    │           └── WordCount.java
    └── resources
        └── log4j.properties

5 directories, 12 files
[root@hadoop3 MyBgJavaLan]#

  

package com.mycom;

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

public class Log4jTest {
    private static Logger logger = Logger.getLogger(Log4jTest.class);

    public static void main(String[] args) {
        System.out.println("123");
        logger.info("my-info");
        logger.debug("my-debug");
        logger.error("my-error");
    }
}
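The PropertyConfigurator import above is never used: log4j configures itself automatically from the log4j.properties it finds on the classpath. To load a configuration from an explicit path instead, a minimal sketch (the class name and path here are hypothetical examples):

package com.mycom;

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

public class Log4jExplicitConfig {
    private static final Logger logger = Logger.getLogger(Log4jExplicitConfig.class);

    public static void main(String[] args) {
        // Bypass the classpath lookup and load the properties file directly.
        PropertyConfigurator.configure("/home/jLog/log4j.properties");
        logger.info("configured from an explicit path");
    }
}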

  

#log4j.rootLogger=INFO
#log4j.category.com.mycom=INFO,flume
#log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
#log4j.appender.flume.Hostname=localhost
#log4j.appender.flume.Port=44444
#log4j.appender.flume.UnsafeMode=true
### Settings ###
log4j.rootLogger=debug,stdout,D,E
### Write log output to the console ###
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%-5p] %d{yyyy-MM-dd HH:mm:ss,SSS} method:%l%n%m%n
### Write logs at DEBUG level and above to /home/jLog/D.log ###
log4j.appender.D=org.apache.log4j.DailyRollingFileAppender
log4j.appender.D.File=/home/jLog/D.log
log4j.appender.D.Append=true
log4j.appender.D.Threshold=DEBUG
log4j.appender.D.layout=org.apache.log4j.PatternLayout
log4j.appender.D.layout.ConversionPattern=%-d{yyyy-MM-dd HH:mm:ss} [ %t:%r ] - [ %p ] %m%n
### Write logs at ERROR level and above to /home/jLog/error.log ###
log4j.appender.E=org.apache.log4j.DailyRollingFileAppender
log4j.appender.E.File=/home/jLog/error.log
log4j.appender.E.Append=true
log4j.appender.E.Threshold=ERROR
log4j.appender.E.layout=org.apache.log4j.PatternLayout
log4j.appender.E.layout.ConversionPattern=%-d{yyyy-MM-dd HH:mm:ss} [ %t:%r ] - [ %p ] %m%n
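Because the root logger is set to debug, every event reaches all three appenders; each appender's Threshold then filters independently. The stdout and D appenders accept everything from DEBUG up, while E keeps only ERROR and above. That is why D.log above holds all three test messages while error.log holds only my-error.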

  

<!-- https://mvnrepository.com/artifact/org.apache.logging.log4j/log4j-core -->
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.10.0</version>
</dependency>
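Note that Log4jTest is written against the log4j 1.x API (org.apache.log4j.*), which log4j-core 2.x alone does not provide, and the commented flume appender needs the Flume log4j appender on the classpath. A sketch of the extra dependencies (the flume-ng-log4jappender version is an assumption chosen to match Flume 1.8.0; verify against your build):

<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>
<!-- for org.apache.flume.clients.log4jappender.Log4jAppender -->
<dependency>
    <groupId>org.apache.flume.flume-ng-clients</groupId>
    <artifactId>flume-ng-log4jappender</artifactId>
    <version>1.8.0</version>
</dependency>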

  

Sending log4j output directly to Flume
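To route log4j events straight to a Flume agent (rather than through the RPC client shown later), the commented flume appender lines at the top of log4j.properties can be enabled. A minimal sketch, assuming an Avro source listening on localhost:44444; UnsafeMode=true lets the application keep logging even if the agent is unreachable:

log4j.category.com.mycom=INFO,flume
log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname=localhost
log4j.appender.flume.Port=44444
log4j.appender.flume.UnsafeMode=true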
(Contents of the apache-flume-1.8.0-bin directory: bin, CHANGELOG, conf, DEVNOTES, doap_Flume.rdf, docs, lib, LICENSE, NOTICE, README.md, RELEASE-NOTES, tools)
[root@hadoop3 apache-flume-1.8.0-bin]# tree conf/
conf/
├── flume-conf.properties.template
├── flume-env.ps1.template
├── flume-env.sh.template
└── log4j.properties

0 directories, 4 files
[root@hadoop3 apache-flume-1.8.0-bin]# cat conf/log4j.properties
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

# Define some default values that can be overridden by system properties.
#
# For testing, it may also be convenient to specify
# -Dflume.root.logger=DEBUG,console when launching flume.

#flume.root.logger=DEBUG,console
flume.root.logger=INFO,LOGFILE
flume.log.dir=./logs
flume.log.file=flume.log

log4j.logger.org.apache.flume.lifecycle = INFO
log4j.logger.org.jboss = WARN
log4j.logger.org.mortbay = INFO
log4j.logger.org.apache.avro.ipc.NettyTransceiver = WARN
log4j.logger.org.apache.hadoop = INFO
log4j.logger.org.apache.hadoop.hive = ERROR

# Define the root logger to the system property "flume.root.logger".
log4j.rootLogger=${flume.root.logger}

# Stock log4j rolling file appender
# Default log rotation configuration
log4j.appender.LOGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.LOGFILE.MaxFileSize=100MB
log4j.appender.LOGFILE.MaxBackupIndex=10
log4j.appender.LOGFILE.File=${flume.log.dir}/${flume.log.file}
log4j.appender.LOGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.LOGFILE.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) %x - %m%n

# Warning: If you enable the following appender it will fill up your disk if you don't have a cleanup job!
# This uses the updated rolling file appender from log4j-extras that supports a reliable time-based rolling policy.
# See http://logging.apache.org/log4j/companions/extras/apidocs/org/apache/log4j/rolling/TimeBasedRollingPolicy.html
# Add "DAILY" to flume.root.logger above if you want to use this
log4j.appender.DAILY=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.DAILY.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
log4j.appender.DAILY.rollingPolicy.ActiveFileName=${flume.log.dir}/${flume.log.file}
log4j.appender.DAILY.rollingPolicy.FileNamePattern=${flume.log.dir}/${flume.log.file}.%d{yyyy-MM-dd}
log4j.appender.DAILY.layout=org.apache.log4j.PatternLayout
log4j.appender.DAILY.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) %x - %m%n

# console
# Add "console" to flume.root.logger above if you want to use this
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d (%t) [%p - %l] %m%n
[root@hadoop3 apache-flume-1.8.0-bin]#

Flume 1.8.0 Developer Guide — Apache Flume http://flume.apache.org/FlumeDeveloperGuide.html

bin/flume-ng agent --conf ./conf/ -f conf/flume.conf -Dflume.root.logger=DEBUG,console -n agent1

package com.mycom;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

import java.nio.charset.Charset;

// http://flume.apache.org/FlumeDeveloperGuide.html
public class MyAppFlume {
    public static void main(String[] args) {
        MyRpcClientFacade client = new MyRpcClientFacade();
        // Initialize client with the remote Flume agent's host and port
        client.init("hadoop3", 41414);

        // Send 20 events to the remote Flume agent. That agent should be
        // configured to listen with an AvroSource.
        String sampleData = "Hello Flume!";
        for (int i = 0; i < 20; i++) {
            client.sendDataToFlume(sampleData);
        }

        client.cleanUp();
    }
}

class MyRpcClientFacade {
    private RpcClient client;
    private String hostname;
    private int port;

    public void init(String hostname, int port) {
        // Set up the RPC connection
        this.hostname = hostname;
        this.port = port;
        this.client = RpcClientFactory.getDefaultInstance(hostname, port);
        // Use the following method to create a Thrift client instead of the line above:
        // this.client = RpcClientFactory.getThriftInstance(hostname, port);
    }

    public void sendDataToFlume(String data) {
        // Create a Flume Event object that encapsulates the sample data
        Event event = EventBuilder.withBody(data, Charset.forName("UTF-8"));
        // Send the event
        try {
            client.append(event);
        } catch (EventDeliveryException e) {
            // Clean up and recreate the client
            client.close();
            client = null;
            client = RpcClientFactory.getDefaultInstance(hostname, port);
        }
    }

    public void cleanUp() {
        // Close the RPC connection
        client.close();
    }
}
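The catch block follows the Developer Guide's failover pattern: after an EventDeliveryException the client is treated as broken, so it is closed and rebuilt rather than reused for a retry.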
[root@hadoop3 myBg]# cat apache-flume-1.8.0-bin/conf/flume.conf
# The configuration file needs to define the sources,
# the channels and the sinks.
# Sources, channels and sinks are defined per agent,
# in this case called 'agent'

agent1.channels.ch1.type = memory

agent1.sources.avro-source1.channels = ch1
agent1.sources.avro-source1.type = avro
agent1.sources.avro-source1.bind = 0.0.0.0
agent1.sources.avro-source1.port = 41414

agent1.sinks.log-sink1.channel = ch1
agent1.sinks.log-sink1.type = logger

agent1.channels = ch1
agent1.sources = avro-source1
agent1.sinks = log-sink1
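With this configuration the agent buffers incoming Avro events in a memory channel and drains them to a logger sink, so every event the client sends shows up at INFO level in the agent's own log, which is exactly what the agent output further below shows.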
[root@hadoop3 MyBgJavaLan]# java -classpath target/MyAid-1.0.0-jar-with-dependencies.jar  com.mycom.MyAppFlume
[DEBUG] 2018-07-16 17:09:00,072 method:org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:498)
Batch size string = 0
[WARN ] 2018-07-16 17:09:00,076 method:org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:504)
Invalid value for batchSize: 0; Using default value.
[WARN ] 2018-07-16 17:09:00,083 method:org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:634)
Using default maxIOWorkers
[DEBUG] 2018-07-16 17:09:00,129 method:org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:195)
Using Netty bootstrap options: {connectTimeoutMillis=20000, tcpNoDelay=true}
[DEBUG] 2018-07-16 17:09:00,130 method:org.apache.avro.ipc.NettyTransceiver.getChannel(NettyTransceiver.java:252)
Connecting to hadoop3/192.168.3.103:41414
[DEBUG] 2018-07-16 17:09:00,148 method:org.apache.avro.ipc.NettyTransceiver$NettyClientAvroHandler.handleUpstream(NettyTransceiver.java:491)
[id: 0x5aeca4d0] OPEN
[DEBUG] 2018-07-16 17:09:00,206 method:org.apache.avro.ipc.NettyTransceiver$NettyClientAvroHandler.handleUpstream(NettyTransceiver.java:491)
[id: 0x5aeca4d0, /192.168.3.103:36724 => hadoop3/192.168.3.103:41414] BOUND: /192.168.3.103:36724
[DEBUG] 2018-07-16 17:09:00,206 method:org.apache.avro.ipc.NettyTransceiver$NettyClientAvroHandler.handleUpstream(NettyTransceiver.java:491)
[id: 0x5aeca4d0, /192.168.3.103:36724 => hadoop3/192.168.3.103:41414] CONNECTED: hadoop3/192.168.3.103:41414
[DEBUG] 2018-07-16 17:09:00,435 method:org.apache.avro.ipc.NettyTransceiver.disconnect(NettyTransceiver.java:314)
Disconnecting from hadoop3/192.168.3.103:41414
[DEBUG] 2018-07-16 17:09:00,436 method:org.apache.avro.ipc.NettyTransceiver.disconnect(NettyTransceiver.java:336)
Removing 1 pending request(s).
[DEBUG] 2018-07-16 17:09:00,438 method:org.apache.avro.ipc.NettyTransceiver$NettyClientAvroHandler.handleUpstream(NettyTransceiver.java:491)
[id: 0x5aeca4d0, /192.168.3.103:36724 :> hadoop3/192.168.3.103:41414] DISCONNECTED
[DEBUG] 2018-07-16 17:09:00,439 method:org.apache.avro.ipc.NettyTransceiver$NettyClientAvroHandler.handleUpstream(NettyTransceiver.java:491)
[id: 0x5aeca4d0, /192.168.3.103:36724 :> hadoop3/192.168.3.103:41414] UNBOUND
[DEBUG] 2018-07-16 17:09:00,440 method:org.apache.avro.ipc.NettyTransceiver$NettyClientAvroHandler.handleUpstream(NettyTransceiver.java:491)
[id: 0x5aeca4d0, /192.168.3.103:36724 :> hadoop3/192.168.3.103:41414] CLOSED
[DEBUG] 2018-07-16 17:09:00,440 method:org.apache.avro.ipc.NettyTransceiver$NettyClientAvroHandler.handleUpstream(NettyTransceiver.java:495)
Remote peer hadoop3/192.168.3.103:41414 closed connection.
[root@hadoop3 MyBgJavaLan]#

  

2018-07-16 17:08:21,448 (conf-file-poller-0) [DEBUG - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:127)] Checking file:conf/flume.conf for changes
2018-07-16 17:08:51,448 (conf-file-poller-0) [DEBUG - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:127)] Checking file:conf/flume.conf for changes
2018-07-16 17:09:00,202 (New I/O server boss #5) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x6e7b6074, /192.168.3.103:36724 => /192.168.3.103:41414] OPEN
2018-07-16 17:09:00,203 (New I/O worker #2) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x6e7b6074, /192.168.3.103:36724 => /192.168.3.103:41414] BOUND: /192.168.3.103:41414
2018-07-16 17:09:00,203 (New I/O worker #2) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x6e7b6074, /192.168.3.103:36724 => /192.168.3.103:41414] CONNECTED: /192.168.3.103:36724
2018-07-16 17:09:00,391 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,391 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,405 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,406 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,407 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,407 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,408 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,409 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,410 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,410 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,411 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,412 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,413 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,413 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,415 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,415 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,417 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,417 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,418 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,419 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,420 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,420 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,422 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,422 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,424 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,424 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,425 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,425 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,426 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,427 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,428 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,428 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,429 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,430 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,431 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,431 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,432 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,432 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,434 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,434 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,437 (New I/O worker #2) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x6e7b6074, /192.168.3.103:36724 :> /192.168.3.103:41414] DISCONNECTED
2018-07-16 17:09:00,439 (New I/O worker #2) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x6e7b6074, /192.168.3.103:36724 :> /192.168.3.103:41414] UNBOUND
2018-07-16 17:09:00,439 (New I/O worker #2) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x6e7b6074, /192.168.3.103:36724 :> /192.168.3.103:41414] CLOSED
2018-07-16 17:09:00,439 (New I/O worker #2) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.channelClosed(NettyServer.java:209)] Connection to /192.168.3.103:36724 disconnected.
2018-07-16 17:09:21,448 (conf-file-poller-0) [DEBUG - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:127)] Checking file:conf/flume.conf for changes
2018-07-16 17:09:51,449 (conf-file-poller-0) [DEBUG - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:127)] Checking file:conf/flume.conf for changes

  

[root@hadoop3 apache-flume-1.8.0-bin]# ll -as  /home/jLog/
total 8
0 drwxr-xr-x   2 root root   36 Jul 16 17:09 .
4 drwxr-xr-x. 12 root root 4096 Jul 15 15:40 ..
4 -rw-r--r--   1 root root 1551 Jul 16 17:09 D.log
0 -rw-r--r--   1 root root    0 Jul 16 17:09 error.log
[root@hadoop3 apache-flume-1.8.0-bin]# cat /home/jLog/error.log
[root@hadoop3 apache-flume-1.8.0-bin]# cat /home/jLog/D.log
2018-07-16 17:09:00 [ main:0 ] - [ DEBUG ] Batch size string = 0
2018-07-16 17:09:00 [ main:4 ] - [ WARN ] Invalid value for batchSize: 0; Using default value.
2018-07-16 17:09:00 [ main:11 ] - [ WARN ] Using default maxIOWorkers
2018-07-16 17:09:00 [ main:57 ] - [ DEBUG ] Using Netty bootstrap options: {connectTimeoutMillis=20000, tcpNoDelay=true}
2018-07-16 17:09:00 [ main:58 ] - [ DEBUG ] Connecting to hadoop3/192.168.3.103:41414
2018-07-16 17:09:00 [ main:76 ] - [ DEBUG ] [id: 0x5aeca4d0] OPEN
2018-07-16 17:09:00 [ New I/O worker #1:134 ] - [ DEBUG ] [id: 0x5aeca4d0, /192.168.3.103:36724 => hadoop3/192.168.3.103:41414] BOUND: /192.168.3.103:36724
2018-07-16 17:09:00 [ New I/O worker #1:134 ] - [ DEBUG ] [id: 0x5aeca4d0, /192.168.3.103:36724 => hadoop3/192.168.3.103:41414] CONNECTED: hadoop3/192.168.3.103:41414
2018-07-16 17:09:00 [ main:363 ] - [ DEBUG ] Disconnecting from hadoop3/192.168.3.103:41414
2018-07-16 17:09:00 [ main:364 ] - [ DEBUG ] Removing 1 pending request(s).
2018-07-16 17:09:00 [ New I/O worker #1:366 ] - [ DEBUG ] [id: 0x5aeca4d0, /192.168.3.103:36724 :> hadoop3/192.168.3.103:41414] DISCONNECTED
2018-07-16 17:09:00 [ New I/O worker #1:367 ] - [ DEBUG ] [id: 0x5aeca4d0, /192.168.3.103:36724 :> hadoop3/192.168.3.103:41414] UNBOUND
2018-07-16 17:09:00 [ New I/O worker #1:368 ] - [ DEBUG ] [id: 0x5aeca4d0, /192.168.3.103:36724 :> hadoop3/192.168.3.103:41414] CLOSED
2018-07-16 17:09:00 [ New I/O worker #1:368 ] - [ DEBUG ] Remote peer hadoop3/192.168.3.103:41414 closed connection.
[root@hadoop3 apache-flume-1.8.0-bin]#
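error.log comes out empty because the E appender's Threshold is ERROR and this run only emitted DEBUG and WARN events; all of them were captured by the D appender in D.log.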

  
