Log4j – Configuring Log4j 2 - Apache Log4j 2 https://logging.apache.org/log4j/2.x/manual/configuration.html

log4j2 practical usage guide - CSDN blog https://blog.csdn.net/vbirdbest/article/details/71751835

Apache log4j 1.2 - Frequently Asked Technical Questions http://logging.apache.org/log4j/1.2/faq.html#noconfig

log4j uses Thread.getContextClassLoader().getResource() to locate the default configuration files and does not directly check the file system.
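A quick way to see which log4j.properties actually wins on the classpath is to ask the context classloader directly; a minimal sketch using only standard JDK calls:

// Prints the URL of the log4j.properties the context classloader resolves,
// e.g. jar:file:.../MyAid-1.0.0-jar-with-dependencies.jar!/log4j.properties
java.net.URL cfg = Thread.currentThread().getContextClassLoader().getResource("log4j.properties");
System.out.println("log4j.properties resolved to: " + cfg);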

[INFO] ------------------------------------------------------------------------
target
├── archive-tmp
├── classes
│   ├── com
│   │   └── mycom
│   │       ├── ArrayListExample.class
│   │       ├── ArrayListLinkedListExample.class
│   │       ├── LinkedListExample.class
│   │       ├── log4jFlume.class
│   │       ├── Log4jTest.class
│   │       ├── MyMR.class
│   │       ├── SparkWC.class
│   │       ├── TestMy.class
│   │       ├── TTSmy.class
│   │       ├── WordCount.class
│   │       ├── WordCountImprove.class
│   │       ├── WordCountImprove$IntSumReducer.class
│   │       ├── WordCountImprove$TokenizerMapper.class
│   │       ├── WordCountImprove$TokenizerMapper$CountersEnum.class
│   │       ├── WordCount$IntSumReducer.class
│   │       └── WordCount$TokenizerMapper.class
│   └── log4j.properties
├── generated-sources
│   └── annotations
├── maven-archiver
│   └── pom.properties
├── MyAid-1.0.0.jar
├── MyAid-1.0.0-jar-with-dependencies.jar
└── surefire

8 directories, 20 files
[root@hadoop3 MyBgJavaLan]# java -jar target/MyAid-1.0.0-jar-with-dependencies.jar com.mycom.Log4jTest
123
[INFO ] 2018-07-15 16:32:44,599 method:com.mycom.Log4jTest.main(Log4jTest.java:12)
my-info
[DEBUG] 2018-07-15 16:32:44,601 method:com.mycom.Log4jTest.main(Log4jTest.java:13)
my-debug
[ERROR] 2018-07-15 16:32:44,601 method:com.mycom.Log4jTest.main(Log4jTest.java:14)
my-error
[root@hadoop3 MyBgJavaLan]# ll -as
total 40
 0 drwxr-xr-x   4 root root    87 Jul 15 16:31 .
 4 drwxr-xr-x. 12 root root  4096 Jul 15 15:40 ..
12 -rw-r--r--   1 root root 10452 Jul 15 15:26 mynote.txt
12 -rw-r--r--   1 root root 10445 Jul 15 15:49 pom.xml
12 -rw-r--r--   1 root root  9025 Jul 10 19:58 pom.xml.BAK.txt
 0 drwxr-xr-x   3 root root    18 Jul 15 16:31 src
 0 drwxr-xr-x   7 root root   171 Jul 15 16:32 target
[root@hadoop3 MyBgJavaLan]# ll -as /home/jLog/
total 12
0 drwxr-xr-x   2 root root   36 Jul 15 16:32 .
4 drwxr-xr-x. 12 root root 4096 Jul 15 15:40 ..
4 -rw-r--r--   1 root root  160 Jul 15 16:32 D.log
4 -rw-r--r--   1 root root   54 Jul 15 16:32 error.log
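Because the D appender threshold is DEBUG and the E appender threshold is ERROR (see the log4j.properties below), the error line should land in both files while my-debug appears only in D.log; a quick hedged check, assuming the paths above:

# my-error should be counted once in each file; my-debug only in D.log
grep -c my-error /home/jLog/D.log /home/jLog/error.log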
[root@hadoop3 MyBgJavaLan]# tree src
src
└── main
    ├── java
    │   └── com
    │       └── mycom
    │           ├── ArrayListExample.java
    │           ├── ArrayListLinkedListExample.java
    │           ├── LinkedListExample.java
    │           ├── log4jFlume.java
    │           ├── Log4jTest.java
    │           ├── MyMR.java
    │           ├── SparkWC.java
    │           ├── TestMy.java
    │           ├── TTSmy.java
    │           ├── WordCountImprove.java
    │           └── WordCount.java
    └── resources
        └── log4j.properties

5 directories, 12 files
[root@hadoop3 MyBgJavaLan]#


package com.mycom;

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

public class Log4jTest {
    private static Logger logger = Logger.getLogger(Log4jTest.class);

    public static void main(String[] args) {
        System.out.println("123");
        logger.info("my-info");
        logger.debug("my-debug");
        logger.error("my-error");
    }
}
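The PropertyConfigurator import above goes unused here because log4j 1.x picks up log4j.properties from the classpath automatically. If the file lived outside the jar, it could be loaded explicitly; a minimal sketch, where the path is an assumption and not part of the build above:

// Hypothetical explicit configuration; the path is an assumption
PropertyConfigurator.configure("/etc/myapp/log4j.properties");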


#log4j.rootLogger=INFO
#log4j.category.com.mycom=INFO,flume
#log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
#log4j.appender.flume.Hostname=localhost
#log4j.appender.flume.Port=44444
#log4j.appender.flume.UnsafeMode=true
### Settings ###
log4j.rootLogger=debug,stdout,D,E
### Print log output to the console ###
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%-5p] %d{yyyy-MM-dd HH:mm:ss,SSS} method:%l%n%m%n
### Write DEBUG-and-above log output to /home/jLog/D.log ###
log4j.appender.D=org.apache.log4j.DailyRollingFileAppender
log4j.appender.D.File=/home/jLog/D.log
log4j.appender.D.Append=true
log4j.appender.D.Threshold=DEBUG
log4j.appender.D.layout=org.apache.log4j.PatternLayout
log4j.appender.D.layout.ConversionPattern=%-d{yyyy-MM-dd HH:mm:ss} [ %t:%r ] - [ %p ] %m%n
### Write ERROR-and-above log output to /home/jLog/error.log ###
log4j.appender.E=org.apache.log4j.DailyRollingFileAppender
log4j.appender.E.File=/home/jLog/error.log
log4j.appender.E.Append=true
log4j.appender.E.Threshold=ERROR
log4j.appender.E.layout=org.apache.log4j.PatternLayout
log4j.appender.E.layout.ConversionPattern=%-d{yyyy-MM-dd HH:mm:ss} [ %t:%r ] - [ %p ] %m%n
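The commented-out flume appender at the top of this file, org.apache.flume.clients.log4jappender.Log4jAppender, ships in the Flume client libraries rather than in log4j itself; a hedged pom sketch, assuming version 1.8.0 to match the Flume install used below:

<!-- Assumption: flume-ng-log4jappender 1.8.0, matching the Flume 1.8.0 install -->
<dependency>
    <groupId>org.apache.flume.flume-ng-clients</groupId>
    <artifactId>flume-ng-log4jappender</artifactId>
    <version>1.8.0</version>
</dependency>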


<!-- https://mvnrepository.com/artifact/org.apache.logging.log4j/log4j-core -->
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.10.0</version>
</dependency>
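Note that Log4jTest above uses the log4j 1.x API (org.apache.log4j.Logger) together with a 1.x-style log4j.properties, which is served by the log4j:log4j artifact rather than by log4j-core 2.x; a hedged sketch of the 1.x dependency, assuming version 1.2.17:

<!-- Assumption: log4j 1.2.17, the final 1.x release -->
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>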


Sending log4j output directly to Flume
############
[root@hadoop3 apache-flume-1.8.0-bin]# ll -as
total
drwxr-xr-x root root .
drwxr-xr-x root root ..
drwxr-xr-x root root bin
-rw-r--r-- root root CHANGELOG
drwxr-xr-x root root conf
-rw-r--r-- root root DEVNOTES
-rw-r--r-- root root doap_Flume.rdf
drwxr-xr-x root root docs
drwxr-xr-x root root lib
-rw-r--r-- root root LICENSE
-rw-r--r-- root root NOTICE
-rw-r--r-- root root README.md
-rw-r--r-- root root RELEASE-NOTES
drwxr-xr-x root root tools
############
[root@hadoop3 apache-flume-1.8.0-bin]# tree conf/
conf/
├── flume-conf.properties.template
├── flume-env.ps1.template
├── flume-env.sh.template
└── log4j.properties

0 directories, 4 files
[root@hadoop3 apache-flume-1.8.0-bin]# cat conf/log4j.properties
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Define some default values that can be overridden by system properties.
#
# For testing, it may also be convenient to specify
# -Dflume.root.logger=DEBUG,console when launching flume.

#flume.root.logger=DEBUG,console
flume.root.logger=INFO,LOGFILE
flume.log.dir=./logs
flume.log.file=flume.log

log4j.logger.org.apache.flume.lifecycle = INFO
log4j.logger.org.jboss = WARN
log4j.logger.org.mortbay = INFO
log4j.logger.org.apache.avro.ipc.NettyTransceiver = WARN
log4j.logger.org.apache.hadoop = INFO
log4j.logger.org.apache.hadoop.hive = ERROR

# Define the root logger to the system property "flume.root.logger".
log4j.rootLogger=${flume.root.logger}

# Stock log4j rolling file appender
# Default log rotation configuration
log4j.appender.LOGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.LOGFILE.MaxFileSize=100MB
log4j.appender.LOGFILE.MaxBackupIndex=10
log4j.appender.LOGFILE.File=${flume.log.dir}/${flume.log.file}
log4j.appender.LOGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.LOGFILE.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) %x - %m%n

# Warning: If you enable the following appender it will fill up your disk if you don't have a cleanup job!
# This uses the updated rolling file appender from log4j-extras that supports a reliable time-based rolling policy.
# See http://logging.apache.org/log4j/companions/extras/apidocs/org/apache/log4j/rolling/TimeBasedRollingPolicy.html
# Add "DAILY" to flume.root.logger above if you want to use this
log4j.appender.DAILY=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.DAILY.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
log4j.appender.DAILY.rollingPolicy.ActiveFileName=${flume.log.dir}/${flume.log.file}
log4j.appender.DAILY.rollingPolicy.FileNamePattern=${flume.log.dir}/${flume.log.file}.%d{yyyy-MM-dd}
log4j.appender.DAILY.layout=org.apache.log4j.PatternLayout
log4j.appender.DAILY.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) %x - %m%n

# console
# Add "console" to flume.root.logger above if you want to use this
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d (%t) [%p - %l] %m%n
[root@hadoop3 apache-flume-1.8.0-bin]#

Flume 1.8.0 Developer Guide — Apache Flume http://flume.apache.org/FlumeDeveloperGuide.html

bin/flume-ng agent --conf ./conf/ -f conf/flume.conf -Dflume.root.logger=DEBUG,console -n agent1
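For reference, what each flag in that command does (standard flume-ng usage):

# --conf ./conf/                      configuration directory (flume-env.sh, log4j.properties)
# -f conf/flume.conf                  agent definition file
# -n agent1                           agent name; must match the agent1.* keys in flume.conf
# -Dflume.root.logger=DEBUG,console   per-run override of the default in conf/log4j.properties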

package com.mycom;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

import java.nio.charset.Charset;

// http://flume.apache.org/FlumeDeveloperGuide.html
public class MyAppFlume {
    public static void main(String[] args) {
        MyRpcClientFacade client = new MyRpcClientFacade();
        // Initialize client with the remote Flume agent's host and port
        client.init("hadoop3", 41414);

        // Send 20 events to the remote Flume agent. That agent should be
        // configured to listen with an AvroSource.
        String sampleData = "Hello Flume!";
        for (int i = 0; i < 20; i++) {
            client.sendDataToFlume(sampleData);
        }

        client.cleanUp();
    }
}

class MyRpcClientFacade {
    private RpcClient client;
    private String hostname;
    private int port;

    public void init(String hostname, int port) {
        // Set up the RPC connection
        this.hostname = hostname;
        this.port = port;
        this.client = RpcClientFactory.getDefaultInstance(hostname, port);
        // Use the following method to create a Thrift client instead of the line above:
        // this.client = RpcClientFactory.getThriftInstance(hostname, port);
    }

    public void sendDataToFlume(String data) {
        // Create a Flume Event object that encapsulates the sample data
        Event event = EventBuilder.withBody(data, Charset.forName("UTF-8"));
        // Send the event
        try {
            client.append(event);
        } catch (EventDeliveryException e) {
            // clean up and recreate the client
            client.close();
            client = null;
            client = RpcClientFactory.getDefaultInstance(hostname, port);
        }
    }

    public void cleanUp() {
        // Close the RPC connection
        client.close();
    }
}
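The RpcClient/RpcClientFactory API used above comes from the Flume client SDK; a hedged pom sketch, assuming flume-ng-sdk 1.8.0 to match the agent version:

<!-- Assumption: flume-ng-sdk 1.8.0, matching the Flume 1.8.0 agent -->
<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-sdk</artifactId>
    <version>1.8.0</version>
</dependency>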
[root@hadoop3 myBg]# cat apache-flume-1.8.0-bin/conf/flume.conf
# The configuration file needs to define the sources,
# the channels and the sinks.
# Sources, channels and sinks are defined per agent,
# in this case called 'agent1'

agent1.channels.ch1.type = memory

agent1.sources.avro-source1.channels = ch1
agent1.sources.avro-source1.type = avro
agent1.sources.avro-source1.bind = 0.0.0.0
agent1.sources.avro-source1.port = 41414

agent1.sinks.log-sink1.channel = ch1
agent1.sinks.log-sink1.type = logger

agent1.channels = ch1
agent1.sources = avro-source1
agent1.sinks = log-sink1
[root@hadoop3 MyBgJavaLan]# java -classpath target/MyAid-1.0.0-jar-with-dependencies.jar  com.mycom.MyAppFlume
[DEBUG] 2018-07-16 17:09:00,072 method:org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:498)
Batch size string = 0
[WARN ] 2018-07-16 17:09:00,076 method:org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:504)
Invalid value for batchSize: 0; Using default value.
[WARN ] 2018-07-16 17:09:00,083 method:org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:634)
Using default maxIOWorkers
[DEBUG] 2018-07-16 17:09:00,129 method:org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:195)
Using Netty bootstrap options: {connectTimeoutMillis=20000, tcpNoDelay=true}
[DEBUG] 2018-07-16 17:09:00,130 method:org.apache.avro.ipc.NettyTransceiver.getChannel(NettyTransceiver.java:252)
Connecting to hadoop3/192.168.3.103:41414
[DEBUG] 2018-07-16 17:09:00,148 method:org.apache.avro.ipc.NettyTransceiver$NettyClientAvroHandler.handleUpstream(NettyTransceiver.java:491)
[id: 0x5aeca4d0] OPEN
[DEBUG] 2018-07-16 17:09:00,206 method:org.apache.avro.ipc.NettyTransceiver$NettyClientAvroHandler.handleUpstream(NettyTransceiver.java:491)
[id: 0x5aeca4d0, /192.168.3.103:36724 => hadoop3/192.168.3.103:41414] BOUND: /192.168.3.103:36724
[DEBUG] 2018-07-16 17:09:00,206 method:org.apache.avro.ipc.NettyTransceiver$NettyClientAvroHandler.handleUpstream(NettyTransceiver.java:491)
[id: 0x5aeca4d0, /192.168.3.103:36724 => hadoop3/192.168.3.103:41414] CONNECTED: hadoop3/192.168.3.103:41414
[DEBUG] 2018-07-16 17:09:00,435 method:org.apache.avro.ipc.NettyTransceiver.disconnect(NettyTransceiver.java:314)
Disconnecting from hadoop3/192.168.3.103:41414
[DEBUG] 2018-07-16 17:09:00,436 method:org.apache.avro.ipc.NettyTransceiver.disconnect(NettyTransceiver.java:336)
Removing 1 pending request(s).
[DEBUG] 2018-07-16 17:09:00,438 method:org.apache.avro.ipc.NettyTransceiver$NettyClientAvroHandler.handleUpstream(NettyTransceiver.java:491)
[id: 0x5aeca4d0, /192.168.3.103:36724 :> hadoop3/192.168.3.103:41414] DISCONNECTED
[DEBUG] 2018-07-16 17:09:00,439 method:org.apache.avro.ipc.NettyTransceiver$NettyClientAvroHandler.handleUpstream(NettyTransceiver.java:491)
[id: 0x5aeca4d0, /192.168.3.103:36724 :> hadoop3/192.168.3.103:41414] UNBOUND
[DEBUG] 2018-07-16 17:09:00,440 method:org.apache.avro.ipc.NettyTransceiver$NettyClientAvroHandler.handleUpstream(NettyTransceiver.java:491)
[id: 0x5aeca4d0, /192.168.3.103:36724 :> hadoop3/192.168.3.103:41414] CLOSED
[DEBUG] 2018-07-16 17:09:00,440 method:org.apache.avro.ipc.NettyTransceiver$NettyClientAvroHandler.handleUpstream(NettyTransceiver.java:495)
Remote peer hadoop3/192.168.3.103:41414 closed connection.
[root@hadoop3 MyBgJavaLan]#


2018-07-16 17:08:21,448 (conf-file-poller-0) [DEBUG - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:127)] Checking file:conf/flume.conf for changes
2018-07-16 17:08:51,448 (conf-file-poller-0) [DEBUG - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:127)] Checking file:conf/flume.conf for changes
2018-07-16 17:09:00,202 (New I/O server boss #5) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x6e7b6074, /192.168.3.103:36724 => /192.168.3.103:41414] OPEN
2018-07-16 17:09:00,203 (New I/O worker #2) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x6e7b6074, /192.168.3.103:36724 => /192.168.3.103:41414] BOUND: /192.168.3.103:41414
2018-07-16 17:09:00,203 (New I/O worker #2) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x6e7b6074, /192.168.3.103:36724 => /192.168.3.103:41414] CONNECTED: /192.168.3.103:36724
2018-07-16 17:09:00,391 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,391 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,405 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,406 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,407 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,407 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,408 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,409 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,410 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,410 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,411 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,412 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,413 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,413 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,415 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,415 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,417 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,417 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,418 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,419 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,420 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,420 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,422 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,422 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,424 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,424 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,425 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,425 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,426 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,427 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,428 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,428 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,429 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,430 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,431 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,431 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,432 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,432 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,434 (New I/O worker #2) [DEBUG - org.apache.flume.source.AvroSource.append(AvroSource.java:351)] Avro source avro-source1: Received avro event
2018-07-16 17:09:00,434 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 48 65 6C 6C 6F 20 46 6C 75 6D 65 21 Hello Flume! }
2018-07-16 17:09:00,437 (New I/O worker #2) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x6e7b6074, /192.168.3.103:36724 :> /192.168.3.103:41414] DISCONNECTED
2018-07-16 17:09:00,439 (New I/O worker #2) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x6e7b6074, /192.168.3.103:36724 :> /192.168.3.103:41414] UNBOUND
2018-07-16 17:09:00,439 (New I/O worker #2) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x6e7b6074, /192.168.3.103:36724 :> /192.168.3.103:41414] CLOSED
2018-07-16 17:09:00,439 (New I/O worker #2) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.channelClosed(NettyServer.java:209)] Connection to /192.168.3.103:36724 disconnected.
2018-07-16 17:09:21,448 (conf-file-poller-0) [DEBUG - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:127)] Checking file:conf/flume.conf for changes
2018-07-16 17:09:51,449 (conf-file-poller-0) [DEBUG - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:127)] Checking file:conf/flume.conf for changes


[root@hadoop3 apache-flume-1.8.0-bin]# ll -as /home/jLog/
total 8
0 drwxr-xr-x   2 root root   36 Jul 16 17:09 .
4 drwxr-xr-x. 12 root root 4096 Jul 15 15:40 ..
4 -rw-r--r--   1 root root 1551 Jul 16 17:09 D.log
0 -rw-r--r--   1 root root    0 Jul 16 17:09 error.log
[root@hadoop3 apache-flume-1.8.0-bin]# cat /home/jLog/error.log
[root@hadoop3 apache-flume-1.8.0-bin]# cat /home/jLog/D.log
2018-07-16 17:09:00 [ main:0 ] - [ DEBUG ] Batch size string = 0
2018-07-16 17:09:00 [ main:4 ] - [ WARN ] Invalid value for batchSize: 0; Using default value.
2018-07-16 17:09:00 [ main:11 ] - [ WARN ] Using default maxIOWorkers
2018-07-16 17:09:00 [ main:57 ] - [ DEBUG ] Using Netty bootstrap options: {connectTimeoutMillis=20000, tcpNoDelay=true}
2018-07-16 17:09:00 [ main:58 ] - [ DEBUG ] Connecting to hadoop3/192.168.3.103:41414
2018-07-16 17:09:00 [ main:76 ] - [ DEBUG ] [id: 0x5aeca4d0] OPEN
2018-07-16 17:09:00 [ New I/O worker #1:134 ] - [ DEBUG ] [id: 0x5aeca4d0, /192.168.3.103:36724 => hadoop3/192.168.3.103:41414] BOUND: /192.168.3.103:36724
2018-07-16 17:09:00 [ New I/O worker #1:134 ] - [ DEBUG ] [id: 0x5aeca4d0, /192.168.3.103:36724 => hadoop3/192.168.3.103:41414] CONNECTED: hadoop3/192.168.3.103:41414
2018-07-16 17:09:00 [ main:363 ] - [ DEBUG ] Disconnecting from hadoop3/192.168.3.103:41414
2018-07-16 17:09:00 [ main:364 ] - [ DEBUG ] Removing 1 pending request(s).
2018-07-16 17:09:00 [ New I/O worker #1:366 ] - [ DEBUG ] [id: 0x5aeca4d0, /192.168.3.103:36724 :> hadoop3/192.168.3.103:41414] DISCONNECTED
2018-07-16 17:09:00 [ New I/O worker #1:367 ] - [ DEBUG ] [id: 0x5aeca4d0, /192.168.3.103:36724 :> hadoop3/192.168.3.103:41414] UNBOUND
2018-07-16 17:09:00 [ New I/O worker #1:368 ] - [ DEBUG ] [id: 0x5aeca4d0, /192.168.3.103:36724 :> hadoop3/192.168.3.103:41414] CLOSED
2018-07-16 17:09:00 [ New I/O worker #1:368 ] - [ DEBUG ] Remote peer hadoop3/192.168.3.103:41414 closed connection.
[root@hadoop3 apache-flume-1.8.0-bin]#
