Metrics has five basic components (a combined sketch of all five follows this list):
1: Counter
  Records how many times something has happened
2: Gauge
  Reports an instantaneous value
3: Meter
  Measures the rate at which events occur
4: Histogram
  Provides statistics over a stream of values: besides the max, min, and mean, it also measures the median and quantiles such as the 75th or 99th percentile
5: Timer
  Measures both how often a piece of code is called and how long each call takes; essentially a Meter plus a Histogram, giving you TPS as well as execution time.
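
A minimal sketch, assuming metrics-core 3.x, of how all five types hang off a single MetricRegistry; the class name and the demo.* metric names are illustrative only, not part of this project:

package com.newland.learning;

import com.codahale.metrics.*;

/**
* Sketch only (assuming metrics-core 3.x): all five metric types obtained from one MetricRegistry.
* The class name and metric names are illustrative.
*/
public class FiveMetricsSketch {
    public static void main(String[] args) {
        MetricRegistry registry = new MetricRegistry();

        Counter counter = registry.counter("demo.counter");         // how many times something happened
        Meter meter = registry.meter("demo.meter");                 // rate of events
        Histogram histogram = registry.histogram("demo.histogram"); // distribution of values
        Timer timer = registry.timer("demo.timer");                 // rate + duration
        registry.register("demo.gauge",                             // instantaneous value
                (Gauge<Long>) () -> Runtime.getRuntime().freeMemory());

        counter.inc();
        meter.mark();
        histogram.update(42);
        Timer.Context context = timer.time();
        // ... the work being timed would go here ...
        context.stop();
    }
}

Everything that follows is just these calls plus a reporter that periodically prints or stores the registry's contents.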

This post covers:

1. Basic usage of the five metric types, reported to the console

2. Using a counter to monitor the length of a queue

3. Reporting metrics to CSV, JMX, and JDBC

1. The five metric types, reported to the console

The base class:

package com.newland.learning;

import com.codahale.metrics.MetricRegistry;

import java.util.concurrent.TimeUnit;

/**
* Created by garfield on 2018/1/28.
* Project test
* Package com.newland.learning
*/
public class Base {
    protected static MetricRegistry metric = Constants.REGISTER;

    protected static void secondSleep(long value) {
        try {
            TimeUnit.SECONDS.sleep(value);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    protected static void milliSecondSleep(long value) {
        try {
            TimeUnit.MILLISECONDS.sleep(value);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

The core piece: a single MetricRegistry, held as a constant so every metric is registered in one place:

package com.newland.learning;

import com.codahale.metrics.MetricRegistry;

/**
* Created by garfield on 2018/1/28.
* Project test
* Package com.newland.learning
*/
public class Constants {
    public static MetricRegistry REGISTER = new MetricRegistry();
}

First, the console reporter:

package com.newland.learning;

import com.codahale.metrics.ConsoleReporter;

import java.util.concurrent.TimeUnit;

/**
* Created by garfield on 2018/1/28.
* Project test
* Package com.newland.learning
*/
public class ConsoleReport {
    public static void startReport() {
        final ConsoleReporter reporter = ConsoleReporter.forRegistry(Constants.REGISTER)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.SECONDS)
                .build();
        // report once per second
        reporter.start(1, TimeUnit.SECONDS);
    }
}
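
The reporter above runs on a schedule. If you only want a one-off dump, or need to shut the reporter down cleanly, ScheduledReporter also exposes report() and stop(); a small sketch (assuming metrics-core 3.x; this class is illustrative, not part of the original project):

package com.newland.learning;

import com.codahale.metrics.ConsoleReporter;

import java.util.concurrent.TimeUnit;

/**
* Sketch only: one-off reporting and clean shutdown, assuming metrics-core 3.x.
*/
public class ConsoleReportOnce {
    public static void main(String[] args) {
        ConsoleReporter reporter = ConsoleReporter.forRegistry(Constants.REGISTER)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.SECONDS)
                .build();
        reporter.report(); // print every registered metric once, without scheduling
        reporter.stop();   // release the reporter's executor; call this on shutdown for a started reporter
    }
}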

Gauge:

package com.newland.learning;

import com.codahale.metrics.Gauge;

/**
* Created by garfield on 2018/1/28.
* Project test
* Package com.newland.learning
*/
/**
* A Gauge reports an instantaneous value.
*/
public class GaugeTest extends Base {
    public static void main(String[] args) {
        ConsoleReport.startReport();
        metric.register("com.learn.gauge.freeMemory", new Gauge<Long>() {
            @Override
            public Long getValue() {
                // current free memory of the JVM
                return Runtime.getRuntime().freeMemory();
            }
        });
        secondSleep(2);
    }
}

Output:

18-1-28 19:18:54 ===============================================================

-- Gauges ----------------------------------------------------------------------
com.learn.gauge.freeMemory
value = 90862672

18-1-28 19:18:55 ===============================================================

-- Gauges ----------------------------------------------------------------------
com.learn.gauge.freeMemory
value = 90862672
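
If computing the gauge value is expensive, metrics-core also provides CachedGauge, which recomputes the value at most once per configured timeout. A sketch (the 10-second timeout and the metric name are arbitrary choices):

package com.newland.learning;

import com.codahale.metrics.CachedGauge;

import java.util.concurrent.TimeUnit;

/**
* Sketch only: CachedGauge recomputes the value at most once per timeout (metrics-core 3.x).
*/
public class CachedGaugeTest extends Base {
    public static void main(String[] args) {
        ConsoleReport.startReport();
        metric.register("com.learn.gauge.totalMemory", new CachedGauge<Long>(10, TimeUnit.SECONDS) {
            @Override
            protected Long loadValue() {
                // recomputed at most every 10 seconds; the cached value is served in between
                return Runtime.getRuntime().totalMemory();
            }
        });
        secondSleep(2);
    }
}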

Counter:

package com.newland.learning;

import com.codahale.metrics.Counter;

import java.util.Random;

/**
* Created by garfield on 2018/1/28.
* Project test
* Package com.newland.learning
*/
/**
* A Counter records how many times something has happened.
*/
public class CounterTest extends Base {
    final static Counter exec = metric.counter("com.learn.counter.invoke");

    public static void main(String[] args) {
        ConsoleReport.startReport();
        new Thread(() -> {
            for (int i = 1; i <= 3; i++) {
                exec.inc();
                milliSecondSleep(new Random().nextInt(500) * 2);
            }
        }).start();
        secondSleep(3);
    }
}

Output:

18-1-28 19:19:34 ===============================================================

-- Counters --------------------------------------------------------------------
com.learn.counter.invoke
count = 2

18-1-28 19:19:35 ===============================================================

-- Counters --------------------------------------------------------------------
com.learn.counter.invoke
count = 3

18-1-28 19:19:36 ===============================================================

-- Counters --------------------------------------------------------------------
com.learn.counter.invoke
count = 3

Meter:

package com.newland.learning;

import com.codahale.metrics.Meter;

import java.util.Random;

/**
* Created by garfield on 2018/1/28.
* Project test
* Package com.newland.learning
*/
/**
* A Meter measures the rate at which events occur.
*/
public class MeterTest extends Base {
    static final Meter requests = metric.meter("com.learn.meter.invoke");

    public static void main(String[] args) {
        ConsoleReport.startReport();
        new Thread(() -> {
            for (int i = 1; i <= 2; i++) {
                requests.mark();
                milliSecondSleep(new Random().nextInt(500) * 2);
            }
        }).start();
        secondSleep(2);
    }
}

Output:

18-1-28 19:20:52 ===============================================================

-- Meters ----------------------------------------------------------------------
com.learn.meter.invoke
count = 2
mean rate = 1.69 events/second
1-minute rate = 0.00 events/second
5-minute rate = 0.00 events/second
15-minute rate = 0.00 events/second

18-1-28 19:20:53 ===============================================================

-- Meters ----------------------------------------------------------------------
com.learn.meter.invoke
count = 2
mean rate = 0.93 events/second
1-minute rate = 0.00 events/second
5-minute rate = 0.00 events/second
15-minute rate = 0.00 events/second
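
mark() can also take a count, which is handy when one unit of work stands for several events. A sketch (the batch size and metric name are arbitrary):

package com.newland.learning;

import com.codahale.metrics.Meter;

/**
* Sketch only: Meter.mark(n) records several events in a single call.
*/
public class MeterBatchTest extends Base {
    public static void main(String[] args) {
        ConsoleReport.startReport();
        Meter requests = metric.meter("com.learn.meter.batch");
        requests.mark();   // one event
        requests.mark(50); // a batch of fifty events processed at once
        secondSleep(2);
    }
}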

Histogram:

package com.newland.learning;

import com.codahale.metrics.Histogram;

import java.util.Arrays;
import java.util.List;
import java.util.Random;

/**
* Created by garfield on 2018/1/28.
* Project test
* Package com.newland.learning
*/
/**
* A Histogram provides statistics over a stream of values. Besides the max, min, and mean,
* it also measures the median and quantiles such as the 75th or 99th percentile.
*/
public class HistogramTest extends Base {
    static final Histogram his = metric.histogram("com.learn.histogram.score");
    static List<Integer> scores = Arrays.asList(60, 75, 80, 62, 90, 42, 33, 95, 61, 73);

    public static void main(String[] args) {
        ConsoleReport.startReport();
        new Thread(() -> {
            scores.forEach((score) -> {
                his.update(score);
                milliSecondSleep(new Random().nextInt(500) * 2);
            });
        }).start();
        secondSleep(10);
    }
}

Output:

18-1-28 19:21:36 ===============================================================

-- Histograms ------------------------------------------------------------------
com.learn.histogram.score
count = 10
min = 33
max = 95
mean = 67.08
stddev = 18.68
median = 62.00
75% <= 80.00
95% <= 95.00
98% <= 95.00
99% <= 95.00
99.9% <= 95.00

18-1-28 19:21:37 ===============================================================

-- Histograms ------------------------------------------------------------------
com.learn.histogram.score
count = 10
min = 33
max = 95
mean = 67.08
stddev = 18.68
median = 62.00
75% <= 80.00
95% <= 95.00
98% <= 95.00
99% <= 95.00
99.9% <= 95.00

18-1-28 19:21:38 ===============================================================

-- Histograms ------------------------------------------------------------------
com.learn.histogram.score
count = 10
min = 33
max = 95
mean = 67.08
stddev = 18.68
median = 62.00
75% <= 80.00
95% <= 95.00
98% <= 95.00
99% <= 95.00
99.9% <= 95.00

18-1-28 19:21:39 ===============================================================

-- Histograms ------------------------------------------------------------------
com.learn.histogram.score
count = 10
min = 33
max = 95
mean = 67.08
stddev = 18.68
median = 62.00
75% <= 80.00
95% <= 95.00
98% <= 95.00
99% <= 95.00
99.9% <= 95.00
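
By default metric.histogram(...) uses an exponentially decaying reservoir, which biases the quantiles toward recent data. If you would rather compute them over a fixed time window, you can register a Histogram built on a different reservoir; a sketch (the one-minute window and metric name are arbitrary choices):

package com.newland.learning;

import com.codahale.metrics.Histogram;
import com.codahale.metrics.SlidingTimeWindowReservoir;

import java.util.concurrent.TimeUnit;

/**
* Sketch only: a Histogram whose snapshot only covers the last minute of samples.
*/
public class SlidingHistogramTest extends Base {
    public static void main(String[] args) {
        ConsoleReport.startReport();
        Histogram recent = metric.register("com.learn.histogram.recent",
                new Histogram(new SlidingTimeWindowReservoir(1, TimeUnit.MINUTES)));
        recent.update(75);
        secondSleep(2);
    }
}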

Timer:

package com.newland.learning;

import com.codahale.metrics.Timer;

/**
* Created by garfield on 2018/1/28.
* Project test
* Package com.newland.learning
*/
/**
* A Timer measures both how often a piece of code is called and how long it takes.
* It is essentially a Meter plus a Histogram: it gives you TPS as well as execution time.
*/
public class TimerTest extends Base {
    static final Timer timer = metric.timer("com.learn.timer.invoke");

    static void invoke(long time) {
        final Timer.Context context = timer.time();
        try {
            secondSleep(time);
        } finally {
            context.stop();
        }
    }

    public static void main(String[] args) {
        ConsoleReport.startReport();
        invoke(1);
        invoke(2);
        invoke(2);
        invoke(8);
        secondSleep(1);
    }
}

Output:

18-1-28 19:22:28 ===============================================================

-- Timers ----------------------------------------------------------------------
com.learn.timer.invoke
count = 3
mean rate = 0.23 calls/second
1-minute rate = 0.38 calls/second
5-minute rate = 0.40 calls/second
15-minute rate = 0.40 calls/second
min = 1.00 seconds
max = 2.00 seconds
mean = 1.68 seconds
stddev = 0.47 seconds
median = 2.00 seconds
75% <= 2.00 seconds
95% <= 2.00 seconds
98% <= 2.00 seconds
99% <= 2.00 seconds
99.9% <= 2.00 seconds

18-1-28 19:22:29 ===============================================================

-- Timers ----------------------------------------------------------------------
com.learn.timer.invoke
count = 4
mean rate = 0.28 calls/second
1-minute rate = 0.38 calls/second
5-minute rate = 0.40 calls/second
15-minute rate = 0.40 calls/second
min = 1.00 seconds
max = 8.00 seconds
mean = 3.44 seconds
stddev = 2.86 seconds
median = 2.00 seconds
75% <= 8.00 seconds
95% <= 8.00 seconds
98% <= 8.00 seconds
99% <= 8.00 seconds
99.9% <= 8.00 seconds
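
Besides the Context used above, Timer can also wrap a Callable directly: the call is timed and its return value is passed through. A sketch (the metric name and the returned string are illustrative):

package com.newland.learning;

import com.codahale.metrics.Timer;

/**
* Sketch only: Timer.time(Callable) times the call and passes its result through.
*/
public class TimerCallableTest extends Base {
    static final Timer timer = metric.timer("com.learn.timer.callable");

    public static void main(String[] args) throws Exception {
        ConsoleReport.startReport();
        String result = timer.time(() -> {
            secondSleep(1); // the work being measured
            return "done";
        });
        System.out.println(result);
        secondSleep(1);
    }
}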

2. Using a Counter to track queue length

package com.newland.learning;

import com.codahale.metrics.Counter;

import java.util.Queue;
import java.util.Random;
import java.util.concurrent.LinkedBlockingQueue;

/**
* Created by garfield on 2018/1/28.
* Project test
* Package com.newland.learning
*/
public class QueueSize extends Base {
    final static Counter exec = metric.counter("com.learn.counter2.invoke");
    public static Queue<String> q = new LinkedBlockingQueue<String>();
    public static Random random = new Random();

    public static void addJob(String job) {
        exec.inc();
        q.offer(job);
    }

    public static String takeJob() {
        exec.dec();
        return q.poll();
    }

    public static void main(String[] args) throws InterruptedException {
        ConsoleReport.startReport();
        int num = 1;
        while (true) {
            Thread.sleep(200);
            if (random.nextDouble() > 0.7) {
                String job = takeJob();
                System.out.println("take job : " + job);
            } else {
                String job = "Job-" + num;
                addJob(job);
                System.out.println("add job : " + job);
            }
            num++;
        }
    }
}

Output:

take job : null
add job : Job-2
take job : Job-2
add job : Job-4
add job : Job-5
18-1-28 19:23:12 ===============================================================

-- Counters --------------------------------------------------------------------
com.learn.counter2.invoke
count = 1
add job : Job-6
add job : Job-7
take job : Job-4
add job : Job-9

18-1-28 19:23:13 ===============================================================

-- Counters --------------------------------------------------------------------
com.learn.counter2.invoke
count = 3
add job : Job-10
take job : Job-5
add job : Job-12
add job : Job-13
add job : Job-14
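
Note that the Counter here is kept in sync by hand, and dec() runs even when the queue is already empty (hence the "take job : null" line and a count that can drift). An alternative is to register a Gauge that simply reads the queue's size; a sketch of that approach:

package com.newland.learning;

import com.codahale.metrics.Gauge;

import java.util.Queue;
import java.util.concurrent.LinkedBlockingQueue;

/**
* Sketch only: exposing the queue length as a Gauge instead of a hand-maintained Counter.
*/
public class QueueSizeGauge extends Base {
    static final Queue<String> q = new LinkedBlockingQueue<>();

    public static void main(String[] args) {
        ConsoleReport.startReport();
        metric.register("com.learn.gauge.queueSize", (Gauge<Integer>) q::size);
        q.offer("Job-1");
        q.offer("Job-2");
        secondSleep(2);
    }
}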

3. Reporting to CSV, JMX, and JDBC

package com.newland.learning;

import com.codahale.metrics.CsvReporter;
import com.codahale.metrics.JmxReporter;

import javax.sql.DataSource;
import java.io.File;
import java.util.concurrent.TimeUnit;

/**
* Created by garfield on 2018/1/28.
* Project test
* Package com.newland.learning
*/
public class MetricReport {
    private JmxReporter jmxReporter;
    private JdbcReporter jdbcReporter;
    private CsvReporter csvReporter;

    public void startJdbcReport() {
        // obtain a JDBC DataSource here...
        DataSource dataSource = null;
        String source = "test_db";
        jdbcReporter = JdbcReporter.forRegistry(Constants.REGISTER)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .isDelOldData(true)
                .build(source, dataSource);
        jdbcReporter.start(5, TimeUnit.SECONDS);
    }

    public void startCsvReport() {
        // output directory
        File fileDir = new File("");
        if (!fileDir.exists()) {
            fileDir.mkdirs();
        }
        csvReporter = CsvReporter.forRegistry(Constants.REGISTER)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build(fileDir);
        csvReporter.start(5, TimeUnit.SECONDS);
    }

    public void startJmxReport() {
        jmxReporter = JmxReporter.forRegistry(Constants.REGISTER).build();
        jmxReporter.start();
    }
}
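
For completeness, a sketch of how these reporters might be started from application code; which ones you start is up to you, and the JDBC one needs a real DataSource wired in first:

package com.newland.learning;

/**
* Sketch only: starting the reporters defined above.
*/
public class MetricReportMain {
    public static void main(String[] args) {
        MetricReport report = new MetricReport();
        report.startCsvReport();  // one .csv file per metric in the configured directory
        report.startJmxReport();  // metrics become MBeans, visible in JConsole / VisualVM
        // report.startJdbcReport(); // requires a real DataSource to be configured first
    }
}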

There is no JDBC reporter in the library, so for JDBC output you write your own by extending ScheduledReporter:

package com.newland.learning;

import com.codahale.metrics.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Map;
import java.util.SortedMap;
import java.util.concurrent.TimeUnit;

/**
* Created by garfield on 2018/1/28.
* Project test
* Package com.newland.learning
*/
public class JdbcReporter extends ScheduledReporter {

/**
* Returns a new {@link Builder} for {@link JdbcReporter}.
*
* @param registry the registry to report
* @return a {@link Builder} instance for a {@link JdbcReporter}
*/
public static Builder forRegistry(MetricRegistry registry) {
return new Builder(registry);
}

/**
* A builder for {@link JdbcReporter} instances. Defaults to converting rates to events/second, converting durations
* to milliseconds, and not filtering metrics.
*/
public static class Builder {
private final MetricRegistry registry;
private TimeUnit rateUnit;
private TimeUnit durationUnit;
private Clock clock;
private MetricFilter filter;
private TimeUnit timestampUnit;
private boolean isDelOldData;

private Builder(MetricRegistry registry) {
this.registry = registry;
this.rateUnit = TimeUnit.SECONDS;
this.durationUnit = TimeUnit.MILLISECONDS;
this.clock = Clock.defaultClock();
this.filter = MetricFilter.ALL;
this.timestampUnit = TimeUnit.SECONDS;
this.isDelOldData = true;
}

/**
* Convert rates to the given time unit.
*
* @param rateUnit a unit of time
* @return {@code this}
*/
public Builder convertRatesTo(TimeUnit rateUnit) {
this.rateUnit = rateUnit;
return this;
}

/**
* Convert durations to the given time unit.
*
* @param durationUnit a unit of time
* @return {@code this}
*/
public Builder convertDurationsTo(TimeUnit durationUnit) {
this.durationUnit = durationUnit;
return this;
}

/**
* Use the given {@link Clock} instance for the time.
*
* @param clock a {@link Clock} instance
* @return {@code this}
*/
public Builder withClock(Clock clock) {
this.clock = clock;
return this;
}

/**
* Only report metrics which match the given filter.
*
* @param filter a {@link MetricFilter}
* @return {@code this}
*/
public Builder filter(MetricFilter filter) {
this.filter = filter;
return this;
}

/**
* Convert reporting timestamp to the given time unit.
*
* @param timestampUnit a unit of time
* @return {@code this}
*/
public Builder convertTimestampTo(TimeUnit timestampUnit) {
this.timestampUnit = timestampUnit;
return this;
}

public Builder isDelOldData(boolean isDelOldData) {
this.isDelOldData = isDelOldData;
return this;
}

/**
* Builds a {@link JdbcReporter} with the given properties to report metrics to a database
*
* @param source A value to identify the source of each metrics in database
* @param dataSource The {@link DataSource}, which will be used to store the data from each metric
* @return a {@link JdbcReporter}
*/
public JdbcReporter build(String source, DataSource dataSource) {
return new JdbcReporter(registry, source, dataSource, rateUnit, durationUnit, timestampUnit, clock, filter, isDelOldData);
}
}

private static final Logger logger = LoggerFactory.getLogger(JdbcReporter.class);

private final Clock clock;
private final String source;
private final DataSource dataSource;
private final TimeUnit timestampUnit;
private final boolean isDelOldData;

private static final String INSERT_GAUGE_QUERY =
"INSERT INTO METRIC_GAUGE (SOURCE, TIMESTAMP, NAME, VALUE) VALUES (?,?,?,?)";
private static final String INSERT_COUNTER_QUERY =
"INSERT INTO METRIC_COUNTER (SOURCE, TIMESTAMP, NAME, COUNT) VALUES (?,?,?,?)";
private static final String INSERT_METER_QUERY =
"INSERT INTO METRIC_METER (SOURCE,TIMESTAMP,NAME,COUNT,MEAN_RATE,M1_RATE,M5_RATE,M15_RATE,RATE_UNIT) "
+ "VALUES (?,?,?,?,?,?,?,?,?)";
private static final String INSERT_HISTOGRAM_QUERY =
"INSERT INTO METRIC_HISTOGRAM (SOURCE,TIMESTAMP,NAME,COUNT,MAX,MEAN,MIN,STDDEV,P50,P75,P95,P98,P99,P999) "
+ "VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?)";
private static final String INSERT_TIMER_QUERY =
"INSERT INTO METRIC_TIMER (SOURCE,TIMESTAMP,NAME,COUNT,MAX,MEAN,MIN,STDDEV,P50,P75,P95,P98,P99,P999,"
+ "MEAN_RATE,M1_RATE,M5_RATE,M15_RATE,RATE_UNIT,DURATION_UNIT) "
+ "VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)";

private static final String DELETE_GAUGE_QUERY =
"DELETE FROM METRIC_GAUGE WHERE SOURCE=? AND NAME=?";
private static final String DELETE_COUNTER_QUERY =
"DELETE FROM METRIC_COUNTER WHERE SOURCE=? AND NAME=?";
private static final String DELETE_METER_QUERY =
"DELETE FROM METRIC_METER WHERE SOURCE=? AND NAME=?";
private static final String DELETE_HISTOGRAM_QUERY =
"DELETE FROM METRIC_HISTOGRAM WHERE SOURCE=? AND NAME=?";
private static final String DELETE_TIMER_QUERY =
"DELETE FROM METRIC_TIMER WHERE SOURCE=? AND NAME=?";

private static final String DELETE_ALL_GAUGE_QUERY =
"DELETE FROM METRIC_GAUGE WHERE SOURCE=?";
private static final String DELETE_ALL_COUNTER_QUERY =
"DELETE FROM METRIC_COUNTER WHERE SOURCE=?";
private static final String DELETE_ALL_METER_QUERY =
"DELETE FROM METRIC_METER WHERE SOURCE=?";
private static final String DELETE_ALL_HISTOGRAM_QUERY =
"DELETE FROM METRIC_HISTOGRAM WHERE SOURCE=?";
private static final String DELETE_ALL_TIMER_QUERY =
"DELETE FROM METRIC_TIMER WHERE SOURCE=?";

private JdbcReporter(MetricRegistry registry, String source, DataSource dataSource, TimeUnit rateUnit,
TimeUnit durationUnit, TimeUnit timestampUnit, Clock clock, MetricFilter filter,
boolean isDelOldData) {
super(registry, "jdbc-reporter", filter, rateUnit, durationUnit);
this.source = source;
this.dataSource = dataSource;
this.timestampUnit = timestampUnit;
this.clock = clock;
if (source == null || source.trim().isEmpty()) {
throw new IllegalArgumentException("Source cannot be null or empty");
}
if (dataSource == null) {
throw new IllegalArgumentException("Data source cannot be null");
}
this.isDelOldData = isDelOldData;
}

@Override
public void start(long period, TimeUnit unit) {
if (isDelOldData) {
delAllMetric();
}
super.start(period, unit);
}

@SuppressWarnings("rawtypes")
@Override
public void report(SortedMap<String, Gauge> gauges, SortedMap<String, Counter> counters,
SortedMap<String, Histogram> histograms, SortedMap<String, Meter> meters,
SortedMap<String, Timer> timers) {
final long timestamp = timestampUnit.convert(clock.getTime(), TimeUnit.MILLISECONDS);

if (!gauges.isEmpty()) {
reportGauges(timestamp, gauges);
}
if (!counters.isEmpty()) {
reportCounters(timestamp, counters);
}
if (!histograms.isEmpty()) {
reportHistograms(timestamp, histograms);
}
if (!meters.isEmpty()) {
reportMeters(timestamp, meters);
}
if (!timers.isEmpty()) {
reportTimers(timestamp, timers);
}
}

@Override
protected String getRateUnit() {
return super.getRateUnit();
}
private void delAllMetric() {
Connection connection = null;
PreparedStatement ps = null;
try {
connection = dataSource.getConnection();
connection.setAutoCommit(true);
ps = connection.prepareStatement(DELETE_ALL_COUNTER_QUERY);
ps.setString(1, source);
ps.execute();
ps.close();

ps = connection.prepareStatement(DELETE_ALL_GAUGE_QUERY);
ps.setString(1, source);
ps.execute();
ps.close();

ps = connection.prepareStatement(DELETE_ALL_METER_QUERY);
ps.setString(1, source);
ps.execute();
ps.close();

ps = connection.prepareStatement(DELETE_ALL_HISTOGRAM_QUERY);
ps.setString(1, source);
ps.execute();
ps.close();

ps = connection.prepareStatement(DELETE_ALL_TIMER_QUERY);
ps.setString(1, source);
ps.execute();
ps.close();

ps = null;
connection.close();
connection = null;
} catch (SQLException e) {
rollbackTransaction(connection);
logger.error("Error when delAllMetric", e);
} finally {
closeQuietly(connection, ps, null);
}
}
private void delMetric(PreparedStatement ps, String name) throws SQLException {
ps.setString(1, source);
ps.setString(2, name);
}
@SuppressWarnings("rawtypes")
private void reportGauges(final long timestamp, final SortedMap<String, Gauge> gauges) {
Connection connection = null;
PreparedStatement ps = null;
PreparedStatement dps = null;
try {
connection = dataSource.getConnection();
connection.setAutoCommit(false);
ps = connection.prepareStatement(INSERT_GAUGE_QUERY);
dps = connection.prepareStatement(DELETE_GAUGE_QUERY);
for (Map.Entry<String, Gauge> entry : gauges.entrySet()) {
String name = entry.getKey();
Gauge gauge = entry.getValue();
reportGauge(timestamp, ps, name, gauge);
ps.addBatch();

delMetric(dps, name);
dps.addBatch();
}

if (isDelOldData) {
dps.executeBatch();
}

ps.executeBatch();
connection.commit();
dps.close();
dps = null;
ps.close();
ps = null;
connection.close();
connection = null;
} catch (SQLException e) {
rollbackTransaction(connection);
logger.error("Error when reporting gauges", e);
} finally {
closeQuietly(connection, ps, dps);
}
}

@SuppressWarnings("rawtypes")
private void reportGauge(final long timestamp, PreparedStatement ps, String name, Gauge gauge) throws SQLException {
ps.setString(1, source);
ps.setLong(2, timestamp);
ps.setString(3, name);
ps.setObject(4, gauge.getValue());
}

private void reportCounters(final long timestamp, final SortedMap<String, Counter> counters) {
Connection connection = null;
PreparedStatement ps = null;
PreparedStatement dps = null;
try {
connection = dataSource.getConnection();
connection.setAutoCommit(false);
ps = connection.prepareStatement(INSERT_COUNTER_QUERY);
dps = connection.prepareStatement(DELETE_COUNTER_QUERY);
for (Map.Entry<String, Counter> entry : counters.entrySet()) {
String name = entry.getKey();
Counter counter = entry.getValue();
reportCounter(timestamp, ps, name, counter);
ps.addBatch();

delMetric(dps, name);
dps.addBatch();
}

if (isDelOldData) {
dps.executeBatch();
}

ps.executeBatch();
connection.commit();
dps.close();
dps = null;
ps.close();
ps = null;
connection.close();
connection = null;
} catch (SQLException e) {
rollbackTransaction(connection);
logger.error("Error when reporting counters", e);
} finally {
closeQuietly(connection, ps, dps);
}
}

private void reportCounter(final long timestamp, PreparedStatement ps, String name, Counter counter)
throws SQLException {
ps.setString(1, source);
ps.setLong(2, timestamp);
ps.setString(3, name);
ps.setLong(4, counter.getCount());
}

private void reportHistograms(final long timestamp, final SortedMap<String, Histogram> histograms) {
Connection connection = null;
PreparedStatement ps = null;
PreparedStatement dps = null;
try {
connection = dataSource.getConnection();
connection.setAutoCommit(false);
ps = connection.prepareStatement(INSERT_HISTOGRAM_QUERY);
dps = connection.prepareStatement(DELETE_HISTOGRAM_QUERY);

for (Map.Entry<String, Histogram> entry : histograms.entrySet()) {
String name = entry.getKey();
Histogram histogram = entry.getValue();
reportHistogram(timestamp, ps, name, histogram);
ps.addBatch();

delMetric(dps, name);
dps.addBatch();
}

if (isDelOldData) {
dps.executeBatch();
}

ps.executeBatch();
connection.commit();
dps.close();
dps = null;
ps.close();
ps = null;
connection.close();
connection = null;
} catch (SQLException e) {
rollbackTransaction(connection);
logger.error("Error when reporting histograms", e);
} finally {
closeQuietly(connection, ps, dps);
}
}

private void reportHistogram(final long timestamp, PreparedStatement ps, String name, Histogram histogram)
throws SQLException {
final Snapshot snapshot = histogram.getSnapshot();

ps.setString(1, source);
ps.setLong(2, timestamp);
ps.setString(3, name);
ps.setLong(4, histogram.getCount());
ps.setDouble(5, snapshot.getMax());
ps.setDouble(6, snapshot.getMean());
ps.setDouble(7, snapshot.getMin());
ps.setDouble(8, snapshot.getStdDev());
ps.setDouble(9, snapshot.getMedian());
ps.setDouble(10, snapshot.get75thPercentile());
ps.setDouble(11, snapshot.get95thPercentile());
ps.setDouble(12, snapshot.get98thPercentile());
ps.setDouble(13, snapshot.get99thPercentile());
ps.setDouble(14, snapshot.get999thPercentile());
}

private void reportMeters(final long timestamp, final SortedMap<String, Meter> meters) {
Connection connection = null;
PreparedStatement ps = null;
PreparedStatement dps = null;
try {
connection = dataSource.getConnection();
connection.setAutoCommit(false);
ps = connection.prepareStatement(INSERT_METER_QUERY);
dps = connection.prepareStatement(DELETE_METER_QUERY);

for (Map.Entry<String, Meter> entry : meters.entrySet()) {
String name = entry.getKey();
Meter meter = entry.getValue();
reportMeter(timestamp, ps, name, meter);
ps.addBatch();

delMetric(dps, name);
dps.addBatch();
}

if (isDelOldData) {
dps.executeBatch();
}

ps.executeBatch();
connection.commit();
dps.close();
dps = null;
ps.close();
ps = null;
connection.close();
connection = null;
} catch (SQLException e) {
rollbackTransaction(connection);
logger.error("Error when reporting meters", e);
} finally {
closeQuietly(connection, ps, dps);
}
}

private void reportMeter(final long timestamp, PreparedStatement ps, String name, Meter meter) throws SQLException {
ps.setString(1, source);
ps.setLong(2, timestamp);
ps.setString(3, name);
ps.setLong(4, meter.getCount());
ps.setDouble(5, convertRate(meter.getMeanRate()));
ps.setDouble(6, convertRate(meter.getOneMinuteRate()));
ps.setDouble(7, convertRate(meter.getFiveMinuteRate()));
ps.setDouble(8, convertRate(meter.getFifteenMinuteRate()));
ps.setString(9, String.format("events/%s", getRateUnit()));
}

private void reportTimers(final long timestamp, final SortedMap<String, Timer> timers) {
Connection connection = null;
PreparedStatement ps = null;
PreparedStatement dps = null;
try {
connection = dataSource.getConnection();
connection.setAutoCommit(false);
ps = connection.prepareStatement(INSERT_TIMER_QUERY);
dps = connection.prepareStatement(DELETE_TIMER_QUERY);

for (Map.Entry<String, Timer> entry : timers.entrySet()) {
String name = entry.getKey();
Timer timer = entry.getValue();
reportTimer(timestamp, ps, name, timer);
ps.addBatch();

delMetric(dps, name);
dps.addBatch();
}

if (isDelOldData) {
dps.executeBatch();
}

ps.executeBatch();
connection.commit();
dps.close();
dps = null;
ps.close();
ps = null;
connection.close();
connection = null;
} catch (SQLException e) {
rollbackTransaction(connection);
logger.error("Error when reporting timers", e);
} finally {
closeQuietly(connection, ps, dps);
}
}

private void reportTimer(final long timestamp, PreparedStatement ps, String name, Timer timer) throws SQLException {
final Snapshot snapshot = timer.getSnapshot();

ps.setString(1, source);
ps.setLong(2, timestamp);
ps.setString(3, name);
ps.setLong(4, timer.getCount());
ps.setDouble(5, convertDuration(snapshot.getMax()));
ps.setDouble(6, convertDuration(snapshot.getMean()));
ps.setDouble(7, convertDuration(snapshot.getMin()));
ps.setDouble(8, convertDuration(snapshot.getStdDev()));
ps.setDouble(9, convertDuration(snapshot.getMedian()));
ps.setDouble(10, convertDuration(snapshot.get75thPercentile()));
ps.setDouble(11, convertDuration(snapshot.get95thPercentile()));
ps.setDouble(12, convertDuration(snapshot.get98thPercentile()));
ps.setDouble(13, convertDuration(snapshot.get99thPercentile()));
ps.setDouble(14, convertDuration(snapshot.get999thPercentile()));
ps.setDouble(15, convertRate(timer.getMeanRate()));
ps.setDouble(16, convertRate(timer.getOneMinuteRate()));
ps.setDouble(17, convertRate(timer.getFiveMinuteRate()));
ps.setDouble(18, convertRate(timer.getFifteenMinuteRate()));
ps.setString(19, String.format("calls/%s", getRateUnit()));
ps.setString(20, getDurationUnit());
}

private void rollbackTransaction(Connection connection) {
if (connection != null) {
try {
connection.rollback();
} catch (SQLException e) {
if (logger.isWarnEnabled()) {
logger.warn("Error when rolling back the transaction", e);
}
}
}
}

private void closeQuietly(Connection connection, PreparedStatement ps, PreparedStatement dps) {
if (ps != null) {
try {
ps.close();
} catch (SQLException e) {
// Ignore
}
}
if (dps != null) {
try {
dps.close();
} catch (SQLException e) {
// Ignore
}
}
if (connection != null) {
try {
connection.close();
} catch (SQLException e) {
// Ignore
}
}
}
}
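
The reporter expects tables such as METRIC_GAUGE, METRIC_COUNTER, METRIC_METER, METRIC_HISTOGRAM and METRIC_TIMER to already exist; the post does not show the schema. A sketch of what two of them could look like, with the column names taken from the INSERT statements above; the column types (and this helper class) are assumptions, and reserved words like TIMESTAMP, COUNT or VALUE may need quoting depending on the database:

package com.newland.learning;

import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.Statement;

/**
* Sketch only: possible DDL for two of the tables the JdbcReporter writes to.
* Column names follow the INSERT statements above; the types are assumptions.
*/
public class MetricSchemaSketch {
    public static void createTables(DataSource dataSource) throws Exception {
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE METRIC_COUNTER ("
                    + "SOURCE VARCHAR(64), TIMESTAMP BIGINT, NAME VARCHAR(255), COUNT BIGINT)");
            stmt.execute("CREATE TABLE METRIC_GAUGE ("
                    + "SOURCE VARCHAR(64), TIMESTAMP BIGINT, NAME VARCHAR(255), VALUE VARCHAR(255))");
        }
    }
}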

done

References:

blog.csdn.net/tracymkgld/article/details/51899721

http://blog.csdn.net/wzygis/article/details/52789105
