1, BufferSize_machine

1), template

This dashboard is mainly used to monitor BufferSize status.

name: the variable's name, referenced by later queries

label: the variable's display name, shown on the page

includeAll: whether to include an "All" option

query: the SQL statement behind the variable; the template settings are identical throughout, so only the SQL is kept from here on

classify

  select distinct(classify) from host_dict

model_name

  select distinct(model_name) from host_dict where classify in ($classify)
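
Grafana substitutes a multi-value template variable as a comma-separated list before the query reaches MySQL, which is why the dictionary queries can be chained like this. A sketch of the model_name query after substitution, with invented selections for $classify:

  -- Hypothetical expansion: 'iris' and 'collector' stand in for whatever is selected in the UI.
  select distinct(model_name) from host_dict where classify in ('iris', 'collector')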

2), KafkaSinkNetwork

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  value as value,
  host as metric
  FROM jmx_status as jmx
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = jmx.host
  WHERE $__timeFilter(time)
  AND attribute = 'BufferSize'
  AND object = 'KafkaSinkNetwork'
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  ORDER BY time ASC, host
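
Grafana's MySQL data source reads three columns from a time-series query: time_sec (the timestamp), value, and metric (the series name). Before execution it expands $__timeFilter(time) to a range condition over the dashboard's time window and interpolates the template variables, so the query above runs roughly like the following sketch (timestamps and variable values are invented):

  -- Hedged sketch of the expanded query; all concrete values are hypothetical.
  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  value as value,
  host as metric
  FROM jmx_status as jmx
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = jmx.host
  WHERE time BETWEEN FROM_UNIXTIME(1534003200) AND FROM_UNIXTIME(1534089599)
  AND attribute = 'BufferSize'
  AND object = 'KafkaSinkNetwork'
  AND hd.classify in ('iris')
  AND hd.model_name in ('collector-01')
  ORDER BY time ASC, host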

3), KafkaSinkFile

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  value as value,
  host as metric
  FROM jmx_status as jmx
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = jmx.host
  WHERE $__timeFilter(time)
  AND attribute = 'BufferSize'
  AND object = 'KafkaSinkFile'
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  ORDER BY time ASC, host

4), FileSink

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  value as value,
  host as metric
  FROM jmx_status as jmx
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = jmx.host
  WHERE $__timeFilter(time)
  AND attribute = 'BufferSize'
  AND object = 'FileSink'
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  ORDER BY time ASC, host

5), MessageCopy

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  value as value,
  host as metric
  FROM jmx_status as jmx
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = jmx.host
  WHERE $__timeFilter(time)
  AND attribute = 'BufferSize'
  AND object = 'MessageCopy'
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  ORDER BY time ASC, host

2, BufferSizeTopic

1), BufferSize length

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  SUM(value) as value,
  object as metric
  FROM jmx_status
  WHERE $__timeFilter(time)
  AND attribute = 'BufferSize'
  GROUP BY object, time
  ORDER BY time ASC, metric

3, Metric_machine

1), template

classify

  select distinct(classify) from host_dict

model_name

  select distinct(model_name) from host_dict where classify in ($classify)

source_file

  select distinct(component) from topic_count where component like '%Source'

kafka_file

  select distinct(component) from topic_count where component like 'Kafka%'

2), total messages received

  SELECT
  UNIX_TIMESTAMP(tc.time) as time_sec,
  tc.host as metric,
  SUM(out_num) as value
  FROM topic_count as tc
  LEFT JOIN topic_dict as td
  ON tc.topic = td.topic
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = tc.host
  WHERE $__timeFilter(time)
  AND tc.component in ($source_file)
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  GROUP BY tc.host, tc.time
  ORDER BY tc.time ASC, tc.host

3), total messages sent to Kafka

  SELECT
  UNIX_TIMESTAMP(tc.time) as time_sec,
  tc.host as metric,
  SUM(in_num - out_num) as value
  FROM topic_count as tc
  LEFT JOIN topic_dict as td
  ON tc.topic = td.topic
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = tc.host
  WHERE $__timeFilter(time)
  AND tc.component in ($kafka_file)
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  GROUP BY tc.host, tc.time
  ORDER BY tc.time ASC, tc.host

4), failed Kafka sends

  SELECT
  UNIX_TIMESTAMP(tc.time) as time_sec,
  tc.host as metric,
  SUM(out_num) as value
  FROM topic_count as tc
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = tc.host
  WHERE $__timeFilter(time)
  AND (tc.component = 'KafkaSinkNetwork' OR tc.component = 'KafkaSinkFile')
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  GROUP BY tc.host, tc.time
  ORDER BY tc.time ASC, tc.host
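
Taken together, these panels imply a convention for the topic_count counters: for Source components out_num is the number of messages received, while for KafkaSink components in_num is the number of send attempts and out_num the number of failures, so in_num - out_num is the success count. A hedged illustration under that assumption (rows and numbers invented):

  -- Hypothetical rows:
  --   host      component         in_num  out_num
  --   10.0.0.1  TcpSource              0     1000   (1000 messages received)
  --   10.0.0.1  KafkaSinkNetwork    1000        3   (1000 attempts, 3 failures)
  -- Success ratio per host under this reading of the counters:
  SELECT
  host,
  SUM(in_num - out_num) / SUM(in_num) as success_ratio
  FROM topic_count
  WHERE component in ('KafkaSinkNetwork', 'KafkaSinkFile')
  GROUP BY host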

4, Metric_topic

1), template

component_name

  select distinct(component_name) from topic_dict

topic

  select distinct(topic) from topic_dict where component_name in ($component_name)

source_file

  select distinct(component) from topic_count where component like '%Source'

kafka_file

  select distinct(component) from topic_count where component like 'Kafka%'

2), total received by iris

  SELECT
  UNIX_TIMESTAMP(tc.time) as time_sec,
  tc.topic as metric,
  SUM(out_num) as value
  FROM topic_count as tc
  LEFT JOIN topic_dict as td
  ON tc.topic = td.topic
  WHERE $__timeFilter(time)
  AND tc.topic in ($topic)
  AND tc.component in ($source_file)
  GROUP BY tc.topic, tc.time
  ORDER BY tc.time ASC, tc.topic

3), total messages sent to Kafka

  SELECT
  UNIX_TIMESTAMP(tc.time) as time_sec,
  tc.topic as metric,
  SUM(in_num - out_num) as value
  FROM topic_count as tc
  LEFT JOIN topic_dict as td
  ON tc.topic = td.topic
  WHERE $__timeFilter(time)
  AND tc.topic in ($topic)
  AND tc.component in ($kafka_file)
  GROUP BY tc.topic, tc.time
  ORDER BY tc.time ASC, tc.topic

4), failed Kafka sends

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  SUM(out_num) as value,
  topic as metric
  FROM topic_count
  WHERE $__timeFilter(time)
  AND (component = 'KafkaSinkNetwork' OR component = 'KafkaSinkFile')
  AND topic in ($topic)
  GROUP BY topic, time
  ORDER BY time ASC, metric

5), lost message count

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  (temp.value - SUM(tc.in_num - tc.out_num)) as value,
  tc.topic as metric
  FROM topic_count as tc
  LEFT JOIN topic_dict as td
  ON tc.topic = td.topic
  RIGHT JOIN (
    SELECT
    tc2.time as calen,
    tc2.topic,
    SUM(out_num) as value
    FROM topic_count as tc2
    LEFT JOIN topic_dict as td2
    ON tc2.topic = td2.topic
    WHERE $__timeFilter(tc2.time)
    AND tc2.topic in ($topic)
    AND tc2.component in ($source_file)
    AND tc2.component <> 'FileSource'
    GROUP BY tc2.topic, tc2.time
  ) as temp
  ON temp.calen = tc.time
  AND temp.topic = tc.topic
  WHERE $__timeFilter(time)
  AND tc.topic in ($topic)
  AND tc.component in ($kafka_file)
  GROUP BY tc.time, tc.topic
  ORDER BY tc.topic, tc.time ASC
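
The inner query totals what the Source components received per (topic, time) bucket, excluding FileSource so that replays from file do not inflate the count; the outer query totals what the Kafka sinks sent successfully; the difference is the number of messages lost in between. A simplified sketch of the same reconciliation for one topic and one timestamp (topic name and time are invented):

  -- Hedged sketch, not the saved panel query.
  SELECT src.value - snk.value as lost_messages
  FROM (
    SELECT SUM(out_num) as value          -- received by Source components
    FROM topic_count
    WHERE topic = 'demo_topic' AND time = '2018-08-15 12:00:00'
    AND component like '%Source' AND component <> 'FileSource'
  ) as src,
  (
    SELECT SUM(in_num - out_num) as value -- successfully sent by Kafka sinks
    FROM topic_count
    WHERE topic = 'demo_topic' AND time = '2018-08-15 12:00:00'
    AND component like 'Kafka%'
  ) as snk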

6), reconciliation summary, shown as a table

  SELECT
  date_format(time, '%Y-%m-%d %H:%i:%s') as time,
  tc.topic,
  temp.value as iris总接受量,
  SUM(tc.in_num - tc.out_num) as kafka发送成功,
  SUM(tc.out_num) as kafka发送失败,
  (temp.value - SUM(tc.in_num - tc.out_num)) as 消息丢失数
  FROM topic_count as tc
  LEFT JOIN topic_dict as td
  ON tc.topic = td.topic
  RIGHT JOIN (
    SELECT
    tc2.time as calen,
    tc2.topic,
    SUM(out_num) as value
    FROM topic_count as tc2
    LEFT JOIN topic_dict as td2
    ON tc2.topic = td2.topic
    WHERE $__timeFilter(tc2.time)
    AND tc2.topic in ($topic)
    AND tc2.component in ($source_file)
    AND tc2.component <> 'FileSource'
    GROUP BY tc2.topic, tc2.time
  ) as temp
  ON temp.calen = tc.time
  AND temp.topic = tc.topic
  WHERE $__timeFilter(time)
  AND tc.topic in ($topic)
  AND tc.component in ($kafka_file)
  GROUP BY tc.time, tc.topic
  ORDER BY tc.topic, tc.time ASC

5, QPS_Component

qps

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  SUM(value) as value,
  object as metric
  FROM jmx_status
  WHERE $__timeFilter(time)
  AND attribute = 'QPS'
  GROUP BY object, time
  ORDER BY time ASC, metric

6, QPS_machine

1), template

classify

  select distinct(classify) from host_dict

model_name

  select distinct(model_name) from host_dict where classify in ($classify)

2), tcp_source

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  value as value,
  host as metric
  FROM jmx_status as jmx
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = jmx.host
  WHERE $__timeFilter(time)
  AND attribute = 'QPS'
  AND object = 'TcpSource'
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  ORDER BY time DESC, metric
  limit

3), js_source

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  value as value,
  host as metric
  FROM jmx_status as jmx
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = jmx.host
  WHERE $__timeFilter(time)
  AND attribute = 'QPS'
  AND object = 'JsSource'
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  ORDER BY time DESC
  limit

4), legacy_source

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  value as value,
  host as metric
  FROM jmx_status as jmx
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = jmx.host
  WHERE object = 'LegacyJsSource'
  AND attribute = 'QPS'
  AND $__timeFilter(time)
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  ORDER BY time DESC
  limit

5), webSource

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  value as value,
  host as metric
  FROM jmx_status as jmx
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = jmx.host
  WHERE object = 'WebSource'
  AND attribute = 'QPS'
  AND $__timeFilter(time)
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  ORDER BY time DESC
  limit

6), zhixinSource

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  value as value,
  concat('KSF-', host) as metric
  FROM jmx_status as jmx
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = jmx.host
  WHERE object = 'ZhixinSource'
  AND attribute = 'QPS'
  AND $__timeFilter(time)
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  ORDER BY time DESC
  limit

7), cdn_httpsource

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  value as value,
  host as metric
  FROM jmx_status as jmx
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = jmx.host
  WHERE object = 'CdnHttpSource'
  AND attribute = 'QPS'
  AND $__timeFilter(time)
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  ORDER BY time DESC
  limit

8), qps_everyhost

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  SUM(value) as value,
  host as metric
  FROM jmx_status as jmx
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = jmx.host
  WHERE $__timeFilter(time)
  AND attribute = 'QPS'
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  GROUP BY host, time
  ORDER BY time DESC, metric
  limit

9), qps_hostnum

  SELECT
  UNIX_TIMESTAMP(js.time) as time_sec,
  COUNT(DISTINCT js.host) as value,
  hd.classify as metric
  FROM jmx_status as js
  LEFT JOIN host_dict as hd
  ON js.host = hd.innet_ip
  WHERE $__timeFilter(time)
  AND attribute = 'QPS'
  GROUP BY time, hd.classify
  ORDER BY time DESC, metric
  limit

7, Resource_machine

1), template

classify

  select distinct(classify) from host_dict

model_name

  select distinct(model_name) from host_dict where classify in ($classify)

2), cpu_used

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  value as value,
  host as metric
  FROM jmx_status as jmx
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = jmx.host
  WHERE $__timeFilter(time)
  AND attribute = 'SystemCpuLoad'
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  ORDER BY time ASC, metric

3), memory_used

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  value as value,
  host as metric
  FROM jmx_status as jmx
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = jmx.host
  WHERE $__timeFilter(time)
  AND attribute = 'HeapMemoryUsage.used'
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  ORDER BY time ASC, metric

4), thread_count

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  value as value,
  host as metric
  FROM jmx_status as jmx
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = jmx.host
  WHERE $__timeFilter(time)
  AND attribute = 'ThreadCount'
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  ORDER BY time ASC, metric

5), openfile_script

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  value as value,
  host as metric
  FROM jmx_status as jmx
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = jmx.host
  WHERE $__timeFilter(time)
  AND attribute = 'OpenFileDescriptorCount'
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  ORDER BY time ASC, metric

8, alert, used for alerting; needs matching alert configuration

Percentage of failed first attempts by iris to write to Kafka

  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  value as value,
  host as metric
  FROM jmx_status as jmx
  LEFT JOIN host_dict as hd
  ON hd.innet_ip = jmx.host
  WHERE $__timeFilter(time)
  AND attribute = 'OpenFileDescriptorCount'
  AND hd.classify in ($classify)
  AND hd.model_name in ($model_name)
  ORDER BY time ASC, metric

The query sits in the panel as usual; the alert rule itself is then configured under the panel's Alert tab.
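
Note that the saved query filters on OpenFileDescriptorCount rather than a failure metric. A hedged sketch of a first-attempt failure percentage built from topic_count instead, using the counter convention from section 4 (for KafkaSink components, in_num = attempted sends, out_num = failed sends; that reading is an assumption):

  -- Hedged sketch, not the saved query; assumes out_num/in_num = failures/attempts.
  SELECT
  UNIX_TIMESTAMP(time) as time_sec,
  100 * SUM(out_num) / SUM(in_num) as value,
  host as metric
  FROM topic_count
  WHERE $__timeFilter(time)
  AND (component = 'KafkaSinkNetwork' OR component = 'KafkaSinkFile')
  GROUP BY host, time
  ORDER BY time ASC, host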

9, saved SQL for the underlying tables

1), host_dict

  CREATE TABLE `host_dict` (
  `id` bigint() NOT NULL AUTO_INCREMENT COMMENT 'primary key',
  `classify` varchar() DEFAULT NULL COMMENT 'category',
  `model_name` varchar() DEFAULT NULL COMMENT 'module name',
  `innet_ip` varchar() DEFAULT NULL COMMENT 'internal IP',
  `outnet_ip` varchar() DEFAULT NULL COMMENT 'external IP',
  `cpu_core` int() DEFAULT NULL COMMENT 'CPU cores',
  `memory_size` int() DEFAULT NULL COMMENT 'memory size',
  `address` varchar() DEFAULT NULL COMMENT 'data center',
  `status` varchar() DEFAULT NULL COMMENT 'status',
  `plan` varchar() DEFAULT NULL COMMENT 'plan',
  PRIMARY KEY (`id`),
  KEY `classify` (`classify`,`model_name`,`innet_ip`),
  KEY `idx_innet_ip` (`innet_ip`) USING BTREE
  ) ENGINE=InnoDB AUTO_INCREMENT= DEFAULT CHARSET=utf8
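
A hypothetical row, to show how this dictionary feeds the classify and model_name template variables (all values invented; the elided varchar/int widths above must be filled in before the DDL will run):

  -- Hypothetical example row.
  INSERT INTO host_dict
  (classify, model_name, innet_ip, outnet_ip, cpu_core, memory_size, address, status, plan)
  VALUES
  ('iris', 'collector-01', '10.0.0.1', '203.0.113.1', 8, 16, 'idc-demo', 'ON', 'none');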

2), jmx_status

  CREATE TABLE `jmx_status` (
  `host` varchar() NOT NULL DEFAULT '',
  `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `report` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  `object` varchar() NOT NULL DEFAULT '',
  `attribute` varchar() NOT NULL DEFAULT '',
  `value` double DEFAULT NULL,
  PRIMARY KEY (`host`,`time`,`object`,`attribute`),
  KEY `idx_host_time_attribute_object` (`host`,`time`,`attribute`,`object`) USING BTREE,
  KEY `idx_time_attribute` (`time`,`attribute`) USING BTREE
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8
  /*!50100 PARTITION BY RANGE (unix_timestamp(time))
  (PARTITION p20180811 VALUES LESS THAN (1534003199) ENGINE = InnoDB,
  PARTITION p20180812 VALUES LESS THAN (1534089599) ENGINE = InnoDB,
  PARTITION p20180813 VALUES LESS THAN (1534175999) ENGINE = InnoDB,
  PARTITION p20180814 VALUES LESS THAN (1534262399) ENGINE = InnoDB,
  PARTITION p20180815 VALUES LESS THAN (1534348799) ENGINE = InnoDB,
  PARTITION p20180816 VALUES LESS THAN (1534435199) ENGINE = InnoDB,
  PARTITION p20180817 VALUES LESS THAN (1534521599) ENGINE = InnoDB,
  PARTITION p20180818 VALUES LESS THAN (1534607999) ENGINE = InnoDB,
  PARTITION p20180819 VALUES LESS THAN (1534694399) ENGINE = InnoDB,
  PARTITION p20180820 VALUES LESS THAN (1534780799) ENGINE = InnoDB,
  PARTITION p20180821 VALUES LESS THAN (1534867199) ENGINE = InnoDB,
  PARTITION p20180822 VALUES LESS THAN (1534953599) ENGINE = InnoDB) */
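
Because the table is range-partitioned by day, a new partition has to be appended before each day arrives. A sketch of the maintenance statement, continuing the boundary pattern above (the partition name and epoch value are illustrative):

  -- Append the next daily partition; RANGE partitions can only be added at the end.
  ALTER TABLE jmx_status
  ADD PARTITION (PARTITION p20180823 VALUES LESS THAN (1535039999) ENGINE = InnoDB);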

3), topic_count

  CREATE TABLE `topic_count` (
  `host` varchar() NOT NULL DEFAULT '',
  `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `report` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  `component` varchar() NOT NULL DEFAULT '',
  `topic` varchar() NOT NULL DEFAULT '',
  `in_num` bigint() DEFAULT NULL,
  `out_num` bigint() DEFAULT NULL,
  PRIMARY KEY (`host`,`time`,`topic`,`component`),
  KEY `component` (`component`,`topic`,`time`),
  KEY `idx_topic` (`topic`) USING BTREE,
  KEY `idx_time_topic` (`time`,`topic`) USING BTREE,
  KEY `idx_compnent` (`component`) USING BTREE
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8

4), topic_dict

  CREATE TABLE `topic_dict` (
  `id` bigint() NOT NULL AUTO_INCREMENT COMMENT 'primary key',
  `model` varchar() DEFAULT NULL COMMENT 'mode',
  `component` varchar() DEFAULT NULL COMMENT 'component',
  `component_name` varchar() DEFAULT NULL COMMENT 'component name',
  `component_type` varchar() DEFAULT NULL COMMENT 'component type',
  `topic` varchar() DEFAULT NULL COMMENT 'topic',
  `topic_type` varchar() DEFAULT NULL COMMENT 'topic type',
  `status` varchar() DEFAULT 'ON' COMMENT 'status',
  PRIMARY KEY (`id`),
  KEY `component` (`component`,`topic`),
  KEY `idx_topic` (`topic`) USING BTREE
  ) ENGINE=InnoDB AUTO_INCREMENT= DEFAULT CHARSET=utf8
