A First Look at Spark SQL: Configuration and Usage
1. Environment
- OS: Red Hat Enterprise Linux Server release 6.4 (Santiago)
- Hadoop: 2.4.1
- Hive: 0.11.0
- JDK: 1.7.0_60
- Spark: 1.1.0 (with Spark SQL built in)
- Scala: 2.11.2
2. Spark Cluster Layout
- Account: ebupt
- Master: eb174
- Slaves: eb174, eb175, eb176 (see the slaves file sketch below)
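For a standalone deployment like this one, the worker list normally lives in $SPARK_HOME/conf/slaves. A minimal sketch matching the layout above — the file name and one-host-per-line format are standard Spark; only the host names are this cluster's:

```bash
# $SPARK_HOME/conf/slaves -- one worker host per line;
# eb174 serves as both master and worker here
eb174
eb175
eb176
```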
3. A Brief History of Spark SQL
Spark 1.1.0 was released on September 11, 2014. Spark SQL first appeared in Spark 1.0, and Spark SQL and MLlib are where Spark 1.1.0 changed the most; see the release notes for details.
Spark SQL's predecessor is Shark. Because of Shark's own shortcomings, Reynold Xin announced on June 1, 2014 that development of Shark would stop. Spark SQL was rewritten from scratch: it abandons the old Shark code base while keeping some of Shark's strengths, such as in-memory columnar storage and Hive compatibility.
4. Configuration
- Installation and configuration are the same as for Spark 0.9.1 (see the earlier post: Spark、Shark集群安装部署及遇到的问题解决).
- Copy $HIVE_HOME/conf/hive-site.xml into the $SPARK_HOME/conf directory.
- Copy $HADOOP_HOME/etc/hadoop/hdfs-site.xml into the $SPARK_HOME/conf directory (both copy steps are sketched below).
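A minimal sketch of those two copy steps, assuming $HIVE_HOME, $HADOOP_HOME, and $SPARK_HOME point at the installations above:

```bash
# hive-site.xml lets Spark SQL find the Hive metastore;
# hdfs-site.xml lets it resolve the HDFS nameservice (see problem 1 below)
cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/
cp $HADOOP_HOME/etc/hadoop/hdfs-site.xml $SPARK_HOME/conf/
```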
5. Running
- Start the Spark cluster.
- Start the Spark SQL client: ./spark/bin/spark-sql --master spark://eb174:7077 --executor-memory 3g
- Run SQL against a Hive table: spark-sql> select count(*) from test.t1;
14/10/08 20:46:04 INFO ParseDriver: Parsing command: select count(*) from test.t1
14/10/08 20:46:05 INFO ParseDriver: Parse Completed
14/10/08 20:46:05 INFO metastore: Trying to connect to metastore with URI thrift://eb170:9083
14/10/08 20:46:05 INFO metastore: Waiting 1 seconds before next connection attempt.
14/10/08 20:46:06 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@eb174:55408/user/Executor#1282322316] with ID 2
14/10/08 20:46:06 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@eb176:56138/user/Executor#-264112470] with ID 0
14/10/08 20:46:06 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@eb175:43791/user/Executor#-996481867] with ID 1
14/10/08 20:46:06 INFO BlockManagerMasterActor: Registering block manager eb174:54967 with 265.4 MB RAM
14/10/08 20:46:06 INFO BlockManagerMasterActor: Registering block manager eb176:60783 with 265.4 MB RAM
14/10/08 20:46:06 INFO BlockManagerMasterActor: Registering block manager eb175:35197 with 265.4 MB RAM
14/10/08 20:46:06 INFO metastore: Connected to metastore.
14/10/08 20:46:07 INFO deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/10/08 20:46:07 INFO MemoryStore: ensureFreeSpace(406982) called with curMem=0, maxMem=278302556
14/10/08 20:46:07 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 397.4 KB, free 265.0 MB)
14/10/08 20:46:07 INFO MemoryStore: ensureFreeSpace(25198) called with curMem=406982, maxMem=278302556
14/10/08 20:46:07 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 24.6 KB, free 265.0 MB)
14/10/08 20:46:07 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on eb174:49971 (size: 24.6 KB, free: 265.4 MB)
14/10/08 20:46:07 INFO BlockManagerMaster: Updated info of block broadcast_0_piece0
14/10/08 20:46:07 INFO SparkContext: Starting job: collect at HiveContext.scala:415
14/10/08 20:46:08 INFO FileInputFormat: Total input paths to process : 1
14/10/08 20:46:08 INFO DAGScheduler: Registering RDD 5 (mapPartitions at Exchange.scala:86)
14/10/08 20:46:08 INFO DAGScheduler: Got job 0 (collect at HiveContext.scala:415) with 1 output partitions (allowLocal=false)
14/10/08 20:46:08 INFO DAGScheduler: Final stage: Stage 0(collect at HiveContext.scala:415)
14/10/08 20:46:08 INFO DAGScheduler: Parents of final stage: List(Stage 1)
14/10/08 20:46:08 INFO DAGScheduler: Missing parents: List(Stage 1)
14/10/08 20:46:08 INFO DAGScheduler: Submitting Stage 1 (MapPartitionsRDD[5] at mapPartitions at Exchange.scala:86), which has no missing parents
14/10/08 20:46:08 INFO MemoryStore: ensureFreeSpace(11000) called with curMem=432180, maxMem=278302556
14/10/08 20:46:08 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 10.7 KB, free 265.0 MB)
14/10/08 20:46:08 INFO MemoryStore: ensureFreeSpace(5567) called with curMem=443180, maxMem=278302556
14/10/08 20:46:08 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 5.4 KB, free 265.0 MB)
14/10/08 20:46:08 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on eb174:49971 (size: 5.4 KB, free: 265.4 MB)
14/10/08 20:46:08 INFO BlockManagerMaster: Updated info of block broadcast_1_piece0
14/10/08 20:46:08 INFO DAGScheduler: Submitting 2 missing tasks from Stage 1 (MapPartitionsRDD[5] at mapPartitions at Exchange.scala:86)
14/10/08 20:46:08 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
14/10/08 20:46:08 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 0, eb174, NODE_LOCAL, 1199 bytes)
14/10/08 20:46:08 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 1, eb176, NODE_LOCAL, 1199 bytes)
14/10/08 20:46:08 INFO ConnectionManager: Accepted connection from [eb176/10.1.69.176:49289]
14/10/08 20:46:08 INFO ConnectionManager: Accepted connection from [eb174/10.1.69.174:33401]
14/10/08 20:46:08 INFO SendingConnection: Initiating connection to [eb176/10.1.69.176:60783]
14/10/08 20:46:08 INFO SendingConnection: Initiating connection to [eb174/10.1.69.174:54967]
14/10/08 20:46:08 INFO SendingConnection: Connected to [eb176/10.1.69.176:60783], 1 messages pending
14/10/08 20:46:08 INFO SendingConnection: Connected to [eb174/10.1.69.174:54967], 1 messages pending
14/10/08 20:46:08 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on eb176:60783 (size: 5.4 KB, free: 265.4 MB)
14/10/08 20:46:08 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on eb174:54967 (size: 5.4 KB, free: 265.4 MB)
14/10/08 20:46:08 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on eb174:54967 (size: 24.6 KB, free: 265.4 MB)
14/10/08 20:46:08 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on eb176:60783 (size: 24.6 KB, free: 265.4 MB)
14/10/08 20:46:10 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 1) in 2657 ms on eb176 (1/2)
14/10/08 20:46:10 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 0) in 2675 ms on eb174 (2/2)
14/10/08 20:46:10 INFO DAGScheduler: Stage 1 (mapPartitions at Exchange.scala:86) finished in 2.680 s
14/10/08 20:46:10 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
14/10/08 20:46:10 INFO DAGScheduler: looking for newly runnable stages
14/10/08 20:46:10 INFO DAGScheduler: running: Set()
14/10/08 20:46:10 INFO DAGScheduler: waiting: Set(Stage 0)
14/10/08 20:46:10 INFO DAGScheduler: failed: Set()
14/10/08 20:46:10 INFO DAGScheduler: Missing parents for Stage 0: List()
14/10/08 20:46:10 INFO DAGScheduler: Submitting Stage 0 (MappedRDD[9] at map at HiveContext.scala:360), which is now runnable
14/10/08 20:46:10 INFO MemoryStore: ensureFreeSpace(9752) called with curMem=448747, maxMem=278302556
14/10/08 20:46:10 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 9.5 KB, free 265.0 MB)
14/10/08 20:46:10 INFO MemoryStore: ensureFreeSpace(4941) called with curMem=458499, maxMem=278302556
14/10/08 20:46:10 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 4.8 KB, free 265.0 MB)
14/10/08 20:46:10 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on eb174:49971 (size: 4.8 KB, free: 265.4 MB)
14/10/08 20:46:10 INFO BlockManagerMaster: Updated info of block broadcast_2_piece0
14/10/08 20:46:11 INFO DAGScheduler: Submitting 1 missing tasks from Stage 0 (MappedRDD[9] at map at HiveContext.scala:360)
14/10/08 20:46:11 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
14/10/08 20:46:11 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 2, eb175, PROCESS_LOCAL, 948 bytes)
14/10/08 20:46:11 INFO StatsReportListener: Finished stage: org.apache.spark.scheduler.StageInfo@513f39c
14/10/08 20:46:11 INFO StatsReportListener: task runtime:(count: 2, mean: 2666.000000, stdev: 9.000000, max: 2675.000000, min: 2657.000000)
14/10/08 20:46:11 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:11 INFO StatsReportListener: 2.7 s 2.7 s 2.7 s 2.7 s 2.7 s 2.7 s 2.7 s 2.7 s 2.7 s
14/10/08 20:46:11 INFO StatsReportListener: shuffle bytes written:(count: 2, mean: 50.000000, stdev: 0.000000, max: 50.000000, min: 50.000000)
14/10/08 20:46:11 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:11 INFO StatsReportListener: 50.0 B 50.0 B 50.0 B 50.0 B 50.0 B 50.0 B 50.0 B 50.0 B 50.0 B
14/10/08 20:46:11 INFO StatsReportListener: task result size:(count: 2, mean: 1848.000000, stdev: 0.000000, max: 1848.000000, min: 1848.000000)
14/10/08 20:46:11 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:11 INFO StatsReportListener: 1848.0 B 1848.0 B 1848.0 B 1848.0 B 1848.0 B 1848.0 B 1848.0 B 1848.0 B 1848.0 B
14/10/08 20:46:11 INFO StatsReportListener: executor (non-fetch) time pct: (count: 2, mean: 86.309428, stdev: 0.103820, max: 86.413248, min: 86.205607)
14/10/08 20:46:11 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:11 INFO StatsReportListener: 86 % 86 % 86 % 86 % 86 % 86 % 86 % 86 % 86 %
14/10/08 20:46:11 INFO StatsReportListener: other time pct: (count: 2, mean: 13.690572, stdev: 0.103820, max: 13.794393, min: 13.586752)
14/10/08 20:46:11 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:11 INFO StatsReportListener: 14 % 14 % 14 % 14 % 14 % 14 % 14 % 14 % 14 %
14/10/08 20:46:11 INFO ConnectionManager: Accepted connection from [eb175/10.1.69.175:36187]
14/10/08 20:46:11 INFO SendingConnection: Initiating connection to [eb175/10.1.69.175:35197]
14/10/08 20:46:11 INFO SendingConnection: Connected to [eb175/10.1.69.175:35197], 1 messages pending
14/10/08 20:46:11 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on eb175:35197 (size: 4.8 KB, free: 265.4 MB)
14/10/08 20:46:12 INFO MapOutputTrackerMasterActor: Asked to send map output locations for shuffle 0 to sparkExecutor@eb175:58085
14/10/08 20:46:12 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 0 is 140 bytes
14/10/08 20:46:12 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 2) in 1428 ms on eb175 (1/1)
14/10/08 20:46:12 INFO DAGScheduler: Stage 0 (collect at HiveContext.scala:415) finished in 1.432 s
14/10/08 20:46:12 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
14/10/08 20:46:12 INFO StatsReportListener: Finished stage: org.apache.spark.scheduler.StageInfo@6e8030b0
14/10/08 20:46:12 INFO StatsReportListener: task runtime:(count: 1, mean: 1428.000000, stdev: 0.000000, max: 1428.000000, min: 1428.000000)
14/10/08 20:46:12 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:12 INFO StatsReportListener: 1.4 s 1.4 s 1.4 s 1.4 s 1.4 s 1.4 s 1.4 s 1.4 s 1.4 s
14/10/08 20:46:12 INFO StatsReportListener: fetch wait time:(count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
14/10/08 20:46:12 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:12 INFO StatsReportListener: 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms
14/10/08 20:46:12 INFO StatsReportListener: remote bytes read:(count: 1, mean: 100.000000, stdev: 0.000000, max: 100.000000, min: 100.000000)
14/10/08 20:46:12 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:12 INFO StatsReportListener: 100.0 B 100.0 B 100.0 B 100.0 B 100.0 B 100.0 B 100.0 B 100.0 B 100.0 B
14/10/08 20:46:12 INFO SparkContext: Job finished: collect at HiveContext.scala:415, took 4.787407158 s
14/10/08 20:46:12 INFO StatsReportListener: task result size:(count: 1, mean: 1072.000000, stdev: 0.000000, max: 1072.000000, min: 1072.000000)
14/10/08 20:46:12 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:12 INFO StatsReportListener: 1072.0 B 1072.0 B 1072.0 B 1072.0 B 1072.0 B 1072.0 B 1072.0 B 1072.0 B 1072.0 B
14/10/08 20:46:12 INFO StatsReportListener: executor (non-fetch) time pct: (count: 1, mean: 80.252101, stdev: 0.000000, max: 80.252101, min: 80.252101)
14/10/08 20:46:12 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:12 INFO StatsReportListener: 80 % 80 % 80 % 80 % 80 % 80 % 80 % 80 % 80 %
14/10/08 20:46:12 INFO StatsReportListener: fetch wait time pct: (count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
14/10/08 20:46:12 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:12 INFO StatsReportListener: 0 % 0 % 0 % 0 % 0 % 0 % 0 % 0 % 0 %
14/10/08 20:46:12 INFO StatsReportListener: other time pct: (count: 1, mean: 19.747899, stdev: 0.000000, max: 19.747899, min: 19.747899)
14/10/08 20:46:12 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:12 INFO StatsReportListener: 20 % 20 % 20 % 20 % 20 % 20 % 20 % 20 % 20 %
5078
Time taken: 7.581 seconds
Notes:
- If no master is specified when spark-sql is started, it runs in local mode; the master can be either a standalone master address or yarn.
- When the master is set to yarn (spark-sql --master yarn), the whole job can be monitored through the page at http://$master:8088.
- If spark.master spark://eb174:7077 is configured in $SPARK_HOME/conf/spark-defaults.conf, spark-sql runs on the standalone cluster even when no master is given on the command line (see the launch sketch below).
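The launch variants described above, collected as a sketch (host names follow this cluster; adjust to yours):

```bash
# Local mode: no --master given
./spark/bin/spark-sql

# Standalone cluster
./spark/bin/spark-sql --master spark://eb174:7077 --executor-memory 3g

# YARN: monitor the job at http://$master:8088
./spark/bin/spark-sql --master yarn

# Or set a default once in $SPARK_HOME/conf/spark-defaults.conf,
# after which spark-sql needs no --master flag:
# spark.master    spark://eb174:7077
```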
6. Problems Encountered and Solutions
① Running a SQL statement in the spark-sql CLI fails with an unresolvable UnknownHostException: ebcloud (the value of Hadoop's dfs.nameservices):
14/10/08 20:42:44 ERROR CliDriver: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4, eb174): java.lang.IllegalArgumentException: java.net.UnknownHostException: ebcloud
org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:240)
org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:144)
org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:579)
org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:524)
org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
Cause: Spark cannot resolve the HDFS nameservice address. Fix: copy Hadoop's HDFS configuration file hdfs-site.xml into the $SPARK_HOME/conf directory.
② The job starts but then repeatedly fails to connect to the NameNode:
14/10/08 20:26:46 INFO BlockManagerMaster: Updated info of block broadcast_0_piece0
14/10/08 20:26:46 INFO SparkContext: Starting job: collect at HiveContext.scala:415
14/10/08 20:29:19 WARN RetryInvocationHandler: Exception while invoking class org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo over eb171/10.1.69.171:8020. Not retrying because failovers (15) exceeded maximum allowed (15)
java.net.ConnectException: Call From eb174/10.1.69.174 to eb171:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1414)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
Cause: the HDFS connection failed because hdfs-site.xml had not been synced to all of the slave nodes (a sync sketch follows).
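A hedged sketch of pushing the configuration out to the slaves (it assumes the ebupt account has passwordless ssh and that $SPARK_HOME is the same path on every host):

```bash
# Sync hdfs-site.xml (and hive-site.xml, for good measure) to each slave;
# eb174 is the local master, so only eb175 and eb176 need a copy
for host in eb175 eb176; do
  scp $SPARK_HOME/conf/hdfs-site.xml $SPARK_HOME/conf/hive-site.xml \
      ebupt@$host:$SPARK_HOME/conf/
done
```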