z_activeagent
z_weekstore
z_wstest
zz_monthstore
row(s) in 0.5240 seconds

=> ["KYLIN_02YJ3NJ7PW", "KYLIN_09YWHIEKLK", "KYLIN_0MGAS628J4", "KYLIN_14TNZQ6DAE", "KYLIN_18RFD2M9KD", "KYLIN_2XRK8PNLEQ", "KYLIN_3LTL19CGNB", "KYLIN_3ZKJLGTHF8", "KYLIN_42XI0TZP1C", "KYLIN_4E5NFSJ1TT", "KYLIN_58UKG45HEY", "KYLIN_5G0HVH6WO5", "KYLIN_5QPRJFEZBF", "KYLIN_6FQUGB2GG3", "KYLIN_7SUORPO9V7", "KYLIN_7U9AFHQ366", "KYLIN_8HZPENNGB7", "KYLIN_8QHHG5GOC2", "KYLIN_A6GK4REWOD", "KYLIN_B8DDOOO8IV", "KYLIN_DZ79IEFUEY", "KYLIN_ETYEUFI2WO", "KYLIN_FBIWHPCOHY", "KYLIN_FTW1CM9P5H", "KYLIN_G2NWQRQAFV", "KYLIN_G6F41QVAI6", "KYLIN_ICBULW0MPB", "KYLIN_JT60DBVXUI", "KYLIN_KQWB6I426I", "KYLIN_L4KS1QHSDH", "KYLIN_PI3B0F23NU", "KYLIN_PQOMA1EHZP", "KYLIN_QJGQJYRATQ", "KYLIN_QTIZGJEGBW", "KYLIN_S3IK6XW0SZ", "KYLIN_U6LWJPGXE5", "KYLIN_UBI758YA36", "KYLIN_UNN1IGQT4C", "KYLIN_VCA1XQU0JX", "KYLIN_VIM0C9L5WE", "KYLIN_YR4QE1XYAK", "KYLIN_YYIJGRXIBU", "KYLIN_Z4Y323QGUL", "KYLIN_ZF7D6S12IO", "KYLIN_ZU7XCILCF7", "addrent_info", "comm_info", "flushrent_info", "flushsale_info", "hot_info", "house:renthouse_test", "kylin_metadata", "kylin_metadata_acl", "kylin_metadata_user", "promotion_info", "rank_count_rent", "rank_count_sale", "salehousedeal", "sitehot_info", "stork_info", "storkrent_info", "storksale_info", "t_book", "test", "testinsert", "z_activeagent", "z_weekstore", "z_wstest", "zz_monthstore"]
hbase(main)::> count 'z_activeagent'

ERROR: HRegionInfo was null in z_activeagent, row=keyvalues={z_activeagent,,.d36b716be958e98c9ae41bd4d7a46caa./info:seqnumDuringOpen//Put/vlen=/seqid=, z_activeagent,,.d36b716be958e98c9ae41bd4d7a46caa./info:server//Put/vlen=/seqid=, z_activeagent,,.d36b716be958e98c9ae41bd4d7a46caa./info:serverstartcode//Put/vlen=/seqid=}

Here is some help for this command:
Count the number of rows in a table. Return value is the number of rows.
This operation may take a LONG time (Run '$HADOOP_HOME/bin/hadoop jar
hbase.jar rowcount' to run a counting mapreduce job). Current count is shown
every rows by default. Count interval may be optionally specified. Scan
caching is enabled on count scans by default. Default cache size is rows.
If your rows are small in size, you may want to increase this
parameter. Examples:
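The help text above points at two ways to count a large table without tying up the shell: raise the count interval and scanner cache, or run the bundled RowCounter MapReduce job. A hedged sketch; the INTERVAL/CACHE values are illustrative, and the `command -v` guard exists only so the snippet is safe to paste on a machine without a cluster:

```shell
# In the HBase shell, raise the progress interval and scanner cache
# (values here are illustrative, not defaults):
#   count 'z_activeagent', INTERVAL => 100000, CACHE => 10000
# From bash, the distributed alternative is the RowCounter MapReduce job:
if command -v hbase >/dev/null 2>&1; then
  hbase org.apache.hadoop.hbase.mapreduce.RowCounter 'z_activeagent'
  status=$?
else
  echo "hbase CLI not found; this sketch needs a cluster node"
  status=0
fi
```

Neither approach can help here, of course: the meta row for z_activeagent is broken, so any scan-based job against the table fails the same way.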

Neither scan nor count works, and even copying the table with CopyTable fails:

[root@master109 ~]# hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=z_activeagent1 z_activeagent
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/hbase/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop2./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
-- ::, WARN [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
-- ::, INFO [main] Configuration.deprecation: dfs.permissions is deprecated. Instead, use dfs.permissions.enabled
-- ::, INFO [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
-- ::, WARN [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
-- ::, INFO [main] Configuration.deprecation: dfs.permissions is deprecated. Instead, use dfs.permissions.enabled
-- ::, INFO [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
-- ::, INFO [main] client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
-- ::, INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x706a4369 connecting to ZooKeeper ensemble=master109:,spider1:,node110:,node111:,node112:
-- ::, INFO [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.-cdh5.7.0--, built on // : GMT
-- ::, INFO [main] zookeeper.ZooKeeper: Client environment:host.name=master109
-- ::, INFO [main] zookeeper.ZooKeeper: Client environment:java.version=1.7.0_80
-- ::, INFO [main] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
-- ::, INFO [main] zookeeper.ZooKeeper: Client environment:java.home=/opt/hadoop/jdk1./jre
...
-- ::, INFO [main] mapreduce.Job: Task Id : attempt_1507608682095_49226_m_000001_0, Status : FAILED
Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed actions: z_activeagent1: times,
at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:)
at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$(AsyncProcess.java:)
at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.mutate(BufferedMutatorImpl.java:)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.mutate(BufferedMutatorImpl.java:)
at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:)
at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:)
at org.apache.hadoop.hbase.mapreduce.Import$Importer.processKV(Import.java:)
at org.apache.hadoop.hbase.mapreduce.Import$Importer.writeResult(Import.java:)
at org.apache.hadoop.hbase.mapreduce.Import$Importer.map(Import.java:)
at org.apache.hadoop.hbase.mapreduce.Import$Importer.map(Import.java:)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:)
at org.apache.hadoop.mapred.YarnChild$.run(YarnChild.java:)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:)
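The RetriesExhaustedWithDetailsException at the top of that trace is the HBase client giving up only after exhausting its retry budget, which is why the shell appears to "keep reconnecting" for a long while before failing. A minimal sketch of that retry-with-exponential-backoff pattern; the `fake_put` function and the retry/backoff numbers are made up for illustration, while the real client takes them from settings such as `hbase.client.retries.number` and `hbase.client.pause`:

```shell
attempts=0
max_retries=5
backoff=1
fake_put() { return 1; }   # always fails, like a put against a broken region

until fake_put; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge "$max_retries" ]; then
    # mirrors the client throwing RetriesExhaustedWithDetailsException
    echo "RetriesExhausted after $attempts attempts"
    break
  fi
  echo "attempt $attempts failed; backing off ${backoff}s"
  # sleep "$backoff"   # commented out to keep the sketch instant
  backoff=$((backoff * 2))
done
```

Each failed attempt doubles the pause, so a generous retry budget can keep a doomed job "alive" for many minutes before it finally errors out.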

With no other option, I turned to the hbck tool:

[root@master109 ~]# cd /opt/hadoop/hbase
[root@master109 hbase]# ls
bin conf hbase-annotations hbase-client hbase-external-blockcache hbase-it hbase-protocol hbase-server hbase-testing-util LEGAL logs README.txt
CHANGES.txt dev-support hbase-assembly hbase-common hbase-hadoop2-compat hbase-prefix-tree hbase-resource-bundle hbase-shaded hbase-thrift lib NOTICE.txt src
cloudera docs hbase-checkstyle hbase-examples hbase-hadoop-compat hbase-procedure hbase-rest hbase-shell hbase-webapps LICENSE.txt pom.xml
[root@master109 hbase]# cd bin/
[root@master109 bin]# ls
draining_servers.rb hbase hbase-common.sh hbase-daemon.sh hirb.rb master-backup.sh region_status.rb shutdown_regionserver.rb stop-hbase.cmd thread-pool.rb
get-active-master.rb hbase-cleanup.sh hbase-config.cmd hbase-daemons.sh local-master-backup.sh region_mover.rb replication start-hbase.cmd stop-hbase.sh zookeepers.sh
graceful_stop.sh hbase.cmd hbase-config.sh hbase-jruby local-regionservers.sh regionservers.sh rolling-restart.sh start-hbase.sh test
[root@master109 bin]# hbase hbck
...
Table KYLIN_0MGAS628J4 is okay.
Number of regions:
Deployed on: node111,,
Table KYLIN_QJGQJYRATQ is okay.
Number of regions:
Deployed on: node112,,
Table zz_monthstore is okay.
Number of regions:
Deployed on: node110,, node111,,
Table KYLIN_S3IK6XW0SZ is okay.
Number of regions:
Deployed on: node110,,
Table flushsale_info is okay.
Number of regions:
Deployed on: node111,,
Table stork_info is okay.
Number of regions:
Deployed on: node110,,
Table storksale_info is okay.
Number of regions:
Deployed on: node111,,
Table KYLIN_PI3B0F23NU is okay.
Number of regions:
Deployed on: node112,,
Table z_activeagent is inconsistent.
Number of regions: 12
Deployed on: node110,60020,1514865702994 node111,60020,1514865706402 node112,60020,1514865710244
Table rank_count_rent is okay.
Number of regions:
Deployed on: node111,,
Table rank_count_sale is okay.
Number of regions:
Deployed on: node111,,
Table KYLIN_ICBULW0MPB is okay.
Number of regions:
Deployed on: node112,,
Table KYLIN_FTW1CM9P5H is okay.
Number of regions:
Deployed on: node112,,
Table kylin_metadata is okay.
Number of regions:
Deployed on: node110,,
Table kylin_metadata_user is okay.
Number of regions:
Deployed on: node112,,
Table KYLIN_18RFD2M9KD is okay.
Number of regions:
Deployed on: node112,,
Table KYLIN_5QPRJFEZBF is okay.
Number of regions:
Deployed on: node111,,
Table KYLIN_09YWHIEKLK is okay.
Number of regions:
Deployed on: node110,,
Table KYLIN_ETYEUFI2WO is okay.
Number of regions:
Deployed on: node111,,
Table KYLIN_58UKG45HEY is okay.
Number of regions:
Deployed on: node111,,
Table salehousedeal is okay.
Number of regions:
Deployed on: node110,, node111,, node112,,
inconsistencies detected.
Status: INCONSISTENT
-- ::, INFO [main] client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
-- ::, INFO [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x560adb43048004b
-- ::, INFO [main] zookeeper.ZooKeeper: Session: 0x560adb43048004b closed
-- ::, INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down

hbck reports exactly one inconsistency, and the inconsistent table is the very one I was operating on: z_activeagent.
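The hbck report is long, so it helps to filter it down to just the broken tables. A small sketch, with a printf block standing in for the real report; in practice you would pipe `hbase hbck` into the same awk:

```shell
# Print only the tables that hbck flags as inconsistent.
broken="$(printf '%s\n' \
  'Table KYLIN_PI3B0F23NU is okay.' \
  'Table z_activeagent is inconsistent.' \
  'Table rank_count_rent is okay.' |
  awk '/is inconsistent\.$/ { print $2 }')"
echo "$broken"
```

Against a live cluster, `hbase hbck | awk '/is inconsistent\.$/ { print $2 }'` does the same filtering on the real output.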

Start the repair:

[root@master109 bin]# hbase hbck -fixMeta
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/hbase/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop2./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
-- ::, INFO [main] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
HBaseFsck command line options: -fixMeta
-- ::, WARN [main] util.HBaseFsck: Got AccessDeniedException when preCheckPermission
org.apache.hadoop.hbase.security.AccessDeniedException: Permission denied: action=WRITE path=hdfs://gagcluster/hbase/MasterProcWALs user=root
at org.apache.hadoop.hbase.util.FSUtils.checkAccess(FSUtils.java:)
at org.apache.hadoop.hbase.util.HBaseFsck.preCheckPermission(HBaseFsck.java:)
at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:)
at org.apache.hadoop.hbase.util.HBaseFsck$HBaseFsckTool.run(HBaseFsck.java:)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:)
at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:)
Current user root does not have write perms to hdfs://gagcluster/hbase/MasterProcWALs. Please rerun hbck as hdfs user hadoop
[root@master109 bin]# su hadoop
[hadoop@master109 bin]$ hbase hbck
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/hbase/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop2./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
-- ::, INFO [main] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
...
Table z_activeagent is inconsistent.
Number of regions:
Deployed on: node110,, node111,, node112,,
Table rank_count_rent is okay.
Number of regions:
Deployed on: node111,,
Table rank_count_sale is okay.
Number of regions:
Deployed on: node111,,
Table KYLIN_ICBULW0MPB is okay.
Number of regions:
Deployed on: node112,,
Table KYLIN_FTW1CM9P5H is okay.
Number of regions:
Deployed on: node112,,
Table kylin_metadata is okay.
Number of regions:
Deployed on: node110,,
Table kylin_metadata_user is okay.
Number of regions:
Deployed on: node112,,
Table KYLIN_18RFD2M9KD is okay.
Number of regions:
Deployed on: node112,,
Table KYLIN_5QPRJFEZBF is okay.
Number of regions:
Deployed on: node111,,
Table KYLIN_09YWHIEKLK is okay.
Number of regions:
Deployed on: node110,,
Table KYLIN_ETYEUFI2WO is okay.
Number of regions:
Deployed on: node111,,
Table KYLIN_58UKG45HEY is okay.
Number of regions:
Deployed on: node111,,
Table salehousedeal is okay.
Number of regions:
Deployed on: node110,, node111,, node112,,
inconsistencies detected.
Status: INCONSISTENT
-- ::, INFO [main] client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
-- ::, INFO [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x460b03fafb6004b
-- ::, INFO [main] zookeeper.ZooKeeper: Session: 0x460b03fafb6004b closed
-- ::, INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
[hadoop@master109 bin]$ hbase hbck -fixMeta
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/hbase/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop2./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
-- ::, INFO [main] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
HBaseFsck command line options: -fixMeta
-- ::, WARN [main] util.HBaseFsck: Got AccessDeniedException when preCheckPermission
org.apache.hadoop.hbase.security.AccessDeniedException: Permission denied: action=WRITE path=hdfs://gagcluster/hbase/.hbase-snapshot user=hadoop
at org.apache.hadoop.hbase.util.FSUtils.checkAccess(FSUtils.java:)
at org.apache.hadoop.hbase.util.HBaseFsck.preCheckPermission(HBaseFsck.java:)
at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:)
at org.apache.hadoop.hbase.util.HBaseFsck$HBaseFsckTool.run(HBaseFsck.java:)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:)
at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:)
Current user hadoop does not have write perms to hdfs://gagcluster/hbase/.hbase-snapshot. Please rerun hbck as hdfs user root
[hadoop@master109 bin]$ hadoop fs -chown -R hadoop /hbase

After running into one permission mismatch after another (hbck first insisted on user hadoop, then, running as hadoop, insisted on root), I bit the bullet and chown'd the entire /hbase tree to the hadoop user.

Run the repair again:

[hadoop@master109 bin]$ hbase hbck -fixAssignments

...

Table z_activeagent is okay.
Number of regions:
Deployed on: node110,, node111,, node112,,
Status: OK
-- ::, INFO [main] client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
-- ::, INFO [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1608aaaedc50081
-- ::, INFO [main] zookeeper.ZooKeeper: Session: 0x1608aaaedc50081 closed
-- ::, INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
[hadoop@master109 bin]$ hbase hbck -fixMeta

One round of region reassignment, and the table is healthy again. Done!
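To recap, the sequence that worked here was: report, fix the region assignments, then fix hbase:meta, all run as a user with write access to the HBase directories on HDFS. Sketched below with a guard so it is safe to paste on a machine without a cluster; which of -fixAssignments or -fixMeta you actually need (and in what order) depends on what the plain hbck report says:

```shell
if command -v hbase >/dev/null 2>&1; then
  hbase hbck                   # report only: list inconsistencies
  hbase hbck -fixAssignments   # repair unassigned or wrongly assigned regions
  hbase hbck -fixMeta          # repair hbase:meta entries against HDFS
  status=$?
else
  echo "hbase CLI not found; run this on a cluster node"
  status=0
fi
```

Re-run the plain `hbase hbck` afterwards and confirm it ends with `Status: OK` before putting the table back into service.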
