PG CREATEINDEX CONCURRENTLY

## The official documentation

From the PostgreSQL 9.1 documentation:

Creating an index can interfere with regular operation of a database. Normally PostgreSQL locks the table to be indexed against writes and performs the entire index build with a single scan of the table. Other transactions can still read the table, but if they try to insert, update, or delete rows in the table they will block until the index build is finished. This could have a severe effect if the system is a live production database. Very large tables can take many hours to be indexed, and even for smaller tables, an index build can lock out writers for periods that are unacceptably long for a production system.

PostgreSQL supports building indexes without locking out writes. This method is invoked by specifying the CONCURRENTLY option of CREATE INDEX. When this option is used, PostgreSQL must perform two scans of the table, and in addition it must wait for all existing transactions that could potentially use the index to terminate. Thus this method requires more total work than a standard index build and takes significantly longer to complete. However, since it allows normal operations to continue while the index is built, this method is useful for adding new indexes in a production environment. Of course, the extra CPU and I/O load imposed by the index creation might slow other operations.

A normal CREATE INDEX locks the table against all writes; the workaround is CREATE INDEX CONCURRENTLY. This is of course slower: PG performs two table scans and waits for all existing transactions to finish.
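As a minimal sketch of the two forms (the table and column names here are made up for illustration):

```sql
-- Blocks INSERT/UPDATE/DELETE on accounts until the build finishes:
CREATE INDEX accounts_owner_idx ON accounts (owner_id);

-- Lets writes proceed while the index is built, at the cost of two
-- table scans and waiting out concurrent transactions:
CREATE INDEX CONCURRENTLY accounts_owner_idx ON accounts (owner_id);
```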

In a concurrent index build, the index is actually entered into the system catalogs in one transaction, then two table scans occur in two more transactions. Any transaction active when the second table scan starts can block concurrent index creation until it completes, even transactions that only reference the table after the second table scan starts. Concurrent index creation serially waits for each old transaction to complete using the method outlined in Section 45.56.

During a concurrent build, the index is first entered into the system catalogs, and then two more table scans are started. Any transaction active when the second table scan starts can block the concurrent index build until that transaction completes, even one that merely references the table. Concurrent index creation waits for each of these old transactions to finish, using the method of Section 45.56. (This essentially relies on the lock machinery surfaced in pg_locks: "The view pg_locks provides access to information about the locks held by open transactions within the database server.")
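For reference, a query against pg_locks of the kind this relies on might look like the following sketch (the table name is hypothetical):

```sql
-- Who currently holds (or is waiting for) locks on the table being indexed?
SELECT locktype, relation::regclass AS table_name, mode, granted, pid
FROM pg_locks
WHERE relation = 'accounts'::regclass;
```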

If a problem arises while scanning the table, such as a uniqueness violation in a unique index, the CREATE INDEX command will fail but leave behind an "invalid" index. This index will be ignored for querying purposes because it might be incomplete; however it will still consume update overhead. The psql \d command will report such an index as INVALID:

```
postgres=# \d tab
      Table "public.tab"
 Column |  Type   | Modifiers
--------+---------+-----------
 col    | integer |
Indexes:
    "idx" btree (col) INVALID
```

The recommended recovery method in such cases is to drop the index and try again to perform CREATE INDEX CONCURRENTLY. (Another possibility is to rebuild the index with REINDEX. However, since REINDEX does not support concurrent builds, this option is unlikely to seem attractive.)

If the build fails, an invalid index is left behind. Queries cannot use it, but it still has to be maintained on every update. The recommended fix is to drop it and run CREATE INDEX CONCURRENTLY again.
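Using the tab/col/idx names from the psql output above, the recovery would look like this:

```sql
DROP INDEX idx;                             -- discard the invalid index
CREATE INDEX CONCURRENTLY idx ON tab (col); -- and try again
```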

Another caveat when building a unique index concurrently is that the uniqueness constraint is already being enforced against other transactions when the second table scan begins. This means that constraint violations could be reported in other queries prior to the index becoming available for use, or even in cases where the index build eventually fails. Also, if a failure does occur in the second scan, the "invalid" index continues to enforce its uniqueness constraint afterwards.

One caveat: for a unique index, the uniqueness constraint is already enforced against other transactions once the second table scan begins. That means other transactions can hit constraint violations before the index becomes usable, even if the build eventually fails. Moreover, if the failure happens during the second scan, the invalid index continues to enforce its uniqueness constraint afterwards.
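A sketch of how this can surface (the session layout, index name, and values are made up):

```sql
-- Session 1: start a concurrent unique build
CREATE UNIQUE INDEX CONCURRENTLY tab_col_key ON tab (col);

-- Session 2, once the second scan has begun: this INSERT can already
-- fail with a unique_violation, even though the index is not yet
-- usable for queries -- and even if the build itself later fails.
INSERT INTO tab (col) VALUES (42);
```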

Concurrent builds of expression indexes and partial indexes are supported. Errors occurring in the evaluation of these expressions could cause behavior similar to that described above for unique constraint violations.

Expression and partial indexes can also be built concurrently, but they are subject to the same kind of problem described above for unique constraints.
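For example (hypothetical index names, on the same tab table as above):

```sql
-- Expression index, built concurrently:
CREATE INDEX CONCURRENTLY tab_abs_col ON tab (abs(col));

-- Partial index, built concurrently:
CREATE INDEX CONCURRENTLY tab_pos_col ON tab (col) WHERE col > 0;
```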

Regular index builds permit other regular index builds on the same table to occur in parallel, but only one concurrent index build can occur on a table at a time. In both cases, no other types of schema modification on the table are allowed meanwhile. Another difference is that a regular CREATE INDEX command can be performed within a transaction block, but CREATE INDEX CONCURRENTLY cannot.

Regular index builds permit other regular index builds on the same table in parallel; concurrent builds do not — only one concurrent build can run on a table at a time. Another difference: CREATE INDEX can run inside a transaction block, but CREATE INDEX CONCURRENTLY cannot.

These restrictions exist essentially because CREATE INDEX CONCURRENTLY manages its own transactions: it commits partway through its work.
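The transaction-block restriction is easy to verify (idx2 is a made-up name):

```sql
BEGIN;
CREATE INDEX CONCURRENTLY idx2 ON tab (col);
-- ERROR:  CREATE INDEX CONCURRENTLY cannot run inside a transaction block
ROLLBACK;
```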

## Code analysis

A normal CREATE INDEX flows through DefineIndex -> index_create -> index_build.

### index_build

```
/*
 * index_build - invoke access-method-specific index build procedure
 *
 * On entry, the index's catalog entries are valid, and its physical disk
 * file has been created but is empty.  We call the AM-specific build
 * procedure to fill in the index contents.  We then update the pg_class
 * entries of the index and heap relation as needed, using statistics
 * returned by ambuild as well as data passed by the caller.
 *
 * isprimary tells whether to mark the index as a primary-key index.
 * isreindex indicates we are recreating a previously-existing index.
 *
 * Note: when reindexing an existing index, isprimary can be false even if
 * the index is a PK; it's already properly marked and need not be re-marked.
 *
 * Note: before Postgres 8.2, the passed-in heap and index Relations
 * were automatically closed by this routine.  This is no longer the case.
 * The caller opened 'em, and the caller should close 'em.
 */
void
index_build(Relation heapRelation,
            Relation indexRelation,
            IndexInfo *indexInfo,
            bool isprimary,
            bool isreindex)
```
This function has about the fewest parameters of anything discussed here and is fairly simple. The important points are in the comment.
### index_create
This function does the actual work of CREATE INDEX:

1. Perform validation checks.
2. Build the catalog rows and insert them into pg_class, pg_index, pg_constraint, and other catalogs.
3. For a normal index, call index_build to physically build it; otherwise only the catalogs are updated.
### DefineIndex
A CREATE INDEX statement eventually winds its way into DefineIndex. This function takes many parameters, one of which, `bool concurrent`, controls whether this is a concurrent build. In fact most of the logic for concurrent builds lives in this function.
#### Validation
Nothing worth noting here for our purposes.
#### index create
After those checks:
```
/*
 * Make the catalog entries for the index, including constraints. Then, if
 * not skip_build || concurrent, actually build the index.
 */
indexRelationId =
    index_create(rel, indexRelationName, indexRelationId,
                 indexInfo, indexColNames,
                 accessMethodId, tablespaceId,
                 collationObjectId, classObjectId,
                 coloptions, reloptions, primary,
                 isconstraint, deferrable, initdeferred,
                 allowSystemTableMods,
                 skip_build || concurrent,
                 concurrent);
```
Note the second-to-last argument, `skip_build || concurrent`: for a concurrent build it tells index_create not to actually build the index yet.
Let's walk through the rest of the flow, reading the code and commenting as we go.
```
if (!concurrent)
{
    /* Close the heap and we're done, in the non-concurrent case */
    heap_close(rel, NoLock);
    return;
}
```
The normal case simply returns here; everything below is the concurrent path.
#### index concurrently create
```
/* save lockrelid and locktag for below, then close rel */
heaprelid = rel->rd_lockInfo.lockRelId;
SET_LOCKTAG_RELATION(heaplocktag, heaprelid.dbId, heaprelid.relId);
heap_close(rel, NoLock);
```
At this point the relation is closed (NoLock means the lock is kept, not released), but since the only lock held is ShareUpdateExclusiveLock, other transactions' reads and writes are not blocked.
```
/*
 * For a concurrent build, it's important to make the catalog entries
 * visible to other transactions before we start to build the index. That
 * will prevent them from making incompatible HOT updates. The new index
 * will be marked not indisready and not indisvalid, so that no one else
 * tries to either insert into it or use it for queries.
 *
 * We must commit our current transaction so that the index becomes
 * visible; then start another. Note that all the data structures we just
 * built are lost in the commit. The only data we keep past here are the
 * relation IDs.
 *
 * Before committing, get a session-level lock on the table, to ensure
 * that neither it nor the index can be dropped before we finish. This
 * cannot block, even if someone else is waiting for access, because we
 * already have the same lock within our transaction.
 *
 * Note: we don't currently bother with a session lock on the index,
 * because there are no operations that could change its state while we
 * hold lock on the parent table. This might need to change later.
 */
```
Before digging into that comment, two terms need explaining. They are not ordinary English words; they are two columns of pg_index:

1. `indisvalid` (bool): If true, the index is currently valid for queries. False means the index is possibly incomplete: it must still be modified by INSERT/UPDATE operations, but it cannot safely be used for queries. If it is unique, the uniqueness property is not true either.
2. `indisready` (bool): If true, the index is currently ready for inserts. False means the index must be ignored by INSERT/UPDATE operations.

The comment also introduces the session-level lock, a lock that survives transaction boundaries and does not block other sessions' INSERT/UPDATE and the like.
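As an aside, these two flags are easy to inspect from SQL; for instance, to list indexes that are not (or not yet) valid:

```sql
-- Indexes still mid-build, or left INVALID by a failed concurrent build:
SELECT indexrelid::regclass AS index_name, indisready, indisvalid
FROM pg_index
WHERE NOT indisvalid;
```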
```
LockRelationIdForSession(&heaprelid, ShareUpdateExclusiveLock);
PopActiveSnapshot();
CommitTransactionCommand();
StartTransactionCommand();
```
This is the first commit. Note again that ShareUpdateExclusiveLock does not block INSERT/UPDATE/DELETE.
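A quick way to convince yourself of that, using the test2 table from the experiment section below (a sketch; run the two sessions by hand):

```sql
-- Session 1: a long-running concurrent build on a large table
CREATE INDEX CONCURRENTLY i2vc ON test2 (val);

-- Session 2, while session 1 is running: none of these block.
INSERT INTO test2 VALUES (10000001, 1.0, 'x');
UPDATE test2 SET val = val + 1 WHERE did = 1;
DELETE FROM test2 WHERE did = 2;

-- DDL is another story: this would queue behind the build.
-- ALTER TABLE test2 ADD COLUMN extra int;
```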
```
/*
 * Phase 2 of concurrent index build (see comments for validate_index()
 * for an overview of how this works)
 *
 * Now we must wait until no running transaction could have the table open
 * with the old list of indexes.  To do this, inquire which xacts
 * currently would conflict with ShareLock on the table -- ie, which ones
 * have a lock that permits writing the table.  Then wait for each of
 * these xacts to commit or abort.  Note we do not need to worry about
 * xacts that open the table for writing after this point; they will see
 * the new index when they open it.
 *
 * Note: the reason we use actual lock acquisition here, rather than just
 * checking the ProcArray and sleeping, is that deadlock is possible if
 * one of the transactions in question is blocked trying to acquire an
 * exclusive lock on our table.  The lock code will detect deadlock and
 * error out properly.
 *
 * Note: GetLockConflicts() never reports our own xid, hence we need not
 * check for that.  Also, prepared xacts are not reported, which is fine
 * since they certainly aren't going to do anything more.
 */
old_lockholders = GetLockConflicts(&heaplocktag, ShareLock);
```
This obtains the set of currently running transactions that could be writing the table.
```
while (VirtualTransactionIdIsValid(*old_lockholders))
{
    VirtualXactLockTableWait(*old_lockholders);
    old_lockholders++;
}

/*
 * At this moment we are sure that there are no transactions with the
 * table open for write that don't have this new index in their list of
 * indexes.  We have waited out all the existing transactions and any new
 * transaction will have the new index in its list, but the index is still
 * marked as "not-ready-for-inserts".  The index is consulted while
 * deciding HOT-safety though.  This arrangement ensures that no new HOT
 * chains can be created where the new tuple and the old tuple in the
 * chain have different index keys.
 *
 * We now take a new snapshot, and build the index using all tuples that
 * are visible in this snapshot.  We can be sure that any HOT updates to
 * these tuples will be compatible with the index, since any updates made
 * by transactions that didn't know about the index are now committed or
 * rolled back.  Thus, each visible tuple is either the end of its
 * HOT-chain or the extension of the chain is HOT-safe for this index.
 */
```
Read this comment carefully.
```
/* Open and lock the parent heap relation */
rel = heap_openrv(heapRelation, ShareUpdateExclusiveLock);

/* And the target index relation */
indexRelation = index_open(indexRelationId, RowExclusiveLock);

/* Set ActiveSnapshot since functions in the indexes may need it */
PushActiveSnapshot(GetTransactionSnapshot());

/* We have to re-build the IndexInfo struct, since it was lost in commit */
indexInfo = BuildIndexInfo(indexRelation);
Assert(!indexInfo->ii_ReadyForInserts);
indexInfo->ii_Concurrent = true;
indexInfo->ii_BrokenHotChain = false;

/* Now build the index */
index_build(rel, indexRelation, indexInfo, primary, false);

/* Close both the relations, but keep the locks */
heap_close(rel, NoLock);
index_close(indexRelation, NoLock);

/*
 * Update the pg_index row to mark the index as ready for inserts. Once we
 * commit this transaction, any new transactions that open the table must
 * insert new entries into the index for insertions and non-HOT updates.
 */
pg_index = heap_open(IndexRelationId, RowExclusiveLock);

indexTuple = SearchSysCacheCopy1(INDEXRELID,
                                 ObjectIdGetDatum(indexRelationId));
if (!HeapTupleIsValid(indexTuple))
    elog(ERROR, "cache lookup failed for index %u", indexRelationId);
indexForm = (Form_pg_index) GETSTRUCT(indexTuple);

Assert(!indexForm->indisready);
Assert(!indexForm->indisvalid);

indexForm->indisready = true;

simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
CatalogUpdateIndexes(pg_index, indexTuple);

heap_close(pg_index, RowExclusiveLock);

/* we can do away with our snapshot */
PopActiveSnapshot();

/*
 * Commit this transaction to make the indisready update visible.
 */
CommitTransactionCommand();
StartTransactionCommand();
```
The first scan is done, and at this point the index is visible to other sessions: marked ready for inserts, though not yet valid for queries.
```
/*
 * Phase 3 of concurrent index build
 *
 * We once again wait until no transaction can have the table open with
 * the index marked as read-only for updates.
 */
old_lockholders = GetLockConflicts(&heaplocktag, ShareLock);

while (VirtualTransactionIdIsValid(*old_lockholders))
{
    VirtualXactLockTableWait(*old_lockholders);
    old_lockholders++;
}
```
Again: wait for all transactions that don't yet know about the index to finish.
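A sketch of what triggers this wait (two sessions, using the test2 table from the experiment below):

```sql
-- Session 2: an open transaction that has written the table;
-- it holds RowExclusiveLock, which conflicts with ShareLock.
BEGIN;
UPDATE test2 SET val = val WHERE did = 1;

-- Session 1: the concurrent build now sits in this wait loop.
CREATE INDEX CONCURRENTLY i2vc ON test2 (val);

-- Session 2: only after COMMIT (or ROLLBACK) does session 1 proceed.
COMMIT;
```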
```
/*
 * Now take the "reference snapshot" that will be used by validate_index()
 * to filter candidate tuples.  Beware!  There might still be snapshots in
 * use that treat some transaction as in-progress that our reference
 * snapshot treats as committed.  If such a recently-committed transaction
 * deleted tuples in the table, we will not include them in the index; yet
 * those transactions which see the deleting one as still-in-progress will
 * expect such tuples to be there once we mark the index as valid.
 *
 * We solve this by waiting for all endangered transactions to exit before
 * we mark the index as valid.
 *
 * We also set ActiveSnapshot to this snap, since functions in indexes may
 * need a snapshot.
 */
snapshot = RegisterSnapshot(GetTransactionSnapshot());
PushActiveSnapshot(snapshot);

/*
 * Scan the index and the heap, insert any missing index entries.
 */
validate_index(relationId, indexRelationId, snapshot);
```
The second scan is done; the index now contains every tuple it needs, though it is not yet marked valid.
```
/*
 * The index is now valid in the sense that it contains all currently
 * interesting tuples.  But since it might not contain tuples deleted just
 * before the reference snap was taken, we have to wait out any
 * transactions that might have older snapshots.  Obtain a list of VXIDs
 * of such transactions, and wait for them individually.
 *
 * We can exclude any running transactions that have xmin > the xmin of
 * our reference snapshot; their oldest snapshot must be newer than ours.
 * We can also exclude any transactions that have xmin = zero, since they
 * evidently have no live snapshot at all (and any one they might be in
 * process of taking is certainly newer than ours).  Transactions in other
 * DBs can be ignored too, since they'll never even be able to see this
 * index.
 *
 * We can also exclude autovacuum processes and processes running manual
 * lazy VACUUMs, because they won't be fazed by missing index entries
 * either.  (Manual ANALYZEs, however, can't be excluded because they
 * might be within transactions that are going to do arbitrary operations
 * later.)
 *
 * Also, GetCurrentVirtualXIDs never reports our own vxid, so we need not
 * check for that.
 *
 * If a process goes idle-in-transaction with xmin zero, we do not need to
 * wait for it anymore, per the above argument.  We do not have the
 * infrastructure right now to stop waiting if that happens, but we can at
 * least avoid the folly of waiting when it is idle at the time we would
 * begin to wait.  We do this by repeatedly rechecking the output of
 * GetCurrentVirtualXIDs.  If, during any iteration, a particular vxid
 * doesn't show up in the output, we know we can forget about it.
 */
old_snapshots = GetCurrentVirtualXIDs(snapshot->xmin, true, false,
                                      PROC_IS_AUTOVACUUM | PROC_IN_VACUUM,
                                      &n_old_snapshots);

for (i = 0; i < n_old_snapshots; i++)
{
    if (!VirtualTransactionIdIsValid(old_snapshots[i]))
        continue;               /* found uninteresting in previous cycle */

    if (i > 0)
    {
        /* see if anything's changed ... */
        VirtualTransactionId *newer_snapshots;
        int         n_newer_snapshots;
        int         j;
        int         k;

        newer_snapshots = GetCurrentVirtualXIDs(snapshot->xmin,
                                                true, false,
                                                PROC_IS_AUTOVACUUM | PROC_IN_VACUUM,
                                                &n_newer_snapshots);
        for (j = i; j < n_old_snapshots; j++)
        {
            if (!VirtualTransactionIdIsValid(old_snapshots[j]))
                continue;       /* found uninteresting in previous cycle */
            for (k = 0; k < n_newer_snapshots; k++)
            {
                if (VirtualTransactionIdEquals(old_snapshots[j],
                                               newer_snapshots[k]))
                    break;
            }
            if (k >= n_newer_snapshots) /* not there anymore */
                SetInvalidVirtualTransactionId(old_snapshots[j]);
        }
        pfree(newer_snapshots);
    }

    if (VirtualTransactionIdIsValid(old_snapshots[i]))
        VirtualXactLockTableWait(old_snapshots[i]);
}
```
Read the comment. The point is that validate_index worked from a reference snapshot, which may not contain tuples deleted just before it was taken; before the index can be marked valid, we must wait out every transaction that could still hold an older snapshot, and that wait can foreseeably be long.
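A sketch of the kind of transaction that keeps this phase waiting (REPEATABLE READ pins one snapshot for the whole transaction, so its xmin stays old):

```sql
-- Session 2: holds a single snapshot for the life of the transaction
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM test2;   -- snapshot taken here

-- Session 1: CREATE INDEX CONCURRENTLY finishes its scans but then
-- waits here, unable to mark the index valid, until session 2 ends.

-- Session 2:
COMMIT;
```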
```
/*
 * Index can now be marked valid -- update its pg_index entry
 */
pg_index = heap_open(IndexRelationId, RowExclusiveLock);

indexTuple = SearchSysCacheCopy1(INDEXRELID,
                                 ObjectIdGetDatum(indexRelationId));
if (!HeapTupleIsValid(indexTuple))
    elog(ERROR, "cache lookup failed for index %u", indexRelationId);
indexForm = (Form_pg_index) GETSTRUCT(indexTuple);

Assert(indexForm->indisready);
Assert(!indexForm->indisvalid);

indexForm->indisvalid = true;

simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
CatalogUpdateIndexes(pg_index, indexTuple);

heap_close(pg_index, RowExclusiveLock);

/*
 * The pg_index update will cause backends (including this one) to update
 * relcache entries for the index itself, but we should also send a
 * relcache inval on the parent table to force replanning of cached plans.
 * Otherwise existing sessions might fail to use the new index where it
 * would be useful.  (Note that our earlier commits did not create reasons
 * to replan; relcache flush on the index itself was sufficient.)
 */
CacheInvalidateRelcacheByRelid(heaprelid.relId);

/* we can now do away with our active snapshot */
PopActiveSnapshot();

/* And we can remove the validating snapshot too */
UnregisterSnapshot(snapshot);

/*
 * Last thing to do is release the session-level lock on the parent table.
 */
UnlockRelationIdForSession(&heaprelid, ShareUpdateExclusiveLock);
```
Finally, the index is made visible to queries.
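A side note on observability: the version analyzed here predates it, but PostgreSQL 12 and later expose these phases through the pg_stat_progress_create_index view, e.g.:

```sql
-- Run from another session while the build is in progress (PG 12+):
SELECT phase, lockers_done, lockers_total, blocks_done, blocks_total
FROM pg_stat_progress_create_index;
```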
#### validate_index
```
 * validate_index() works by first gathering all the TIDs currently in the
 * index, using a bulkdelete callback that just stores the TIDs and doesn't
 * ever say "delete it".  (This should be faster than a plain indexscan;
 * also, not all index AMs support full-index indexscan.)  Then we sort the
 * TIDs, and finally scan the table doing a "merge join" against the TID list
 * to see which tuples are missing from the index.  Thus we will ensure that
 * all tuples valid according to the reference snapshot are in the index.
 *
 * Building a unique index this way is tricky: we might try to insert a
 * tuple that is already dead or is in process of being deleted, and we
 * mustn't have a uniqueness failure against an updated version of the same
 * row.  We could try to check the tuple to see if it's already dead and tell
 * index_insert() not to do the uniqueness check, but that still leaves us
 * with a race condition against an in-progress update.  To handle that,
 * we expect the index AM to recheck liveness of the to-be-inserted tuple
 * before it declares a uniqueness error.
```
As the comment explains, the second scan actually performs a TID collection from the index, a sort, and then a "merge join" against the heap to insert any missing entries.
## Experiment

The goal of this experiment is to measure the performance impact of CONCURRENTLY.

### Machine configuration

An ordinary desktop machine, PG 8.4, default configuration.

### Schema and data
```sql
CREATE TABLE test2 (
    did integer PRIMARY KEY,
    val float,
    name varchar(40)
);
insert into test2 select generate_series(1, 10000000), random()*100., md5(random()::text);
create index i2v on test2(val);
create index concurrently i2vc on test2(val);
```
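Presumably the timings below were collected with psql's \timing; a minimal way to reproduce one cell of the table (index names here are made up so as not to collide with the ones above):

```sql
\timing on
create index i2v_t on test2(val);               -- "val", normal build
create index concurrently i2vc_t on test2(val); -- "concurrently on val"
```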
### Results

| Tuples | val | concurrently on val | name | concurrently on name |
| :--: | :--: | :--: | :--: | :--: |
| 1,000,000 | 4597.145 ms | 8678.217 ms | 13179.993 ms | 17327.401 ms |
| 10,000,000 | 58315.700 ms | 102393.426 ms | 164888.703 ms | 208067.063 ms |
| 20,000,000 | 117594.772 ms | 235908.396 ms | 167957.984 ms | 488424.852 ms |
### Summary

Subject to the following caveats:

1. The numbers above were measured with no query load on the table.
2. On average, CREATE INDEX CONCURRENTLY takes roughly twice as long as a regular index build.
