1. General

1.1 /proc/meminfo

/proc/meminfo是了解Linux系统内存使用状况的主要接口,也是free等命令的数据来源。

下面是cat /proc/meminfo的一个实例。

  1. MemTotal: 8054880 kB---------------------物理内存总容量,对应totalram_pages大小。
  2. MemFree: 4004312 kB---------------------空闲内存容量,对应vm_stat[NR_FREE_PAGES]大小。
  3. MemAvailable: 5678888 kB---------------------MemFree减去保留内存,加上部分pagecache和部分SReclaimable
  4. Buffers: 303016 kB---------------------块设备缓冲区大小.
  5. Cached: 2029616 kB---------------------主要是vm_stat[NR_FILE_PAGES],再减去交换缓存(SwapCached)和块设备缓冲区大小。Buffers+Cached=Active(file)+Inactive(file)+Shmem
  6. SwapCached: kB---------------------位于交换缓存(swap cache)中的页面大小。
  7. Active: kB---------------------Active=Active(anon)+Active(file)。
  8. Inactive: kB---------------------Inactive=Inactive(anon)+Inactive(file)。
  9. Active(anon): kB---------------------活动匿名内存,匿名指不与文件关联的内存(如堆、栈、匿名mmap),活动指最近被使用过的内存。
  10. Inactive(anon): kB---------------------不活动匿名内存,在内存不足时优先释放。
  11. Active(file): kB---------------------活动文件缓存,表示内存内容与磁盘上文件相关联。
  12. Inactive(file): kB---------------------不活动文件缓存。
  13. Unevictable: kB---------------------不可回收(不可换出)的内存,不参与active/inactive LRU链表的回收,而是挂在单独的unevictable链表上。
  14. Mlocked: kB---------------------使用mlock()锁定的页面。
  15. SwapTotal: kB---------------------交换空间总容量。
  16. SwapFree: kB---------------------交换空间剩余容量。
  17. Dirty: kB---------------------脏数据,在磁盘缓冲区中尚未写入磁盘的内存大小。
  18. Writeback: kB---------------------待回写的页面大小。
  19. AnonPages: kB---------------------内核中存在一个rmap(Reverse Mapping)机制,负责记录匿名内存中每一个物理页面映射到哪个进程的哪个逻辑地址等信息。rmap中记录的匿名页面总和就是AnonPages的值。
  20. Mapped: kB---------------------映射的文件占用内存大小。
  21. Shmem: kB---------------------vm_stat[NR_SHMEM],tmpfs所使用的内存,tmpfs即利用物理内存来提供RAM磁盘功能。在tmpfs上保存文件时,文件系统暂时将它们保存到RAM中。
  22. Slab: kB---------------------slab分配器总量,通过slabinfo工具或者/proc/slabinfo来查看更详细的信息。
  23. SReclaimable: kB---------------------不存在活跃对象,可回收的slab缓存vm_stat[NR_SLAB_RECLAIMABLE]。
  24. SUnreclaim: kB---------------------对象处于活跃状态,不能被回收的slab容量。
  25. KernelStack: kB---------------------所有任务的内核栈所占用的内存。
  26. PageTables: kB---------------------PageTables就是页表,用于存储各个用户进程的逻辑地址和物理地址的变化关系,本身也是一个内存区域。
  27. NFS_Unstable: kB
  28. Bounce: kB
  29. WritebackTmp: kB
  30. CommitLimit: kB
  31. Committed_AS: kB
  32. VmallocTotal: kB------------------理论上内核可以用来映射的逻辑地址范围。
  33. VmallocUsed: kB---------------------vmalloc区域已使用的大小(较新内核中此处固定输出0,见下面代码)。
  34. VmallocChunk: kB
  35. HardwareCorrupted: kB
  36. AnonHugePages: kB
  37. ShmemHugePages: kB
  38. ShmemPmdMapped: kB
  39. CmaTotal: kB
  40. CmaFree: kB
  41. HugePages_Total:
  42. HugePages_Free:
  43. HugePages_Rsvd:
  44. HugePages_Surp:
  45. Hugepagesize: kB
  46. DirectMap4k: kB
  47. DirectMap2M: kB
  48. DirectMap1G: kB

/proc/meminfo对应内核的核心函数是meminfo_proc_show(), 包括两个重要的填充sysinfo的函数si_meminfo()和si_swapinfo()。

MemTotal是系统从加电到引导完成后,除去kernel本身占用的内存,最后剩下可供kernel支配的内存总量。

MemFree表示系统尚未使用的内存。MemAvailable表示系统可用内存:应用会根据系统可用内存大小动态调整申请量,而MemFree并不能反映可用量,因为有些已用内存(如部分pagecache和可回收slab)是可以回收的,所以MemAvailable在MemFree基础上加上了这部分可回收内存。

PageTables用于将虚拟地址翻译成物理地址,随着分配的内存越来越多,页表也会增大。/proc/meminfo中的PageTables统计的就是页表所占用的内存大小。

KernelStack是常驻内存的,既不包括在LRU链表中,也不包括在进程RSS、PSS中,所以认为它是内核消耗的内存。

  1. static int meminfo_proc_show(struct seq_file *m, void *v)
  2. {
  3. struct sysinfo i;
  4. unsigned long committed;
  5. long cached;
  6. long available;
  7. unsigned long pagecache;
  8. unsigned long wmark_low = 0;
  9. unsigned long pages[NR_LRU_LISTS];
  10. struct zone *zone;
  11. int lru;
  12.  
  13. /*
  14. * display in kilobytes.
  15. */
  16. #define K(x) ((x) << (PAGE_SHIFT - 10))
    si_meminfo(&i);
  17. si_swapinfo(&i);
  18. committed = percpu_counter_read_positive(&vm_committed_as);
  19.  
  20. cached = global_page_state(NR_FILE_PAGES) -
  21. total_swapcache_pages() - i.bufferram;---------------------vm_stat[NR_FILE_PAGES]减去swap的页面和块设备缓存页面。
  22. if (cached < 0)
  23. cached = 0;
  24.  
  25. for (lru = LRU_BASE; lru < NR_LRU_LISTS; lru++)
  26. pages[lru] = global_page_state(NR_LRU_BASE + lru);--------------遍历获取vm_stat中5个LRU链表的页面数。
  27.  
  28. for_each_zone(zone)
  29. wmark_low += zone->watermark[WMARK_LOW];
  30.  
  31. /*
  32. * Estimate the amount of memory available for userspace allocations,
  33. * without causing swapping.
  34. */
  35. available = i.freeram - totalreserve_pages;--------------------------vm_stat[NR_FREE_PAGES]减去保留页面totalreserve_pages
  36.  
  37. /*
  38. * Not all the page cache can be freed, otherwise the system will
  39. * start swapping. Assume at least half of the page cache, or the
  40. * low watermark worth of cache, needs to stay.
  41. */
  42. pagecache = pages[LRU_ACTIVE_FILE] + pages[LRU_INACTIVE_FILE];------pagecache包括活跃和不活跃文件LRU页面两部分。
  43. pagecache -= min(pagecache / 2, wmark_low);-------------------------保留min(pagecache/2, wmark_low)大小,确保不会被释放。
  44. available += pagecache;---------------------------------------------可用页面增加可释放的pagecache部分。
  45.  
  46. /*
  47. * Part of the reclaimable slab consists of items that are in use,
  48. * and cannot be freed. Cap this estimate at the low watermark.
  49. */
  50. available += global_page_state(NR_SLAB_RECLAIMABLE) -
  51. min(global_page_state(NR_SLAB_RECLAIMABLE) / 2, wmark_low);--类似pagecache,可回收slab缓存保留一部分不可释放。其余部分给available
  52.  
  53. if (available < 0)
  54. available = 0;
  55.  
  56. /*
  57. * Tagged format, for easy grepping and expansion.
  58. */
  59. seq_printf(m,
  60. "MemTotal: %8lu kB\n"
  61. "MemFree: %8lu kB\n"
  62. "MemAvailable: %8lu kB\n"
  63. "Buffers: %8lu kB\n"
  64. "Cached: %8lu kB\n"
  65. "SwapCached: %8lu kB\n"
  66. "Active: %8lu kB\n"
  67. "Inactive: %8lu kB\n"
  68. "Active(anon): %8lu kB\n"
  69. "Inactive(anon): %8lu kB\n"
  70. "Active(file): %8lu kB\n"
  71. "Inactive(file): %8lu kB\n"
  72. "Unevictable: %8lu kB\n"
  73. "Mlocked: %8lu kB\n"
  74. #ifdef CONFIG_HIGHMEM
  75. "HighTotal: %8lu kB\n"
  76. "HighFree: %8lu kB\n"
  77. "LowTotal: %8lu kB\n"
  78. "LowFree: %8lu kB\n"
  79. #endif
  80. #ifndef CONFIG_MMU
  81. "MmapCopy: %8lu kB\n"
  82. #endif
  83. "SwapTotal: %8lu kB\n"
  84. "SwapFree: %8lu kB\n"
  85. "Dirty: %8lu kB\n"
  86. "Writeback: %8lu kB\n"
  87. "AnonPages: %8lu kB\n"
  88. "Mapped: %8lu kB\n"
  89. "Shmem: %8lu kB\n"
  90. "Slab: %8lu kB\n"
  91. "SReclaimable: %8lu kB\n"
  92. "SUnreclaim: %8lu kB\n"
  93. "KernelStack: %8lu kB\n"
  94. "PageTables: %8lu kB\n"
  95. #ifdef CONFIG_QUICKLIST
  96. "Quicklists: %8lu kB\n"
  97. #endif
  98. "NFS_Unstable: %8lu kB\n"
  99. "Bounce: %8lu kB\n"
  100. "WritebackTmp: %8lu kB\n"
  101. "CommitLimit: %8lu kB\n"
  102. "Committed_AS: %8lu kB\n"
  103. "VmallocTotal: %8lu kB\n"
  104. "VmallocUsed: %8lu kB\n"
  105. "VmallocChunk: %8lu kB\n"
  106. #ifdef CONFIG_MEMORY_FAILURE
  107. "HardwareCorrupted: %5lu kB\n"
  108. #endif
  109. #ifdef CONFIG_TRANSPARENT_HUGEPAGE
  110. "AnonHugePages: %8lu kB\n"
  111. #endif
  112. #ifdef CONFIG_CMA
  113. "CmaTotal: %8lu kB\n"
  114. "CmaFree: %8lu kB\n"
  115. #endif
  116. ,
  117. K(i.totalram),-------------------------------------------------即totalram_pages大小
  118. K(i.freeram),--------------------------------------------------即vm_stat[NR_FREE_PAGES]
  119. K(available),--------------------------------------------------等于freeram减去保留totalreserve_pages,以及一部分pagecache和可回收slab缓存。
  120. K(i.bufferram),------------------------------------------------通过nr_blockdev_pages()获取。
  121. K(cached),-----------------------------------------------------vm_stat[NR_FILE_PAGES]减去swap部分以及块设备缓存。
  122. K(total_swapcache_pages()),------------------------------------交换缓存(swap cache)占用的页面大小。
  123. K(pages[LRU_ACTIVE_ANON] + pages[LRU_ACTIVE_FILE]),----------活跃页面大小
  124. K(pages[LRU_INACTIVE_ANON] + pages[LRU_INACTIVE_FILE]),--------不活跃页面大小
  125. K(pages[LRU_ACTIVE_ANON]),
  126. K(pages[LRU_INACTIVE_ANON]),
  127. K(pages[LRU_ACTIVE_FILE]),
  128. K(pages[LRU_INACTIVE_FILE]),
  129. K(pages[LRU_UNEVICTABLE]),-------------------------------------不能被pageout/swapout的内存页面
  130. K(global_page_state(NR_MLOCK)),
  131. #ifdef CONFIG_HIGHMEM
  132. K(i.totalhigh),
  133. K(i.freehigh),
  134. K(i.totalram-i.totalhigh),
  135. K(i.freeram-i.freehigh),
  136. #endif
  137. #ifndef CONFIG_MMU
  138. K((unsigned long) atomic_long_read(&mmap_pages_allocated)),
  139. #endif
  140. K(i.totalswap),------------------------------------------------总swap空间大小
  141. K(i.freeswap),-------------------------------------------------空闲swap空间大小
  142. K(global_page_state(NR_FILE_DIRTY)),---------------------------等待被写回磁盘文件大小
  143. K(global_page_state(NR_WRITEBACK)),----------------------------正在被回写文件的大小
  144. K(global_page_state(NR_ANON_PAGES)),---------------------------映射的匿名页面
  145. K(global_page_state(NR_FILE_MAPPED)),--------------------------映射的文件页面
  146. K(i.sharedram),------------------------------------------------即vm_stat[NR_SHMEM]
  147. K(global_page_state(NR_SLAB_RECLAIMABLE) +
  148. global_page_state(NR_SLAB_UNRECLAIMABLE)),-------------slab缓存包括可回收和不可回收两部分,vm_stat[NR_SLAB_RECLAIMABLE]+vm_stat[NR_SLAB_UNRECLAIMABLE]。
  149. K(global_page_state(NR_SLAB_RECLAIMABLE)),
  150. K(global_page_state(NR_SLAB_UNRECLAIMABLE)),
  151. global_page_state(NR_KERNEL_STACK) * THREAD_SIZE / 1024,-------vm_stat[NR_KERNEL_STACK]大小
  152. K(global_page_state(NR_PAGETABLE)),----------------------------pagetables所占大小
  153. #ifdef CONFIG_QUICKLIST
  154. K(quicklist_total_size()),
  155. #endif
  156. K(global_page_state(NR_UNSTABLE_NFS)),
  157. K(global_page_state(NR_BOUNCE)),
  158. K(global_page_state(NR_WRITEBACK_TEMP)),
  159. K(vm_commit_limit()),
  160. K(committed),
  161. (unsigned long)VMALLOC_TOTAL >> 10,----------------------------vmalloc虚拟空间的大小
  162. 0ul, // used to be vmalloc 'used'
  163. 0ul // used to be vmalloc 'largest_chunk'
  164. #ifdef CONFIG_MEMORY_FAILURE
  165. , atomic_long_read(&num_poisoned_pages) << (PAGE_SHIFT - 10)
  166. #endif
  167. #ifdef CONFIG_TRANSPARENT_HUGEPAGE
  168. , K(global_page_state(NR_ANON_TRANSPARENT_HUGEPAGES) *
  169. HPAGE_PMD_NR)
  170. #endif
  171. #ifdef CONFIG_CMA
  172. , K(totalcma_pages)
  173. , K(global_page_state(NR_FREE_CMA_PAGES))
  174. #endif
  175. );
  176.  
  177. hugetlb_report_meminfo(m);
  178.  
  179. arch_report_meminfo(m);
  180.  
  181. return 0;
  182. #undef K
  183. }
  184.  
  185. void si_meminfo(struct sysinfo *val)
  186. {
  187. val->totalram = totalram_pages;
  188. val->sharedram = global_page_state(NR_SHMEM);
  189. val->freeram = global_page_state(NR_FREE_PAGES);
  190. val->bufferram = nr_blockdev_pages();
  191. val->totalhigh = totalhigh_pages;
  192. val->freehigh = nr_free_highpages();
  193. val->mem_unit = PAGE_SIZE;
  194. }
  195.  
  196. void si_swapinfo(struct sysinfo *val)
  197. {
  198. unsigned int type;
  199. unsigned long nr_to_be_unused = 0;
  200.  
  201. spin_lock(&swap_lock);
  202. for (type = 0; type < nr_swapfiles; type++) {
  203. struct swap_info_struct *si = swap_info[type];
  204.  
  205. if ((si->flags & SWP_USED) && !(si->flags & SWP_WRITEOK))
  206. nr_to_be_unused += si->inuse_pages;
  207. }
  208. val->freeswap = atomic_long_read(&nr_swap_pages) + nr_to_be_unused;
  209. val->totalswap = total_swap_pages + nr_to_be_unused;
  210. spin_unlock(&swap_lock);
  211. }

参考文档:《/PROC/MEMINFO之谜》

1.2 free

free命令用来显示内存的使用情况。

free -s 2 -c 2 -w -t -h

含义为:-s 2 每2秒显示一次,-c 2 共显示2次,-w 将buffers与cache分开显示,-t 显示Total行,-h 以更易读的单位显示。

结果如下:

  1. total used free shared buffers cache available
  2. Mem: .7G .4G .8G 534M 295M .1G .4G
  3. Swap: .5G 0B .5G
  4. Total: 15G .4G 11G
  5.  
  6. total used free shared buffers cache available
  7. Mem: .7G .4G .8G 537M 295M .1G .4G
  8. Swap: .5G 0B .5G
  9. Total: 15G .4G 11G

Mem一行指的是RAM的使用情况,Swap一行是交换分区的使用情况。

free命令是procps-ng包的一部分,主体在free.c中。这些参数的获取在meminfo()中进行。

  1. int main(int argc, char **argv)
  2. {
  3. ...
  4. do {
  5.  
  6. meminfo();
  7. /* Translation Hint: You can use 9 character words in
  8. * the header, and the words need to be right align to
  9. * beginning of a number. */
  10. if (flags & FREE_WIDE) {
  11. printf(_(" total used free shared buffers cache available"));
  12. } else {
  13. printf(_(" total used free shared buff/cache available"));
  14. }
  15. printf("\n");
  16. printf("%-7s", _("Mem:"));
  17. printf(" %11s", scale_size(kb_main_total, flags, args));
  18. printf(" %11s", scale_size(kb_main_used, flags, args));
  19. printf(" %11s", scale_size(kb_main_free, flags, args));
  20. printf(" %11s", scale_size(kb_main_shared, flags, args));
  21. if (flags & FREE_WIDE) {
  22. printf(" %11s", scale_size(kb_main_buffers, flags, args));
  23. printf(" %11s", scale_size(kb_main_cached, flags, args));
  24. } else {
  25. printf(" %11s", scale_size(kb_main_buffers+kb_main_cached, flags, args));
  26. }
  27. printf(" %11s", scale_size(kb_main_available, flags, args));
  28. printf("\n");
  29. ...
  30. printf("%-7s", _("Swap:"));
  31. printf(" %11s", scale_size(kb_swap_total, flags, args));
  32. printf(" %11s", scale_size(kb_swap_used, flags, args));
  33. printf(" %11s", scale_size(kb_swap_free, flags, args));
  34. printf("\n");
  35.  
  36. if (flags & FREE_TOTAL) {
  37. printf("%-7s", _("Total:"));
  38. printf(" %11s", scale_size(kb_main_total + kb_swap_total, flags, args));
  39. printf(" %11s", scale_size(kb_main_used + kb_swap_used, flags, args));
  40. printf(" %11s", scale_size(kb_main_free + kb_swap_free, flags, args));
  41. printf("\n");
  42. }
  43. fflush(stdout);
  44. if (flags & FREE_REPEATCOUNT) {
  45. args.repeat_counter--;
  46. if (args.repeat_counter < 1)
  47. exit(EXIT_SUCCESS);
  48. }
  49. if (flags & FREE_REPEAT) {
  50. printf("\n");
  51. usleep(args.repeat_interval);
  52. }
  53. } while ((flags & FREE_REPEAT));
  54.  
  55. exit(EXIT_SUCCESS);
  56. }

解析部分在sysinfo.c中。通过解析/proc/meminfo信息,计算出free的各项值。

/proc/meminfo和free的对应关系如下:

free      /proc/meminfo
total     = MemTotal
used      = MemTotal - MemFree - (Cached + SReclaimable) - Buffers
free      = MemFree
shared    = Shmem
buffers   = Buffers
cache     = Cached + SReclaimable
available = MemAvailable
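按照上表的对应关系,也可以在用户态直接从/proc/meminfo推算free各列的数值。下面是一个示意性的C程序草图(并非procps源码,仅演示换算关系):

    #include <stdio.h>
    #include <string.h>

    /* 读取/proc/meminfo中指定字段的数值(单位kB),未找到时返回0 */
    static unsigned long meminfo_field(const char *name)
    {
        FILE *fp = fopen("/proc/meminfo", "r");
        char line[256];
        unsigned long val = 0;
        size_t len = strlen(name);

        if (!fp)
            return 0;
        while (fgets(line, sizeof(line), fp)) {
            if (strncmp(line, name, len) == 0 && line[len] == ':') {
                sscanf(line + len + 1, "%lu", &val);
                break;
            }
        }
        fclose(fp);
        return val;
    }

    int main(void)
    {
        unsigned long total   = meminfo_field("MemTotal");
        unsigned long mfree   = meminfo_field("MemFree");
        unsigned long buffers = meminfo_field("Buffers");
        unsigned long cache   = meminfo_field("Cached") + meminfo_field("SReclaimable");
        unsigned long used    = total - mfree - buffers - cache;  /* 与free的used列同样的公式 */

        printf("total=%lu used=%lu free=%lu buffers=%lu cache=%lu available=%lu (kB)\n",
               total, used, mfree, buffers, cache, meminfo_field("MemAvailable"));
        return 0;
    }

下面是procps中meminfo()的实际解析代码: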
  1. void meminfo(void){
  2. char namebuf[32]; /* big enough to hold any row name */
  3. int linux_version_code = procps_linux_version();
  4. mem_table_struct findme = { namebuf, NULL};
  5. mem_table_struct *found;
  6. char *head;
  7. char *tail;
  8. static const mem_table_struct mem_table[] = {
  9. {"Active", &kb_active}, // important
  10. {"Active(file)", &kb_active_file},
  11. {"AnonPages", &kb_anon_pages},
  12. {"Bounce", &kb_bounce},
  13. {"Buffers", &kb_main_buffers}, // important
  14. {"Cached", &kb_page_cache}, // important
  15. {"CommitLimit", &kb_commit_limit},
  16. {"Committed_AS", &kb_committed_as},
  17. {"Dirty", &kb_dirty}, // kB version of vmstat nr_dirty
  18. {"HighFree", &kb_high_free},
  19. {"HighTotal", &kb_high_total},
  20. {"Inact_clean", &kb_inact_clean},
  21. {"Inact_dirty", &kb_inact_dirty},
  22. {"Inact_laundry",&kb_inact_laundry},
  23. {"Inact_target", &kb_inact_target},
  24. {"Inactive", &kb_inactive}, // important
  25. {"Inactive(file)",&kb_inactive_file},
  26. {"LowFree", &kb_low_free},
  27. {"LowTotal", &kb_low_total},
  28. {"Mapped", &kb_mapped}, // kB version of vmstat nr_mapped
  29. {"MemAvailable", &kb_main_available}, // important
  30. {"MemFree", &kb_main_free}, // important
  31. {"MemTotal", &kb_main_total}, // important
  32. {"NFS_Unstable", &kb_nfs_unstable},
  33. {"PageTables", &kb_pagetables}, // kB version of vmstat nr_page_table_pages
  34. {"ReverseMaps", &nr_reversemaps}, // same as vmstat nr_page_table_pages
  35. {"SReclaimable", &kb_slab_reclaimable}, // "slab reclaimable" (dentry and inode structures)
  36. {"SUnreclaim", &kb_slab_unreclaimable},
  37. {"Shmem", &kb_main_shared}, // kernel 2.6.32 and later
  38. {"Slab", &kb_slab}, // kB version of vmstat nr_slab
  39. {"SwapCached", &kb_swap_cached},
  40. {"SwapFree", &kb_swap_free}, // important
  41. {"SwapTotal", &kb_swap_total}, // important
  42. {"VmallocChunk", &kb_vmalloc_chunk},
  43. {"VmallocTotal", &kb_vmalloc_total},
  44. {"VmallocUsed", &kb_vmalloc_used},
  45. {"Writeback", &kb_writeback}, // kB version of vmstat nr_writeback
  46. };
  47. const int mem_table_count = sizeof(mem_table)/sizeof(mem_table_struct);
  48. unsigned long watermark_low;
  49. signed long mem_available, mem_used;
  50.  
  51. FILE_TO_BUF(MEMINFO_FILE,meminfo_fd);
  52.  
  53. kb_inactive = ~0UL;
  54. kb_low_total = kb_main_available = 0;
  55.  
  56. head = buf;
  57. for(;;){
  58. tail = strchr(head, ':');
  59. if(!tail) break;
  60. *tail = '\0';
  61. if(strlen(head) >= sizeof(namebuf)){
  62. head = tail+1;
  63. goto nextline;
  64. }
  65. strcpy(namebuf,head);
  66. found = bsearch(&findme, mem_table, mem_table_count,
  67. sizeof(mem_table_struct), compare_mem_table_structs
  68. );
  69. head = tail+1;
  70. if(!found) goto nextline;
  71. *(found->slot) = (unsigned long)strtoull(head,&tail,10);
  72. nextline:
  73. tail = strchr(head, '\n');
  74. if(!tail) break;
  75. head = tail+1;
  76. }
  77. if(!kb_low_total){ /* low==main except with large-memory support */
  78. kb_low_total = kb_main_total;
  79. kb_low_free = kb_main_free;
  80. }
  81. if(kb_inactive==~0UL){
  82. kb_inactive = kb_inact_dirty + kb_inact_clean + kb_inact_laundry;
  83. }
  84. kb_main_cached = kb_page_cache + kb_slab_reclaimable;
  85. kb_swap_used = kb_swap_total - kb_swap_free;
  86.  
  87. /* if kb_main_available is greater than kb_main_total or our calculation of
  88. mem_used overflows, that's symptomatic of running within a lxc container
  89. where such values will be dramatically distorted over those of the host. */
  90. if (kb_main_available > kb_main_total)
  91. kb_main_available = kb_main_free;
  92. mem_used = kb_main_total - kb_main_free - kb_main_cached - kb_main_buffers;
  93. if (mem_used < 0)
  94. mem_used = kb_main_total - kb_main_free;
  95. kb_main_used = (unsigned long)mem_used;----------------------------------kb_main_used = MemTotal - MemFree - (Cached + SReclaimable) - Buffers
  96.  
  97. /* zero? might need fallback for 2.6.27 <= kernel <? 3.14 */
  98. if (!kb_main_available) {
  99. #ifdef __linux__
  100. if (linux_version_code < LINUX_VERSION(3, 14, 0))
  101. kb_main_available = kb_main_free;
  102. else {
  103. FILE_TO_BUF(VM_MIN_FREE_FILE, vm_min_free_fd);
  104. kb_min_free = (unsigned long) strtoull(buf,&tail,10);
  105.  
  106. watermark_low = kb_min_free * 5 / 4; /* should be equal to sum of all 'low' fields in /proc/zoneinfo */
  107.  
  108. mem_available = (signed long)kb_main_free - watermark_low
  109. + kb_inactive_file + kb_active_file - MIN((kb_inactive_file + kb_active_file) / 2, watermark_low)
  110. + kb_slab_reclaimable - MIN(kb_slab_reclaimable / 2, watermark_low);
  111.  
  112. if (mem_available < 0) mem_available = 0;
  113. kb_main_available = (unsigned long)mem_available;
  114. }
  115. #else
  116. kb_main_available = kb_main_free;
  117. #endif /* linux */
  118. }
  119. }

1.3 /proc/buddyinfo

/proc/buddyinfo显示Linux buddy系统空闲物理内存的使用情况:每行对应一个内存节点中的一个zone,每列对应一个order(从0到MAX_ORDER-1)的空闲连续页面块数量。

  1. Node 0, zone DMA
  2. Node 0, zone DMA32
  3. Node 0, zone Normal

buddyinfo中的Node 0表示节点ID,每个节点下的内存又划分为多个zone。每列的值表示当前节点当前zone中对应order的空闲连续页面块数量。
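据此可以在用户态由/proc/buddyinfo估算各zone的空闲内存:order为n的每个空闲块包含2^n个页面。下面是一个示意性的C程序草图:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        FILE *fp = fopen("/proc/buddyinfo", "r");
        char line[512];
        long page_kb = sysconf(_SC_PAGESIZE) / 1024;   /* 一般为4 */

        if (!fp) {
            perror("fopen");
            return 1;
        }
        while (fgets(line, sizeof(line), fp)) {
            int node, consumed, n, order = 0;
            char zone[32];
            unsigned long nr, pages = 0;
            char *p;

            /* 每行格式: Node <id>, zone <name> <order0空闲块数> <order1空闲块数> ... */
            if (sscanf(line, "Node %d, zone %31s %n", &node, zone, &consumed) < 2)
                continue;
            p = line + consumed;
            while (sscanf(p, "%lu%n", &nr, &n) == 1) {
                pages += nr << order;   /* 每个order为n的块包含2^n个页面 */
                order++;
                p += n;
            }
            printf("Node %d, zone %-8s free: %lu pages (%lu kB)\n",
                   node, zone, pages, pages * page_kb);
        }
        fclose(fp);
        return 0;
    }

内核侧打印buddyinfo的实现如下: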

  1. static void frag_show_print(struct seq_file *m, pg_data_t *pgdat,
  2. struct zone *zone)
  3. {
  4. int order;
  5.  
  6. seq_printf(m, "Node %d, zone %8s ", pgdat->node_id, zone->name);
  7. for (order = 0; order < MAX_ORDER; ++order)
  8. seq_printf(m, "%6lu ", zone->free_area[order].nr_free);-----------打印当前zone不同order的空闲数目
  9. seq_putc(m, '\n');
  10. }
  11.  
  12. /*
  13. * This walks the free areas for each zone.
  14. */
  15. static int frag_show(struct seq_file *m, void *arg)
  16. {
  17. pg_data_t *pgdat = (pg_data_t *)arg;
  18. walk_zones_in_node(m, pgdat, frag_show_print);------------------------walk_zones_in_node()遍历当前节点pgdat里面所有的zone
  19. return 0;
  20. }

1.4 /proc/pagetypeinfo

pagetypeinfo比buddyinfo更加详细,更进一步将页面按照不同迁移类型划分。

pagetypeinfo分为三部分:pageblock的阶数(order)、不同节点不同zone不同迁移类型在各个order上的空闲页面块数、不同迁移类型的pageblock数目。

  1. Page block order: 9
  2. Pages per block: 512-------------------------------------------------------------------------------------------------------------一个pageblock占用多少个页面
  3.  
  4. Free pages count per migrate type at order---------这部分是各迁移类型在每个order上的空闲页面块数量
  5. Node , zone DMA, type Unmovable
  6. Node , zone DMA, type Movable
  7. Node , zone DMA, type Reclaimable
  8. Node , zone DMA, type HighAtomic
  9. Node , zone DMA, type CMA
  10. Node , zone DMA, type Isolate
  11. Node , zone DMA32, type Unmovable
  12. Node , zone DMA32, type Movable
  13. Node , zone DMA32, type Reclaimable
  14. Node , zone DMA32, type HighAtomic
  15. Node , zone DMA32, type CMA
  16. Node , zone DMA32, type Isolate
  17. Node , zone Normal, type Unmovable
  18. Node , zone Normal, type Movable
  19. Node , zone Normal, type Reclaimable
  20. Node , zone Normal, type HighAtomic
  21. Node , zone Normal, type CMA
  22. Node , zone Normal, type Isolate
  23.  
  24. Number of blocks type Unmovable Movable Reclaimable HighAtomic CMA Isolate -----------------------------这里是pageblock的数目,pageblock的大小在第一部分确定。
  25. Node , zone DMA
  26. Node , zone DMA32
  27. Node , zone Normal

第三部分减去第二部分就是被使用掉的页面数量。

下面是核心代码:

  1. static int pagetypeinfo_show(struct seq_file *m, void *arg)
  2. {
  3. pg_data_t *pgdat = (pg_data_t *)arg;
  4.  
  5. /* check memoryless node */
  6. if (!node_state(pgdat->node_id, N_MEMORY))
  7. return 0;
  8.  
  9. seq_printf(m, "Page block order: %d\n", pageblock_order);
  10. seq_printf(m, "Pages per block: %lu\n", pageblock_nr_pages);
  11. seq_putc(m, '\n');
  12. pagetypeinfo_showfree(m, pgdat);
  13. pagetypeinfo_showblockcount(m, pgdat);
  14. pagetypeinfo_showmixedcount(m, pgdat);
  15.  
  16. return ;
  17. }
  18.  
  19. /* Print out the free pages at each order for each migatetype */
  20. static int pagetypeinfo_showfree(struct seq_file *m, void *arg)
  21. {
  22. int order;
  23. pg_data_t *pgdat = (pg_data_t *)arg;
  24.  
  25. /* Print header */
  26. seq_printf(m, "%-43s ", "Free pages count per migrate type at order");
  27. for (order = 0; order < MAX_ORDER; ++order)
  28. seq_printf(m, "%6d ", order);
  29. seq_putc(m, '\n');
  30.  
  31. walk_zones_in_node(m, pgdat, pagetypeinfo_showfree_print);-----------------------遍历当前节点的不同zone
  32.  
  33. return 0;
  34. }
  35.  
  36. static void pagetypeinfo_showfree_print(struct seq_file *m,
  37. pg_data_t *pgdat, struct zone *zone)
  38. {
  39. int order, mtype;
  40.  
  41. for (mtype = 0; mtype < MIGRATE_TYPES; mtype++) {--------------------------------当前zone的不同页面类型,包括MIGRATE_UNMOVABLE、MIGRATE_MOVABLE、MIGRATE_RECLAIMABLE、MIGRATE_HIGHATOMIC、MIGRATE_CMA、MIGRATE_ISOLATE
  42. seq_printf(m, "Node %4d, zone %8s, type %12s ",
  43. pgdat->node_id,
  44. zone->name,
  45. migratetype_names[mtype]);
  46. for (order = 0; order < MAX_ORDER; ++order) {--------------------------------然后按照order递增统计空闲个数。
  47. unsigned long freecount = 0;
  48. struct free_area *area;
  49. struct list_head *curr;
  50.  
  51. area = &(zone->free_area[order]);
  52.  
  53. list_for_each(curr, &area->free_list[mtype])
  54. freecount++;
  55. seq_printf(m, "%6lu ", freecount);
  56. }
  57. seq_putc(m, '\n');
  58. }
  59. }
  60.  
  61. /* Print out the free pages at each order for each migratetype */
  62. static int pagetypeinfo_showblockcount(struct seq_file *m, void *arg)
  63. {
  64. int mtype;
  65. pg_data_t *pgdat = (pg_data_t *)arg;
  66.  
  67. seq_printf(m, "\n%-23s", "Number of blocks type ");
  68. for (mtype = 0; mtype < MIGRATE_TYPES; mtype++)
  69. seq_printf(m, "%12s ", migratetype_names[mtype]);
  70. seq_putc(m, '\n');
  71. walk_zones_in_node(m, pgdat, pagetypeinfo_showblockcount_print);---------------遍历当前节点的不同zone
  72.  
  73. return 0;
  74. }
  75.  
  76. static void pagetypeinfo_showblockcount_print(struct seq_file *m,
  77. pg_data_t *pgdat, struct zone *zone)
  78. {
  79. int mtype;
  80. unsigned long pfn;
  81. unsigned long start_pfn = zone->zone_start_pfn;
  82. unsigned long end_pfn = zone_end_pfn(zone);
  83. unsigned long count[MIGRATE_TYPES] = { 0, };
  84.  
  85. for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {--------------遍历所有的pageblock,然后按照页面类型进行统计。
  86. struct page *page;
  87.  
  88. if (!pfn_valid(pfn))
  89. continue;
  90.  
  91. page = pfn_to_page(pfn);
  92.  
  93. /* Watch for unexpected holes punched in the memmap */
  94. if (!memmap_valid_within(pfn, page, zone))
  95. continue;
  96.  
  97. mtype = get_pageblock_migratetype(page);
  98.  
  99. if (mtype < MIGRATE_TYPES)
  100. count[mtype]++;
  101. }
  102.  
  103. /* Print counts */
  104. seq_printf(m, "Node %d, zone %8s ", pgdat->node_id, zone->name);
  105. for (mtype = 0; mtype < MIGRATE_TYPES; mtype++)
  106. seq_printf(m, "%12lu ", count[mtype]);
  107. seq_putc(m, '\n');
  108. }

1.5 /proc/vmstat

/proc/vmstat主要导出vm_stat[]、vm_numa_stat[]、vm_node_stat[]等数组的统计信息,对应的字符串在vmstat_text[]中;此外还包括writeback_stat_item和VM事件计数等信息。

  1. nr_free_pages
  2. nr_zone_inactive_anon
  3. nr_zone_active_anon
  4. nr_zone_inactive_file
  5. nr_zone_active_file
  6. nr_zone_unevictable
  7. nr_zone_write_pending
  8. nr_mlock
  9. nr_page_table_pages
  10. nr_kernel_stack
  11. nr_bounce
  12. nr_zspages
  13. nr_free_cma
  14. numa_hit
  15. numa_miss
  16. numa_foreign
  17. numa_interleave
  18. numa_local
  19. numa_other
  20. ...

/proc/vmstat对应的文件操作函数为vmstat_file_operations

vmstat_start()中获取各参数到v[]中,里面的数值和vmstat_text[]里的字符一一对应。

然后在vmstat_show()中一条一条打印出来。

  1. const char * const vmstat_text[] = {
  2. /* enum zone_stat_item countes */
  3. "nr_free_pages",
  4. "nr_zone_inactive_anon",
  5. "nr_zone_active_anon",
  6. "nr_zone_inactive_file",
  7. "nr_zone_active_file",
  8. "nr_zone_unevictable",
  9. "nr_zone_write_pending",
  10. "nr_mlock",
  11. "nr_page_table_pages",
  12. "nr_kernel_stack",
  13. "nr_bounce",
  14. ...
  15. };
  16.  
  17. static void *vmstat_start(struct seq_file *m, loff_t *pos)
  18. {
  19. unsigned long *v;
  20. int i, stat_items_size;
  21.  
  22. if (*pos >= ARRAY_SIZE(vmstat_text))
  23. return NULL;
  24. stat_items_size = NR_VM_ZONE_STAT_ITEMS * sizeof(unsigned long) +
  25. NR_VM_NUMA_STAT_ITEMS * sizeof(unsigned long) +
  26. NR_VM_NODE_STAT_ITEMS * sizeof(unsigned long) +
  27. NR_VM_WRITEBACK_STAT_ITEMS * sizeof(unsigned long);
  28.  
  29. #ifdef CONFIG_VM_EVENT_COUNTERS
  30. stat_items_size += sizeof(struct vm_event_state);
  31. #endif
  32.  
  33. v = kmalloc(stat_items_size, GFP_KERNEL);
  34. m->private = v;
  35. if (!v)
  36. return ERR_PTR(-ENOMEM);
  37. for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
  38. v[i] = global_zone_page_state(i);
  39. v += NR_VM_ZONE_STAT_ITEMS;
  40.  
  41. #ifdef CONFIG_NUMA
  42. for (i = 0; i < NR_VM_NUMA_STAT_ITEMS; i++)
  43. v[i] = global_numa_state(i);
  44. v += NR_VM_NUMA_STAT_ITEMS;
  45. #endif
  46.  
  47. for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
  48. v[i] = global_node_page_state(i);
  49. v += NR_VM_NODE_STAT_ITEMS;
  50.  
  51. global_dirty_limits(v + NR_DIRTY_BG_THRESHOLD,
  52. v + NR_DIRTY_THRESHOLD);
  53. v += NR_VM_WRITEBACK_STAT_ITEMS;
  54.  
  55. #ifdef CONFIG_VM_EVENT_COUNTERS
  56. all_vm_events(v);
  57. v[PGPGIN] /= 2; /* sectors -> kbytes */
  58. v[PGPGOUT] /= 2;
  59. #endif
  60. return (unsigned long *)m->private + *pos;
  61. }
  62.  
  63. static int vmstat_show(struct seq_file *m, void *arg)
  64. {
  65. unsigned long *l = arg;
  66. unsigned long off = l - (unsigned long *)m->private;
  67.  
  68. seq_puts(m, vmstat_text[off]);
  69. seq_put_decimal_ull(m, " ", *l);
  70. seq_putc(m, '\n');
  71. return 0;
  72. }
  73.  
  74. static const struct seq_operations vmstat_op = {
  75. .start =vmstat_start,
  76. .next = vmstat_next,
  77. .stop = vmstat_stop,
  78. .show =vmstat_show,
  79. };
  80.  
  81. static int vmstat_open(struct inode *inode, struct file *file)
  82. {
  83. return seq_open(file, &vmstat_op);
  84. }
  85.  
  86. static const struct file_operations vmstat_file_operations = {
  87. .open =vmstat_open,
  88. .read = seq_read,
  89. .llseek = seq_lseek,
  90. .release = seq_release,
  91. };
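在用户态读取/proc/vmstat时,只需按"名字 数值"的格式逐行解析。下面是取出指定计数器的一个C程序草图(nr_开头的计数以页为单位;pgfault等事件计数依赖CONFIG_VM_EVENT_COUNTERS,仅作示例):

    #include <stdio.h>
    #include <string.h>

    /* 在/proc/vmstat中查找名为name的计数器,未找到返回-1 */
    static long long vmstat_counter(const char *name)
    {
        FILE *fp = fopen("/proc/vmstat", "r");
        char key[64];
        long long val;

        if (!fp)
            return -1;
        while (fscanf(fp, "%63s %lld", key, &val) == 2) {
            if (strcmp(key, name) == 0) {
                fclose(fp);
                return val;
            }
        }
        fclose(fp);
        return -1;
    }

    int main(void)
    {
        printf("nr_free_pages = %lld\n", vmstat_counter("nr_free_pages"));
        printf("pgfault       = %lld\n", vmstat_counter("pgfault"));
        return 0;
    }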

1.6 /proc/vmallocinfo

提供vmalloc以及vmap区域的相关信息,每块区域一行。

  1. 0xffffaeec00000000-0xffffaeec00002000 acpi_os_map_iomem+0x17c/0x1b0 phys=0x0000000077fe9000 ioremap
  2. 0xffffaeec00002000-0xffffaeec00004000 acpi_os_map_iomem+0x17c/0x1b0 phys=0x0000000077faa000 ioremap
  3. 0xffffaeec00004000-0xffffaeec00006000 acpi_os_map_iomem+0x17c/0x1b0 phys=0x0000000077ffd000 ioremap
  4. ...
  5. 0xffffaeec00043000-0xffffaeec00045000 acpi_os_map_iomem+0x17c/0x1b0 phys=0x0000000077fcb000 ioremap
  6. 0xffffaeec00045000-0xffffaeec00047000 acpi_os_map_iomem+0x17c/0x1b0 phys=0x0000000077fe4000 ioremap
  7. 0xffffaeec00047000-0xffffaeec00049000 acpi_os_map_iomem+0x17c/0x1b0 phys=0x0000000077fee000 ioremap
  8. 0xffffaeec00049000-0xffffaeec0004b000 pci_iomap_range+0x63/0x80 phys=0x000000009432d000 ioremap
  9. 0xffffaeec0004b000-0xffffaeec0004d000 acpi_os_map_iomem+0x17c/0x1b0 phys=0x0000000077fc3000 ioremap
  10. ...
    0xffffaeec00c65000-0xffffaeec00c86000  135168 alloc_large_system_hash+0x19c/0x259 pages=32 vmalloc N0=32

/proc/vmallocinfo调用vmalloc_open()来遍历vmap_area_list,在s_show()中显示每个区域信息。

从下面的s_show()可知,第一列是区域的虚拟起止地址,第二列是区域的大小,第三列是调用者,第四列是对应的页面数量(如果有的话),第五列是物理地址,第六列是区域类型,最后是各NUMA节点上分配的页面数。

  1. static int s_show(struct seq_file *m, void *p)
  2. {
  3. struct vmap_area *va = p;
  4. struct vm_struct *v;
  5.  
  6. /*
  7. * s_show can encounter race with remove_vm_area, !VM_VM_AREA on
  8. * behalf of vmap area is being tear down or vm_map_ram allocation.
  9. */
  10. if (!(va->flags & VM_VM_AREA))
  11. return 0;
  12.  
  13. v = va->vm;
  14.  
  15. seq_printf(m, "0x%pK-0x%pK %7ld",
  16. v->addr, v->addr + v->size, v->size);
  17.  
  18. if (v->caller)
  19. seq_printf(m, " %pS", v->caller);
  20.  
  21. if (v->nr_pages)
  22. seq_printf(m, " pages=%d", v->nr_pages);
  23.  
  24. if (v->phys_addr)
  25. seq_printf(m, " phys=%llx", (unsigned long long)v->phys_addr);
  26.  
  27. if (v->flags & VM_IOREMAP)
  28. seq_puts(m, " ioremap");
  29.  
  30. if (v->flags & VM_ALLOC)
  31. seq_puts(m, " vmalloc");
  32.  
  33. if (v->flags & VM_MAP)
  34. seq_puts(m, " vmap");
  35.  
  36. if (v->flags & VM_USERMAP)
  37. seq_puts(m, " user");
  38.  
  39. if (v->flags & VM_VPAGES)
  40. seq_puts(m, " vpages");
  41.  
  42. show_numa_info(m, v);
  43. seq_putc(m, '\n');
  44. return 0;
  45. }
  46.  
  47. static const struct seq_operations vmalloc_op = {
  48. .start = s_start,
  49. .next = s_next,
  50. .stop = s_stop,
  51. .show =s_show,
  52. };
  53.  
  54. static int vmalloc_open(struct inode *inode, struct file *file)
  55. {
  56. if (IS_ENABLED(CONFIG_NUMA))
  57. return seq_open_private(file, &vmalloc_op,
  58. nr_node_ids * sizeof(unsigned int));
  59. else
  60. return seq_open(file, &vmalloc_op);
  61. }
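在用户态也可以按照s_show()的输出格式解析/proc/vmallocinfo,例如统计所有区域的总大小。下面是一个示意草图(非root用户看到的起止地址可能被隐藏为0,但第二列的大小字段仍然有效):

    #include <stdio.h>

    int main(void)
    {
        FILE *fp = fopen("/proc/vmallocinfo", "r");
        char line[512];
        unsigned long long start, end, total = 0;
        long size;

        if (!fp) {
            perror("fopen");
            return 1;
        }
        while (fgets(line, sizeof(line), fp)) {
            /* 每行开头为 "0x<start>-0x<end> <size> <caller> ..." */
            if (sscanf(line, "0x%llx-0x%llx %ld", &start, &end, &size) == 3)
                total += size;
        }
        fclose(fp);
        printf("vmalloc/vmap regions total: %llu kB\n", total / 1024);
        return 0;
    }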

1.7 /proc/self/statm、maps

1.7.1 /proc/self/statm

每个进程都有自己的statm,statm显示当前进程的内存使用情况,以page为单位。

statm一共7项,分别解释如下:

size:进程虚拟地址空间的大小。

resident:应用程序占用的物理内存大小。

shared:共享页面大小。

text:代码段占用的大小。

lib:为0。

data:data_vm+stack_vm占用的大小。

dt:脏页,为0。
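在用户态读取/proc/self/statm时,这7个字段都以页为单位,可结合页大小换算成kB。下面是一个示意性的C程序草图:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        FILE *fp = fopen("/proc/self/statm", "r");
        unsigned long size, resident, shared, text, lib, data, dt;
        long page_kb = sysconf(_SC_PAGESIZE) / 1024;

        if (!fp) {
            perror("fopen");
            return 1;
        }
        /* 7个字段依次为 size resident shared text lib data dt,单位是页 */
        if (fscanf(fp, "%lu %lu %lu %lu %lu %lu %lu",
                   &size, &resident, &shared, &text, &lib, &data, &dt) != 7) {
            fclose(fp);
            return 1;
        }
        fclose(fp);
        printf("size=%lu kB resident=%lu kB shared=%lu kB text=%lu kB data=%lu kB\n",
               size * page_kb, resident * page_kb, shared * page_kb,
               text * page_kb, data * page_kb);
        return 0;
    }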

/proc/self/statm的核心函数是proc_pid_statm(),通过task_statm()获取相关参数,然后打印。

  1. int proc_pid_statm(struct seq_file *m, struct pid_namespace *ns,
  2. struct pid *pid, struct task_struct *task)
  3. {
  4. unsigned long size = 0, resident = 0, shared = 0, text = 0, data = 0;
  5. struct mm_struct *mm = get_task_mm(task);
  6.  
  7. if (mm) {
  8. size = task_statm(mm, &shared, &text, &data, &resident);
  9. mmput(mm);
  10. }
  11. /*
  12. * For quick read, open code by putting numbers directly
  13. * expected format is
  14. * seq_printf(m, "%lu %lu %lu %lu 0 %lu 0\n",
  15. * size, resident, shared, text, data);
  16. */
  17. seq_put_decimal_ull(m, "", size);
  18. seq_put_decimal_ull(m, " ", resident);
  19. seq_put_decimal_ull(m, " ", shared);
  20. seq_put_decimal_ull(m, " ", text);
  21. seq_put_decimal_ull(m, " ", 0);
  22. seq_put_decimal_ull(m, " ", data);
  23. seq_put_decimal_ull(m, " ", 0);
  24. seq_putc(m, '\n');
  25.  
  26. return 0;
  27. }
  28.  
  29. unsigned long task_statm(struct mm_struct *mm,
  30. unsigned long *shared, unsigned long *text,
  31. unsigned long *data, unsigned long *resident)
  32. {
  33. *shared = get_mm_counter(mm, MM_FILEPAGES) +
  34. get_mm_counter(mm, MM_SHMEMPAGES);
  35. *text = (PAGE_ALIGN(mm->end_code) - (mm->start_code & PAGE_MASK))
  36. >> PAGE_SHIFT;
  37. *data = mm->data_vm + mm->stack_vm;
  38. *resident = *shared + get_mm_counter(mm, MM_ANONPAGES);
  39. return mm->total_vm;
  40. }

1.7.2 /proc/self/maps

maps显示当前进程各虚拟地址段的属性,包括虚拟地址段的起始终止地址、读写执行属性、vm_pgoff、主从设备号、i_ino、文件名。

  1. 6212616d000- r-xp : /bin/cat--------------------------只读、可执行,一般是代码段的位置。
  2. - r--p : /bin/cat-------------------------只读属性、不可执行。
  3. - rw-p : /bin/cat-------------------------读写、不可执行。
  4. 562126f5b000-562126f7c000 rw-p : [heap]
  5. 7fd5423d5000-7fd542da4000 r--p : /usr/lib/locale/locale-archive
  6. 7fd542da4000-7fd542f8b000 r-xp : /lib/x86_64-linux-gnu/libc-2.27.so
  7. 7fd542f8b000-7fd54318b000 ---p 001e7000 : /lib/x86_64-linux-gnu/libc-2.27.so
  8. 7fd54318b000-7fd54318f000 r--p 001e7000 : /lib/x86_64-linux-gnu/libc-2.27.so
  9. 7fd54318f000-7fd543191000 rw-p 001eb000 : /lib/x86_64-linux-gnu/libc-2.27.so
  10. 7fd543191000-7fd543195000 rw-p :
  11. 7fd543195000-7fd5431bc000 r-xp : /lib/x86_64-linux-gnu/ld-2.27.so
  12. 7fd54338d000-7fd54338f000 rw-p :
  13. 7fd54339a000-7fd5433bc000 rw-p :
  14. 7fd5433bc000-7fd5433bd000 r--p : /lib/x86_64-linux-gnu/ld-2.27.so
  15. 7fd5433bd000-7fd5433be000 rw-p : /lib/x86_64-linux-gnu/ld-2.27.so
  16. 7fd5433be000-7fd5433bf000 rw-p :
  17. 7ffe3ab8a000-7ffe3abab000 rw-p : [stack]
  18. 7ffe3abd5000-7ffe3abd8000 r--p : [vvar]
  19. 7ffe3abd8000-7ffe3abda000 r-xp : [vdso]
  20. ffffffffff600000-ffffffffff601000 r-xp : [vsyscall]
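按照上述行格式,用户态可以用sscanf逐行拆出各字段。下面是解析当前进程maps的一个C程序草图(仅作示例):

    #include <stdio.h>

    int main(void)
    {
        FILE *fp = fopen("/proc/self/maps", "r");
        char line[512];

        if (!fp) {
            perror("fopen");
            return 1;
        }
        while (fgets(line, sizeof(line), fp)) {
            unsigned long start, end, pgoff, ino;
            unsigned int major, minor;
            char perms[8], path[256] = "";

            /* 格式: start-end perms pgoff major:minor inode [path] */
            if (sscanf(line, "%lx-%lx %7s %lx %x:%x %lu %255s",
                       &start, &end, perms, &pgoff, &major, &minor, &ino, path) < 7)
                continue;
            printf("%#lx-%#lx %s %8lu kB %s\n",
                   start, end, perms, (end - start) / 1024, path);
        }
        fclose(fp);
        return 0;
    }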

首先要遍历当前进程的所有vma,然后show_map_vma()显示每个vma的详细信息。

vdso的全称是virtual dynamic shared object(虚拟动态共享对象),vsyscall的全称是virtual system call(虚拟系统调用)。

  1. static void
  2. show_map_vma(struct seq_file *m, struct vm_area_struct *vma, int is_pid)
  3. {
  4. struct mm_struct *mm = vma->vm_mm;
  5. struct file *file = vma->vm_file;
  6. vm_flags_t flags = vma->vm_flags;
  7. unsigned long ino = 0;
  8. unsigned long long pgoff = 0;
  9. unsigned long start, end;
  10. dev_t dev = 0;
  11. const char *name = NULL;
  12.  
  13. if (file) {
  14. struct inode *inode = file_inode(vma->vm_file);
  15. dev = inode->i_sb->s_dev;
  16. ino = inode->i_ino;
  17. pgoff = ((loff_t)vma->vm_pgoff) << PAGE_SHIFT;------------------------------是这个vma的第一页在地址空间里是第几页。
  18. }
  19.  
  20. start = vma->vm_start;
  21. end = vma->vm_end;
  22. show_vma_header_prefix(m, start, end, flags, pgoff, dev, ino);
  23.  
  24. /*
  25. * Print the dentry name for named mappings, and a
  26. * special [heap] marker for the heap:
  27. */
  28. if (file) {---------------------------------------------------------------------如果vm_file是文件,显示其路径。
  29. seq_pad(m, ' ');
  30. seq_file_path(m, file, "\n");
  31. goto done;
  32. }
  33.  
  34. if (vma->vm_ops && vma->vm_ops->name) {
  35. name = vma->vm_ops->name(vma);
  36. if (name)
  37. goto done;
  38. }
  39.  
  40. name = arch_vma_name(vma);
  41. if (!name) {
  42. if (!mm) {------------------------------------------------------------------不是文件映射且无名称时:如果mm为空,名称为[vdso]
  43. name = "[vdso]";
  44. goto done;
  45. }
  46.  
  47. if (vma->vm_start <= mm->brk &&
  48. vma->vm_end >= mm->start_brk) {
  49. name = "[heap]";
  50. goto done;
  51. }
  52.  
  53. if (is_stack(vma))
  54. name = "[stack]";
  55. }
  56.  
  57. done:
  58. if (name) {
  59. seq_pad(m, ' ');
  60. seq_puts(m, name);
  61. }
  62. seq_putc(m, '\n');
  63. }
  64.  
  65. static void show_vma_header_prefix(struct seq_file *m,
  66. unsigned long start, unsigned long end,
  67. vm_flags_t flags, unsigned long long pgoff,
  68. dev_t dev, unsigned long ino)
  69. {
  70. seq_setwidth(m, 25 + sizeof(void *) * 6 - 1);
  71. seq_printf(m, "%08lx-%08lx %c%c%c%c %08llx %02x:%02x %lu ",
  72. start,
  73. end,
  74. flags & VM_READ ? 'r' : '-',
  75. flags & VM_WRITE ? 'w' : '-',
  76. flags & VM_EXEC ? 'x' : '-',
  77. flags & VM_MAYSHARE ? 's' : 'p',
  78. pgoff,
  79. MAJOR(dev), MINOR(dev), ino);
  80. }

2. vm参数

2.1 /proc/sys/vm/highmem_is_dirtyable

首先highmem_is_dirtyable只有在CONFIG_HIGHMEM定义的情况下,才有效。

默认为0,即在计算dirty_ratio和dirty_background_ratio的时候只考虑low mem。当打开之后才会将highmem也计算在内。

2.2 /proc/sys/vm/legacy_va_layout

默认为0,即使用新的mmap布局;设置为非0则使用2.4内核的传统布局。

2.3 /proc/sys/vm/lowmem_reserve_ratio

lowmem_reserve_ratio用于防止高端zone在内存不充裕的情况下过度借用低端zone的内存。

lowmem_reserve_ratio决定了每个zone保留多少数目的页面。

sysctl_lowmem_reserve_ratio中定义了不同zone的预留比例,值越大保留比例越小。如,DMA为1/256,NORMAL为1/32。

  1. int sysctl_lowmem_reserve_ratio[MAX_NR_ZONES-1] = {
  2. #ifdef CONFIG_ZONE_DMA
  3. 256,
  4. #endif
  5. #ifdef CONFIG_ZONE_DMA32
  6. 256,
  7. #endif
  8. #ifdef CONFIG_HIGHMEM
  9. 32,
  10. #endif
  11. 32,
  12. };
  13.  
  14. static void setup_per_zone_lowmem_reserve(void)
  15. {
  16. struct pglist_data *pgdat;
  17. enum zone_type j, idx;
  18.  
  19. for_each_online_pgdat(pgdat) {
  20. for (j = 0; j < MAX_NR_ZONES; j++) {------------------------------------------遍历ZONE_DMA、ZONE_NORMAL、ZONE_MOVABLE等zone
  21. struct zone *zone = pgdat->node_zones + j;
  22. unsigned long managed_pages = zone->managed_pages;------------------------当前zone伙伴系统管理的页面数目
  23.  
  24. zone->lowmem_reserve[j] = 0;
  25.  
  26. idx = j;
  27. while (idx) {-------------------------------------------------------------遍历低于当前zone的zone
  28. struct zone *lower_zone;
  29.  
  30. idx--;----------------------------------------------------------------注意下面idx和j的区别,j表示当前zone,idx表示lower zone
  31.  
  32. if (sysctl_lowmem_reserve_ratio[idx] < 1)-----------------------------最低不小于1,不可能预留超过内存总量的大小。
  33. sysctl_lowmem_reserve_ratio[idx] = 1;
  34.  
  35. lower_zone = pgdat->node_zones + idx;
  36. lower_zone->lowmem_reserve[j] = managed_pages /
  37. sysctl_lowmem_reserve_ratio[idx];----------------------------------更新lower zone关于当前zone的lowmem_reserve
  38. managed_pages += lower_zone->managed_pages;----------------------------managed_pages累加
  39. }
  40. }
  41. }
  42.  
  43. /* update totalreserve_pages */
  44. calculate_totalreserve_pages();----------------------------------------------------更新totalreserve_pages
  45. }
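为了直观理解上面的算法,下面用一个独立的C程序草图按同样的公式计算lowmem_reserve,其中三个zone的managed_pages取值纯属假设,仅用于演示:

    #include <stdio.h>

    #define MAX_ZONES 3

    int main(void)
    {
        /* 假设三个zone各自由伙伴系统管理的页面数,数值仅为示例 */
        const char *names[MAX_ZONES] = { "DMA", "DMA32", "Normal" };
        unsigned long managed[MAX_ZONES] = { 3977, 433000, 1600000 };
        /* 对应sysctl_lowmem_reserve_ratio中的比例 */
        unsigned long ratio[MAX_ZONES] = { 256, 256, 32 };
        unsigned long reserve[MAX_ZONES][MAX_ZONES] = { { 0 } };

        for (int j = 0; j < MAX_ZONES; j++) {
            unsigned long pages = managed[j];

            for (int idx = j - 1; idx >= 0; idx--) {
                /* lower zone针对高端zone j预留 累计managed/ratio[idx] 个页面 */
                reserve[idx][j] = pages / ratio[idx];
                pages += managed[idx];
            }
        }

        for (int i = 0; i < MAX_ZONES; i++) {
            printf("%-7s lowmem_reserve:", names[i]);
            for (int j = 0; j < MAX_ZONES; j++)
                printf(" %lu", reserve[i][j]);
            printf("\n");
        }
        return 0;
    }

以上面的假设参数为例,DMA zone针对来自Normal的分配预留约(1600000+433000)/256≈7941个页面。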

2.4 /proc/sys/vm/max_map_count 、/proc/sys/vm/mmap_min_addr

max_map_count规定了mmap区域的最大数目,默认值是65536。

mmap_min_addr规定了用户进程可以mmap的最低虚拟地址,低于该地址的映射请求会被拒绝,用于防范空指针解引用类攻击,默认是4096。

2.5 /proc/sys/vm/min_free_kbytes

min_free_kbytes是强制系统lowmem保持最低限度的空闲内存大小,这个值用于计算WMARK_MIN水位。

如果设置过低,可能造成系统在高负荷下易死锁;如果设置过高,又容易触发OOM机制。

2.6 /proc/sys/vm/stat_interval

VM统计信息的采样周期,默认1秒。

2.7 /proc/sys/vm/vfs_cache_pressure

vfs_cache_pressure用于控制dentry/inode缓存回收的倾向性,默认值为100。这里的倾向性是和pagecache/swapcache回收相对比的。

当vfs_cache_pressure=100,是对两者采取一个平衡的策略。

当vfs_cache_pressure小于100,更倾向于保留dentry/inode类型页面。

当vfs_cache_pressure大于100,更倾向于回收dentry/inode类型页面。

当vfs_cache_pressure为0时,内核不会回收dentry/inode类型页面。

当vfs_cache_pressure远高于100时,可能引起性能回退,因为内存回收会持有很多锁来查找可释放页面。

2.8 /proc/sys/vm/page-cluster

一次从swap分区读取的页面阶数,0表示1页,1表示2页。类似于pagecache的预读取功能。

主要用于提高从swap恢复的读性能。
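page-cluster与一次读入的页面数是2的幂关系,下面的C程序草图读出该值并做换算(仅为示例):

    #include <stdio.h>

    int main(void)
    {
        FILE *fp = fopen("/proc/sys/vm/page-cluster", "r");
        int cluster = 0;

        if (!fp) {
            perror("fopen");
            return 1;
        }
        if (fscanf(fp, "%d", &cluster) != 1)
            cluster = 0;
        fclose(fp);
        /* 0表示1页,1表示2页,2表示4页,以此类推 */
        printf("page-cluster=%d, swap readahead pages=%d\n", cluster, 1 << cluster);
        return 0;
    }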

3. swap

3.1 /proc/swaps

/proc/swaps的文件操作函数为proc_swaps_operations。

swap_start()遍历swap_info[]所有swap文件,然后在swap_show()中显示每个swap文件的信息。

  1. static void *swap_start(struct seq_file *swap, loff_t *pos)
  2. {
  3. struct swap_info_struct *si;
  4. int type;
  5. loff_t l = *pos;
  6.  
  7. mutex_lock(&swapon_mutex);
  8.  
  9. if (!l)
  10. return SEQ_START_TOKEN;
  11.  
  12. for (type = 0; type < nr_swapfiles; type++) {
  13. smp_rmb(); /* read nr_swapfiles before swap_info[type] */
  14. si = swap_info[type];
  15. if (!(si->flags & SWP_USED) || !si->swap_map)
  16. continue;
  17. if (!--l)
  18. return si;
  19. }
  20.  
  21. return NULL;
  22. }
  23.  
  24. static int swap_show(struct seq_file *swap, void *v)
  25. {
  26. struct swap_info_struct *si = v;
  27. struct file *file;
  28. int len;
  29.  
  30. if (si == SEQ_START_TOKEN) {
  31. seq_puts(swap,"Filename\t\t\t\tType\t\tSize\tUsed\tPriority\n");
  32. return 0;
  33. }
  34.  
  35. file = si->swap_file;
  36. len = seq_file_path(swap, file, " \t\n\\");-----------------根据file显示swap文件的名称。
  37. seq_printf(swap, "%*s%s\t%u\t%u\t%d\n",
  38. len < 40 ? 40 - len : 1, " ",
  39. S_ISBLK(file_inode(file)->i_mode) ?-----------------判断swap文件类型是块设备分区还是一个文件
  40. "partition" : "file\t",
  41. si->pages << (PAGE_SHIFT - 10),---------------------以KB为单位的swap总大小
  42. si->inuse_pages << (PAGE_SHIFT - 10),---------------以KB为单位的被使用部分大小
  43. si->prio);------------------------------------------swap优先级
  44. return 0;
  45. }
  46.  
  47. static const struct seq_operations swaps_op = {
  48. .start =swap_start,
  49. .next = swap_next,
  50. .stop = swap_stop,
  51. .show =swap_show
  52. };

示例如下:

  1. Filename Type Size Used Priority
  2. /dev/sda7 partition -

3.2 /proc/sys/vm/swappiness

4. zone

/proc/zoneinfo

5. slab

/proc/slab_allocators

/proc/slabinfo

slabinfo

6. KSM

/sys/kernel/mm/ksm

7. 页面迁移

/sys/kernel/debug/tracing/events/migrate

8. 内存规整

/proc/sys/vm/compact_memory、/proc/sys/vm/extfrag_threshold

向compact_memory写入1触发内存规整,extfrag_threshold是内存规整的碎片阈值。

两者详情见:compact_memory、extfrag_threshold。

/sys/kernel/debug/extfrag

/sys/kernel/debug/tracing/events/compaction

9. OOM

关于OOM的介绍见《Linux内存管理 (21)OOM》。

/proc/sys/vm/panic_on_oom

当Kernel遇到OOM的时候,根据panic_on_oom采取行动,有两种:

  • panic_on_oom==2:总是产生内核panic;panic_on_oom==1:仅在全局OOM(CONSTRAINT_NONE)时panic,cpuset/mempolicy/memcg引发的OOM不panic
  • panic_on_oom==0:启动OOM选择进程,杀死以释放内存
  1. /*
  2. * Determines whether the kernel must panic because of the panic_on_oom sysctl.
  3. */
  4. void check_panic_on_oom(struct oom_control *oc, enum oom_constraint constraint,
  5. struct mem_cgroup *memcg)
  6. {
  7. if (likely(!sysctl_panic_on_oom))
  8. return;
  9. if (sysctl_panic_on_oom != 2) {
  10. /*
  11. * panic_on_oom == 1 only affects CONSTRAINT_NONE, the kernel
  12. * does not panic for cpuset, mempolicy, or memcg allocation
  13. * failures.
  14. */
  15. if (constraint != CONSTRAINT_NONE)
  16. return;
  17. }
  18. /* Do not panic for oom kills triggered by sysrq */
  19. if (is_sysrq_oom(oc))
  20. return;
  21. dump_header(oc, NULL, memcg);
  22. panic("Out of memory: %s panic_on_oom is enabled\n",
  23. sysctl_panic_on_oom == 2 ? "compulsory" : "system-wide");
  24. }

/proc/sys/vm/oom_kill_allocating_task

在触发OOM的情况下,选择杀死哪个进程的策略是有个oom_kill_allocating_task来决定。

  • oom_kill_allocating_task==1:谁触发了OOM就杀死谁
  • oom_kill_allocating_task==0:在系统范围内选择最'bad'的进程杀死

默认情况下该变量为0,如果配置为1,则当内存耗尽或内存不足以满足所需的分配时,会把当前正在申请内存的进程杀掉。

  1. bool out_of_memory(struct oom_control *oc)
  2. {
  3. ...
  4. if (sysctl_oom_kill_allocating_task && current->mm &&----------------------选择当前进程进行处理
  5. !oom_unkillable_task(current, NULL, oc->nodemask) &&
  6. current->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
  7. get_task_struct(current);
  8. oom_kill_process(oc, current, 0, totalpages, NULL,
  9. "Out of memory (oom_kill_allocating_task)");
  10. return true;
  11. }
  12.  
  13. p = select_bad_process(oc, &points, totalpages);---------------------------在系统范围内选择最'bad'进程进行处理
  14. ...
  15. return true;
  16. }

/proc/sys/vm/oom_dump_tasks

决定OOM发生时是否调用dump_tasks()打印系统中各进程的内存使用信息:oom_dump_tasks==1则打印,否则不打印。

/proc/xxx/oom_score、/proc/xxx/oom_adj、/proc/xxx/oom_score_adj

这三个参数都是与具体进程相关的,其中oom_score是只读的。

  1. static const struct pid_entry tid_base_stuff[] = {
  2. ...
  3. ONE("oom_score", S_IRUGO, proc_oom_score),
  4. REG("oom_adj", S_IRUGO|S_IWUSR, proc_oom_adj_operations),
  5. REG("oom_score_adj", S_IRUGO|S_IWUSR, proc_oom_score_adj_operations),
  6. ...
  7. }

oom_score的结果来自于oom_badness,主要来自两部分,一是根据进程内存使用情况打分,另一部分来自于用户打分即oom_score_adj。

如果oom_score_adj为OOM_SCORE_ADJ_MIN的话,就禁止了OOM杀死进程。

oom_adj是一个旧接口参数,取值范围是[-17, 15],其中-17(OOM_DISABLE)表示禁止OOM杀死该进程。oom_adj通过一定计算转换成oom_score_adj。

oom_score_adj通过用户空间直接写入进程的signal->oom_score_adj。

这三者之间关系简单概述:oom_adj映射到oom_score_adj;oom_score_adj作为一部分计算出oom_score;oom_score才是OOM机制选择'bad'进程的依据。

oom_score和oom_score_adj的关系

内核首先根据内存使用情况计算出points得分,oom_score_adj的范围是[-1000, 1000],adj的值是将oom_score_adj归一化后乘以totalpages的结果。

如果oom_score_adj为0,则不计入oom_score_adj的影响。

如果oom_score_adj为负数,则最终得分会变小,进程降低被选中可能性。

如果oom_score_adj为正数,则加大被选为'bad'的可能性。

  1. unsigned long oom_badness(struct task_struct *p, struct mem_cgroup *memcg,
  2. const nodemask_t *nodemask, unsigned long totalpages)
  3. {
  4. ...
  5. /* Normalize to oom_score_adj units */
  6. adj *= totalpages / 1000;
  7. points += adj;
  8. ...
  9. }

oom_adj和oom_score_adj的关系

可以看出oom_adj从区间[-17, 15]被映射到oom_score_adj区间[-1000, 1000]。

  1. static ssize_t oom_adj_write(struct file *file, const char __user *buf,
  2. size_t count, loff_t *ppos)
  3. {
  4. ...
  5. /*
  6. * Scale /proc/pid/oom_score_adj appropriately ensuring that a maximum
  7. * value is always attainable.
  8. */
  9. if (oom_adj == OOM_ADJUST_MAX)--------------------------------------如果oom_adj等于OOM_ADJUST_MAX,则对应OOM_SCORE_ADJ_MAX
  10. oom_adj = OOM_SCORE_ADJ_MAX;
  11. else
  12. oom_adj = (oom_adj * OOM_SCORE_ADJ_MAX) / -OOM_DISABLE;---------通过公式将旧oom_adj映射到oom_score_adj区间。
  13.  
  14. if (oom_adj < task->signal->oom_score_adj &&
  15. !capable(CAP_SYS_RESOURCE)) {-----------------------------------判断修改权限是否满足CAP_SYS_RESOURCE
  16. err = -EACCES;
  17. goto err_sighand;
  18. }
  19. ...
  20. task->signal->oom_score_adj = oom_adj;------------------------------将转换后的oom_adj值写入oom_score_adj
  21. ...
  22. }
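作为使用示例,下面的用户态C程序草图向当前进程的oom_score_adj写入一个正值,再读回内核据此计算出的oom_score(提高自身oom_score_adj不需要特殊权限,仅作示意):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char path[64];
        FILE *fp;
        int score = -1;

        /* 写入oom_score_adj,范围[-1000, 1000],值越大越容易被OOM killer选中 */
        snprintf(path, sizeof(path), "/proc/%d/oom_score_adj", getpid());
        fp = fopen(path, "w");
        if (fp) {
            fprintf(fp, "%d\n", 500);
            fclose(fp);
        }

        /* 读回内核计算出的oom_score */
        snprintf(path, sizeof(path), "/proc/%d/oom_score", getpid());
        fp = fopen(path, "r");
        if (fp) {
            if (fscanf(fp, "%d", &score) != 1)
                score = -1;
            fclose(fp);
        }
        printf("pid=%d oom_score=%d\n", getpid(), score);
        return 0;
    }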

/sys/kernel/debug/tracing/events/oom

参考文档:《Linux vm运行参数之(二):OOM相关的参数》

10. Overcommit

参考文档:《理解LINUX的MEMORY OVERCOMMIT》

当进程需要内存时,进程从内核获得的仅仅是一段虚拟地址的使用权,而不是实际的物理内存。

实际的物理内存只有当进程真正去访问时,才通过缺页异常进入实际物理内存的分配流程。

这样虚拟内存分配和物理内存分配就被分离开了,虚拟内存的分配量可以超过物理内存的限制,这种情况称为Overcommit。

相关参数初始化:

  1. int sysctl_overcommit_memory = OVERCOMMIT_GUESS; /* heuristic overcommit */
  2. int sysctl_overcommit_ratio = 50; /* default is 50% */
  3. unsigned long sysctl_overcommit_kbytes __read_mostly;
  4. unsigned long sysctl_user_reserve_kbytes __read_mostly = 1UL << 17; /* 128MB */
  5. unsigned long sysctl_admin_reserve_kbytes __read_mostly = 1UL << 13; /* 8MB */

10.1 /proc/sys/vm/overcommit_memory

关于Overcommit的策略有三种:

#define OVERCOMMIT_GUESS 0---------让内核根据自己当前状况进行判断。
#define OVERCOMMIT_ALWAYS 1-------不限制Overcommit,无论进程申请多少虚拟地址空间。
#define OVERCOMMIT_NEVER 2---------不允许Overcommit,会根据overcommit_ratio计算出一个overcommit阈值。

overcommit_memory ==0,系统默认设置,释放较少物理内存,使得oom-kill机制运作比较明显。

Heuristic overcommit handling. 这是缺省值,它允许overcommit,但过于明目张胆的overcommit会被拒绝,比如malloc一次性申请的内存大小就超过了系统总内存。

Heuristic的意思是“试探式的”,内核利用某种算法猜测你的内存申请是否合理,它认为不合理就会拒绝overcommit。

overcommit_memory == 1,会从buffer中释放较多物理内存,oom-kill也会继续起作用;

允许overcommit,对内存申请来者不拒。

overcommit_memory == 2,物理内存使用完后,打开任意一个程序均显示内存不足;

禁止overcommit。CommitLimit 就是overcommit的阈值,申请的内存总数超过CommitLimit的话就算是overcommit。

也就是说,如果overcommit_memory==2时,内存耗尽时,oom-kill是不会起作用的,系统不会再打开其他程序了,只有等待正在运行的进程释放内存。

  1. int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin)
  2. {
  3. long free, allowed, reserve;
  4.  
  5. VM_WARN_ONCE(percpu_counter_read(&vm_committed_as) <
  6. -(s64)vm_committed_as_batch * num_online_cpus(),
  7. "memory commitment underflow");
  8.  
  9. vm_acct_memory(pages);
  10.  
  11. /*
  12. * Sometimes we want to use more memory than we have
  13. */
  14. if (sysctl_overcommit_memory == OVERCOMMIT_ALWAYS)-----------------------------------OVERCOMMIT_ALWAYS不会对内存申请做限制。
  15. return 0;
  16.  
  17. if (sysctl_overcommit_memory == OVERCOMMIT_GUESS) {----------------------------------OVERCOMMIT_GUESS情况下对内存申请处理。
  18. free = global_page_state(NR_FREE_PAGES);
  19. free += global_page_state(NR_FILE_PAGES);
  20.  
  21. /*
  22. * shmem pages shouldn't be counted as free in this
  23. * case, they can't be purged, only swapped out, and
  24. * that won't affect the overall amount of available
  25. * memory in the system.
  26. */
  27. free -= global_page_state(NR_SHMEM);
  28.  
  29. free += get_nr_swap_pages();
  30.  
  31. /*
  32. * Any slabs which are created with the
  33. * SLAB_RECLAIM_ACCOUNT flag claim to have contents
  34. * which are reclaimable, under pressure. The dentry
  35. * cache and most inode caches should fall into this
  36. */
  37. free += global_page_state(NR_SLAB_RECLAIMABLE);
  38.  
  39. /*
  40. * Leave reserved pages. The pages are not for anonymous pages.
  41. */
  42. if (free <= totalreserve_pages)
  43. goto error;
  44. else
  45. free -= totalreserve_pages;
  46.  
  47. /*
  48. * Reserve some for root
  49. */
  50. if (!cap_sys_admin)
  51. free -= sysctl_admin_reserve_kbytes >> (PAGE_SHIFT - 10);
  52.  
  53. if (free > pages)
  54. return 0;
  55.  
  56. goto error;
  57. }
  58.  
  59. allowed = vm_commit_limit();
  60. /*
  61. * Reserve some for root
  62. */
  63. if (!cap_sys_admin)
  64. allowed -= sysctl_admin_reserve_kbytes >> (PAGE_SHIFT - 10);
  65.  
  66. /*
  67. * Don't let a single process grow so big a user can't recover
  68. */
  69. if (mm) {
  70. reserve = sysctl_user_reserve_kbytes >> (PAGE_SHIFT - 10);
  71. allowed -= min_t(long, mm->total_vm / 32, reserve);
  72. }
  73.  
  74. if (percpu_counter_read_positive(&vm_committed_as) < allowed)
  75. return 0;
  76. error:
  77. vm_unacct_memory(pages);
  78.  
  79. return -ENOMEM;
  80. }

10.2 /proc/sys/vm/overcommit_kbytes、/proc/sys/vm/overcommit_ratio

在overcommit_memory被设置为OVERCOMMIT_NEVER的情况下,通过vm_commit_limit()计算允许的Overcommit量,即/proc/meminfo中的CommitLimit。

  1. unsigned long vm_commit_limit(void)
  2. {
  3. unsigned long allowed;
  4.  
  5. if (sysctl_overcommit_kbytes)
  6. allowed = sysctl_overcommit_kbytes >> (PAGE_SHIFT - 10);
  7. else
  8. allowed = ((totalram_pages - hugetlb_total_pages())
  9. * sysctl_overcommit_ratio / 100);
  10. allowed += total_swap_pages;
  11.  
  12. return allowed;
  13. }
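结合上面的公式,可以在用户态从/proc/meminfo读出CommitLimit和Committed_AS,观察当前提交量与阈值的差距。下面是一个示意性的C程序草图:

    #include <stdio.h>
    #include <string.h>

    /* 读取/proc/meminfo中指定字段的数值(单位kB),未找到时返回0 */
    static unsigned long meminfo_kb(const char *name)
    {
        FILE *fp = fopen("/proc/meminfo", "r");
        char line[256];
        unsigned long val = 0;
        size_t len = strlen(name);

        if (!fp)
            return 0;
        while (fgets(line, sizeof(line), fp)) {
            if (strncmp(line, name, len) == 0 && line[len] == ':') {
                sscanf(line + len + 1, "%lu", &val);
                break;
            }
        }
        fclose(fp);
        return val;
    }

    int main(void)
    {
        unsigned long limit = meminfo_kb("CommitLimit");
        unsigned long committed = meminfo_kb("Committed_AS");

        /* 只有overcommit_memory==2时,Committed_AS超过CommitLimit的新申请才会被拒绝 */
        printf("CommitLimit=%lu kB Committed_AS=%lu kB headroom=%ld kB\n",
               limit, committed, (long)limit - (long)committed);
        return 0;
    }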

/proc/sys/vm/admin_reserve_kbytes、/proc/sys/vm/user_reserve_kbytes

分别为root用户和普通用户保留操作所需的内存。

参考文档:《Linux vm运行参数之(一):overcommit相关的参数》

/sys/kernel/debug/memblock

/sys/kernel/debug/tracing/events/kmem

/sys/kernel/debug/tracing/events/pagemap

/sys/kernel/debug/tracing/events/skb

/sys/kernel/debug/tracing/events/vmscan

block_dump

11. 文件缓存回写

/proc/sys/vm/dirty_background_bytes

/proc/sys/vm/dirty_background_ratio

/proc/sys/vm/dirty_bytes

/proc/sys/vm/dirty_ratio

/proc/sys/vm/dirty_expire_centisecs

脏数据的超时时间,超过这个时间的脏数据将被马上放入回写队列,单位是百分之一秒,默认值是30秒。

  1. /*
  2. * The longest time for which data is allowed to remain dirty
  3. */
  4. unsigned int dirty_expire_interval = 30 * 100; /* centiseconds */

/proc/sys/vm/dirty_writeback_centisecs

回写线程的循环周期,默认5秒。

  1. /*
  2. * The interval between `kupdate'-style writebacks
  3. */
  4. unsigned int dirty_writeback_interval = 5 * 100; /* centiseconds */

/proc/sys/vm/dirtytime_expire_seconds

/proc/sys/vm/drop_caches

drop_caches会触发一系列页面回收操作,注意只丢弃clean caches,包括可回收slab对象(如dentry/inode)和干净的文件缓存页面。

echo 1 > /proc/sys/vm/drop_caches------------------释放pagecache页面

echo 2 > /proc/sys/vm/drop_caches------------------释放可回收slab对象,包括dentry和inode

echo 3 > /proc/sys/vm/drop_caches------------------释放前两者之和

由于drop_caches只释放clean caches,如果想释放更多内存,需要先执行sync进行文件系统同步。这样可以最小化脏页数量,并创造出更多可以被drop的clean caches。

操作drop_caches可能会造成性能问题,因为被丢弃的内容,可能会被立即需要,从而产生大量的I/O和CPU负荷。
