Cloudera Certified Associate Administrator Case Studies: Test

Author: Yin Zhengjie

Copyright notice: This is original work. Unauthorized reproduction is prohibited; violators will be held legally responsible.

I. Preparation (upgrading CM to the 60-day Cloudera Enterprise trial)

1>. In the CM console, click "Try Cloudera Enterprise for 60 Days"

2>. The license page shows that the current edition is "Cloudera Express"; click "Try Cloudera Enterprise for 60 Days"

3>. Click Confirm

4>. The upgrade wizard opens; click "Continue"

5>. The upgrade is complete

6>. View the CM home page (a scripted alternative via the CM API is sketched below)
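
If you need to check the license state or start the trial from a script, the Cloudera Manager REST API can do what the wizard does. This is only a sketch: the admin/admin credentials and the v19 API version are assumptions based on common CM 5.15 defaults.

  # Show the current license; an Express install without a license returns no active license.
  curl -s -u admin:admin 'http://node101.yinzhengjie.org.cn:7180/api/v19/cm/license'
  # Start the 60-day Enterprise trial, equivalent to the button in the wizard.
  curl -s -u admin:admin -X POST 'http://node101.yinzhengjie.org.cn:7180/api/v19/cm/trial/begin'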

II. Using the snapshot feature of the enterprise-edition CM

1>. Under the HDFS service, click "File Browser"

2>. Navigate to our test directory

3>. Click "Enable Snapshots"

4>. A confirmation dialog pops up; click "Enable Snapshots"

5>. Snapshots are now enabled

6>. Click "Take Snapshot"

7>. Give the snapshot a name

8>. Wait for the snapshot to be created

9>. The snapshot was created successfully (command-line equivalents are sketched below)
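
The same snapshot workflow is available from the HDFS command line. A sketch follows; the snapshot name "log-snapshot" is an assumption standing in for whatever name was entered in step 7>.

  # Make the directory snapshottable (requires HDFS superuser privileges).
  hdfs dfsadmin -allowSnapshot /yinzhengjie/debug/hdfs/log
  # Take a named snapshot of the directory.
  hdfs dfs -createSnapshot /yinzhengjie/debug/hdfs/log log-snapshot
  # List every directory that has snapshots enabled.
  hdfs lsSnapshottableDir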

10>. Permanently delete the snapshotted file

  [root@node101.yinzhengjie.org.cn ~]# hdfs dfs -ls /yinzhengjie/debug/hdfs/log
  Found 1 items
  -rw-r--r--   root supergroup   /yinzhengjie/debug/hdfs/log/timestamp_1560583829
  [root@node101.yinzhengjie.org.cn ~]#
  [root@node101.yinzhengjie.org.cn ~]# hdfs dfs -rm -skipTrash /yinzhengjie/debug/hdfs/log/timestamp_1560583829
  Deleted /yinzhengjie/debug/hdfs/log/timestamp_1560583829
  [root@node101.yinzhengjie.org.cn ~]#
  [root@node101.yinzhengjie.org.cn ~]# hdfs dfs -ls /yinzhengjie/debug/hdfs/log
  [root@node101.yinzhengjie.org.cn ~]#

[root@node101.yinzhengjie.org.cn ~]# hdfs dfs -rm -skipTrash /yinzhengjie/debug/hdfs/log/timestamp_1560583829    # -skipTrash bypasses the Trash, so the file is removed permanently
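
Even though -skipTrash removed the file from the live directory for good, the snapshotted copy is still intact under the hidden .snapshot path. A quick check, again assuming the snapshot name "log-snapshot":

  # Snapshots are exposed read-only under <dir>/.snapshot/<snapshot-name>.
  hdfs dfs -ls /yinzhengjie/debug/hdfs/log/.snapshot/log-snapshot
  # The deleted timestamp_1560583829 file should still be listed (and readable) here.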

III. Restoring data from the most recent snapshot

  Problem:
    A user stored important files on HDFS but accidentally deleted them. Fortunately, the directory had been made snapshottable and a snapshot had been taken. Use the most recent snapshot to restore the data: recover all files under "/yinzhengjie/debug/hdfs/log", including their original permissions, owner, ACLs, and so on.

  Solution:
    Snapshots are very useful in day-to-day operations, not just in testing. An earlier post on this blog covered managing snapshots from the command line on Hadoop 2.9.2; this time we do it through CM.

1>. Click the HDFS service

2>. Click "File Browser"

3>. Navigate to the directory whose data we want to restore, and click "Restore Directory From Snapshot"

4>. Choose the snapshot and the restore method

5>. When the restore finishes, click "Close"

6>. Refresh the page; the data has been restored

7>. Restore the file permissions (a command-line equivalent follows)
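
From the command line, the same restore is a copy out of the .snapshot directory. The -ptopax option preserves timestamps, ownership, permissions, ACLs, and XAttrs, which satisfies the requirement to restore the original permissions, owner, and ACLs; "log-snapshot" is again an assumed snapshot name.

  # Copy the file back from the snapshot, preserving timestamps (t), ownership (o),
  # permissions (p), ACLs (a), and extended attributes (x).
  hdfs dfs -cp -ptopax \
      /yinzhengjie/debug/hdfs/log/.snapshot/log-snapshot/timestamp_1560583829 \
      /yinzhengjie/debug/hdfs/log/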

IV. Running a MapReduce job

  Problem:
    An operations engineer tried to tune the cluster, but the changes broke some MapReduce jobs that used to run. Identify and correct the problem, then run the performance test successfully: find hadoop-mapreduce-examples.jar on the Linux file system and use it to complete three steps:
      >. teragen: generate 10,000,000 rows of test records into /user/yinzhengjie/data/day001/test_input
      >. terasort: sort /user/yinzhengjie/data/day001/test_input and write the result to /user/yinzhengjie/data/day001/test_output
      >. teravalidate: check /user/yinzhengjie/data/day001/test_output and write the report to /user/yinzhengjie/data/day001/ts_validate

  Solution:
    You need to be able to troubleshoot the common MapReduce job failures. Work through the steps above and fix whatever problems come up (a sketch of one typical check follows).
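
One frequent way careless tuning breaks previously working jobs is a container-size mismatch: if mapreduce.map.memory.mb or mapreduce.reduce.memory.mb is raised above yarn.scheduler.maximum-allocation-mb, YARN rejects every container request and the job never starts. The sketch below is a generic check under that assumption, not the specific fault on this cluster; /etc/hadoop/conf is CDH's usual client-configuration path.

  # Compare YARN's per-container ceiling with the job's per-task memory requests.
  grep -A1 'yarn.scheduler.maximum-allocation-mb' /etc/hadoop/conf/yarn-site.xml
  grep -A1 'mapreduce.map.memory.mb' /etc/hadoop/conf/mapred-site.xml
  # Or override the requests for a single job to confirm the diagnosis:
  hadoop jar hadoop-mapreduce-examples.jar teragen \
      -Dmapreduce.map.memory.mb=1024 -Dmapreduce.reduce.memory.mb=1024 \
      10000000 /user/yinzhengjie/data/day001/test_input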

1>. Generate the input data

  [root@node101.yinzhengjie.org.cn ~]# find / -name hadoop-mapreduce-examples.jar
  /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar
  [root@node101.yinzhengjie.org.cn ~]#
  [root@node101.yinzhengjie.org.cn ~]# cd /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/lib/hadoop-mapreduce
  [root@node101.yinzhengjie.org.cn /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/lib/hadoop-mapreduce]#
  [root@node101.yinzhengjie.org.cn /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/lib/hadoop-mapreduce]# hadoop jar hadoop-mapreduce-examples.jar teragen 10000000 /user/yinzhengjie/data/day001/test_input
  INFO terasort.TeraGen: Generating 10000000 using 2
  INFO mapreduce.JobSubmitter: number of splits:2
  INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1558520562958_0001
  INFO impl.YarnClientImpl: Submitted application application_1558520562958_0001
  INFO mapreduce.Job: The url to track the job: http://node101.yinzhengjie.org.cn:8088/proxy/application_1558520562958_0001/
  INFO mapreduce.Job: Running job: job_1558520562958_0001
  INFO mapreduce.Job: Job job_1558520562958_0001 running in uber mode : false
  ...(map/reduce progress lines elided)...
  INFO mapreduce.Job: Job job_1558520562958_0001 completed successfully
  INFO mapreduce.Job: Counters:
  ...(File System, Job, Map-Reduce Framework, and TeraGen counter values elided)...
  [root@node101.yinzhengjie.org.cn /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/lib/hadoop-mapreduce]#

[root@node101.yinzhengjie.org.cn /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/lib/hadoop-mapreduce]# hadoop jar hadoop-mapreduce-examples.jar teragen 10000000 /user/yinzhengjie/data/day001/test_input
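
Each row TeraGen writes is exactly 100 bytes, so 10,000,000 rows come to 1,000,000,000 bytes, about 0.93 GiB. A quick way to confirm the size of the generated input:

  # Summarize the input directory; expect roughly 1,000,000,000 bytes in total.
  hdfs dfs -du -s -h /user/yinzhengjie/data/day001/test_input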

2>. Sort the data and write the output

  [root@node101.yinzhengjie.org.cn /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/lib/hadoop-mapreduce]# pwd
  /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/lib/hadoop-mapreduce
  [root@node101.yinzhengjie.org.cn /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/lib/hadoop-mapreduce]#
  [root@node101.yinzhengjie.org.cn /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/lib/hadoop-mapreduce]# hadoop jar hadoop-mapreduce-examples.jar terasort /user/yinzhengjie/data/day001/test_input /user/yinzhengjie/data/day001/test_output
  INFO terasort.TeraSort: starting
  INFO input.FileInputFormat: Total input paths to process : 2
  Spent 151ms computing base-splits.
  Spent 3ms computing TeraScheduler splits.
  Computing input splits took 155ms
  ...(partition sampler output elided)...
  Computing parititions took 1019ms
  Spent 1178ms computing partitions.
  INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1558520562958_0002
  INFO impl.YarnClientImpl: Submitted application application_1558520562958_0002
  INFO mapreduce.Job: The url to track the job: http://node101.yinzhengjie.org.cn:8088/proxy/application_1558520562958_0002/
  INFO mapreduce.Job: Running job: job_1558520562958_0002
  INFO mapreduce.Job: Job job_1558520562958_0002 running in uber mode : false
  ...(map/reduce progress lines elided)...
  INFO mapreduce.Job: Job job_1558520562958_0002 completed successfully
  INFO mapreduce.Job: Counters:
  ...(File System, Job, Map-Reduce Framework, Shuffle Errors, and I/O counter values elided)...
  INFO terasort.TeraSort: done
  [root@node101.yinzhengjie.org.cn /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/lib/hadoop-mapreduce]#

[root@node101.yinzhengjie.org.cn /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/lib/hadoop-mapreduce]# hadoop jar hadoop-mapreduce-examples.jar terasort /user/yinzhengjie/data/day001/test_input /user/yinzhengjie/data/day001/test_output
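
TeraSort's partitioner splits the key space across the reducers, so the number of part-r output files equals the reducer count (16 in the listing below). A different degree of parallelism can be requested per job with the standard property; this sketch simply makes the default run explicit:

  # Run the sort with an explicit reducer count (16 matches the run above).
  hadoop jar hadoop-mapreduce-examples.jar terasort \
      -Dmapreduce.job.reduces=16 \
      /user/yinzhengjie/data/day001/test_input /user/yinzhengjie/data/day001/test_output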

  [root@node102.yinzhengjie.org.cn ~]# hdfs dfs -ls /user/yinzhengjie/data/day001
  Found 2 items
  drwxr-xr-x   - root supergroup   /user/yinzhengjie/data/day001/test_input
  drwxr-xr-x   - root supergroup   /user/yinzhengjie/data/day001/test_output
  [root@node102.yinzhengjie.org.cn ~]#
  [root@node102.yinzhengjie.org.cn ~]# hdfs dfs -ls /user/yinzhengjie/data/day001/test_input
  Found 3 items
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_input/_SUCCESS
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_input/part-m-00000
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_input/part-m-00001
  [root@node102.yinzhengjie.org.cn ~]#
  [root@node102.yinzhengjie.org.cn ~]# hdfs dfs -ls /user/yinzhengjie/data/day001/test_output
  Found 18 items
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_output/_SUCCESS
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_output/_partition.lst
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_output/part-r-00000
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_output/part-r-00001
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_output/part-r-00002
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_output/part-r-00003
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_output/part-r-00004
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_output/part-r-00005
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_output/part-r-00006
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_output/part-r-00007
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_output/part-r-00008
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_output/part-r-00009
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_output/part-r-00010
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_output/part-r-00011
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_output/part-r-00012
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_output/part-r-00013
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_output/part-r-00014
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/test_output/part-r-00015
  [root@node102.yinzhengjie.org.cn ~]#

3>. Validate the output

  [root@node101.yinzhengjie.org.cn /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/lib/hadoop-mapreduce]# pwd
  /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/lib/hadoop-mapreduce
  [root@node101.yinzhengjie.org.cn /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/lib/hadoop-mapreduce]#
  [root@node101.yinzhengjie.org.cn /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/lib/hadoop-mapreduce]# hadoop jar hadoop-mapreduce-examples.jar teravalidate /user/yinzhengjie/data/day001/test_output /user/yinzhengjie/data/day001/ts_validate
  INFO input.FileInputFormat: Total input paths to process : 16
  Spent 29ms computing base-splits.
  Spent 3ms computing TeraScheduler splits.
  INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1558520562958_0003
  INFO impl.YarnClientImpl: Submitted application application_1558520562958_0003
  INFO mapreduce.Job: The url to track the job: http://node101.yinzhengjie.org.cn:8088/proxy/application_1558520562958_0003/
  INFO mapreduce.Job: Running job: job_1558520562958_0003
  INFO mapreduce.Job: Job job_1558520562958_0003 running in uber mode : false
  ...(map/reduce progress lines elided)...
  INFO mapreduce.Job: Job job_1558520562958_0003 completed successfully
  INFO mapreduce.Job: Counters:
  ...(File System, Job, Map-Reduce Framework, Shuffle Errors, and I/O counter values elided)...
  [root@node101.yinzhengjie.org.cn /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/lib/hadoop-mapreduce]#

[root@node101.yinzhengjie.org.cn /opt/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p0.4/lib/hadoop-mapreduce]# hadoop jar hadoop-mapreduce-examples.jar teravalidate /user/yinzhengjie/data/day001/test_output /user/yinzhengjie/data/day001/ts_validate

  [root@node102.yinzhengjie.org.cn ~]# hdfs dfs -ls /user/yinzhengjie/data/day001
  Found 3 items
  drwxr-xr-x   - root supergroup   /user/yinzhengjie/data/day001/test_input
  drwxr-xr-x   - root supergroup   /user/yinzhengjie/data/day001/test_output
  drwxr-xr-x   - root supergroup   /user/yinzhengjie/data/day001/ts_validate
  [root@node102.yinzhengjie.org.cn ~]#
  [root@node102.yinzhengjie.org.cn ~]# hdfs dfs -ls /user/yinzhengjie/data/day001/ts_validate
  Found 2 items
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/ts_validate/_SUCCESS
  -rw-r--r--   root supergroup   /user/yinzhengjie/data/day001/ts_validate/part-r-00000
  [root@node102.yinzhengjie.org.cn ~]#
  [root@node102.yinzhengjie.org.cn ~]# hdfs dfs -cat /user/yinzhengjie/data/day001/ts_validate/part-r-00000      # The output contains only a checksum, which means the validated data is correctly sorted.
  checksum 4c49607ac53602
  [root@node102.yinzhengjie.org.cn ~]#
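
TeraValidate reduces the whole dataset to a single checksum record when the keys are globally sorted; if it finds out-of-order keys, it writes error records into the output as well, so "nothing but a checksum line" is the success signal. A minimal scripted check under that assumption:

  # Pass if the validation output contains nothing but the checksum record.
  if hdfs dfs -cat /user/yinzhengjie/data/day001/ts_validate/part-r-00000 | grep -qv '^checksum'; then
      echo "TeraValidate found out-of-order records"
  else
      echo "TeraValidate passed: the data is sorted"
  fi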
