Spark Mini-Classroom Week 6: Startup Logs Explained

As a distributed system, a Spark program is very hard to debug with traditional methods, so logs are our main weapon. Today we walk through the startup log in detail.

Log Walkthrough

Today we will mainly step through the startup log of a Spark Streaming application.

  • Authorization and related setup
  1. Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
  2. 16/07/19 15:06:04 INFO SparkContext: Running Spark version 1.4.1
  3. 16/07/19 15:06:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  4. 16/07/19 15:06:10 INFO SecurityManager: Changing view acls to:
  5. 16/07/19 15:06:10 INFO SecurityManager: Changing modify acls to:
  6. 16/07/19 15:06:10 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(); users with modify permissions: Set()
  7. 16/07/19 15:06:11 INFO Slf4jLogger: Slf4jLogger started
  • Spark has a centralized architecture, and its metadata service is called the Driver. Here an Akka service is started, which will serve RPC calls.
  1. 16/07/19 15:06:11 INFO Remoting: Starting remoting
  2. 16/07/19 15:06:11 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@84.232.207.171:46561]
  3. 16/07/19 15:06:11 INFO Utils: Successfully started service 'sparkDriver' on port 46561.
  • SparkEnv acts like an address book, keeping track of all of Spark's own services
  • Start the data-management services; this is the initial storage allocation, covering both the disk and memory parts
  1. 16/07/19 15:06:11 INFO SparkEnv: Registering MapOutputTracker
  2. 16/07/19 15:06:11 INFO SparkEnv: Registering BlockManagerMaster
  3. 16/07/19 15:06:11 INFO DiskBlockManager: Created local directory at C:\Users\kfzx-zhongw\AppData\Local\Temp\spark-45392ea0-70f1-4562-b251-22521756a1d3\blockmgr-b6eb9263-5a04-4cc9-83b9-fcbe153808b8
  4. 16/07/19 15:06:11 INFO MemoryStore: MemoryStore started with capacity 133.6 MB
  • The file server here mainly manages basic files such as jar packages
  1. 16/07/19 15:06:11 INFO HttpFileServer: HTTP File server directory is C:\Users\kfzx-zhongw\AppData\Local\Temp\spark-45392ea0-70f1-4562-b251-22521756a1d3\httpd-69699a3d-70f7-4a99-8ae8-9527ca666cdb
  2. 16/07/19 15:06:11 INFO HttpServer: Starting HTTP Server
  3. 16/07/19 15:06:11 INFO Utils: Successfully started service 'HTTP file server' on port 46562.
  4. 16/07/19 15:06:11 INFO SparkEnv: Registering OutputCommitCoordinator
  • Start the UI service, which is backed by a Jetty server
  1. 16/07/19 15:06:11 INFO Utils: Successfully started service 'SparkUI' on port 4040.
  2. 16/07/19 15:06:11 INFO SparkUI: Started SparkUI at http://84.232.207.171:4040
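The SparkUI line corresponds to an embedded Jetty server bound to port 4040 (or the next free port if 4040 is taken). The pattern itself, an HTTP server spun up on a background thread that then reports its address, can be sketched with nothing but the Python standard library. This stand-in uses http.server instead of Jetty and an ephemeral port instead of 4040:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A placeholder page standing in for the real SparkUI status pages
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"spark ui placeholder")

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), StatusHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
print(f"Started UI stand-in at http://127.0.0.1:{server.server_port}")
```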
  • Start the data-transfer service here, which is a Netty server
  1. 16/07/19 15:06:11 INFO Executor: Starting executor ID driver on host localhost
  2. 16/07/19 15:06:12 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46581.
  3. 16/07/19 15:06:12 INFO NettyBlockTransferService: Server created on 46581
  4. 16/07/19 15:06:12 INFO BlockManagerMaster: Trying to register BlockManager
  5. 16/07/19 15:06:12 INFO BlockManagerMasterEndpoint: Registering block manager localhost:46581 with 133.6 MB RAM, BlockManagerId(driver, localhost, 46581)
  6. 16/07/19 15:06:12 INFO BlockManagerMaster: Registered BlockManager
  • At this point the Spark core has finished starting up; what follows is the Spark Streaming startup.

  • Initialization of the business-logic objects. Initialization is invoked from back to front (starting at the output DStream), but completes from front to back.

  1. 16/07/19 15:06:12 INFO ReceiverTracker: ReceiverTracker started
  2. 16/07/19 15:06:12 INFO ForEachDStream: metadataCleanupDelay = -1
  3. 16/07/19 15:06:12 INFO ShuffledDStream: metadataCleanupDelay = -1
  4. 16/07/19 15:06:12 INFO MappedDStream: metadataCleanupDelay = -1
  5. 16/07/19 15:06:12 INFO FlatMappedDStream: metadataCleanupDelay = -1
  6. 16/07/19 15:06:12 INFO SocketInputDStream: metadataCleanupDelay = -1
  7. 16/07/19 15:06:12 INFO SocketInputDStream: Slide time = 10000 ms
  8. 16/07/19 15:06:12 INFO SocketInputDStream: Storage level = StorageLevel(false, false, false, false, 1)
  9. 16/07/19 15:06:12 INFO SocketInputDStream: Checkpoint interval = null
  10. 16/07/19 15:06:12 INFO SocketInputDStream: Remember duration = 10000 ms
  11. 16/07/19 15:06:12 INFO ReceiverTracker: Starting 1 receivers
  12. 16/07/19 15:06:12 INFO SocketInputDStream: Initialized and validated org.apache.spark.streaming.dstream.SocketInputDStream@819867
  13. 16/07/19 15:06:12 INFO FlatMappedDStream: Slide time = 10000 ms
  14. 16/07/19 15:06:12 INFO FlatMappedDStream: Storage level = StorageLevel(false, false, false, false, 1)
  15. 16/07/19 15:06:12 INFO FlatMappedDStream: Checkpoint interval = null
  16. 16/07/19 15:06:12 INFO FlatMappedDStream: Remember duration = 10000 ms
  17. 16/07/19 15:06:12 INFO FlatMappedDStream: Initialized and validated org.apache.spark.streaming.dstream.FlatMappedDStream@18102ee
  18. 16/07/19 15:06:12 INFO MappedDStream: Slide time = 10000 ms
  19. 16/07/19 15:06:12 INFO MappedDStream: Storage level = StorageLevel(false, false, false, false, 1)
  20. 16/07/19 15:06:12 INFO MappedDStream: Checkpoint interval = null
  21. 16/07/19 15:06:12 INFO MappedDStream: Remember duration = 10000 ms
  22. 16/07/19 15:06:12 INFO MappedDStream: Initialized and validated org.apache.spark.streaming.dstream.MappedDStream@cd9a2c
  23. 16/07/19 15:06:12 INFO ShuffledDStream: Slide time = 10000 ms
  24. 16/07/19 15:06:12 INFO ShuffledDStream: Storage level = StorageLevel(false, false, false, false, 1)
  25. 16/07/19 15:06:12 INFO ShuffledDStream: Checkpoint interval = null
  26. 16/07/19 15:06:12 INFO ShuffledDStream: Remember duration = 10000 ms
  27. 16/07/19 15:06:12 INFO ShuffledDStream: Initialized and validated org.apache.spark.streaming.dstream.ShuffledDStream@11d1ffb
  28. 16/07/19 15:06:12 INFO ForEachDStream: Slide time = 10000 ms
  29. 16/07/19 15:06:12 INFO ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1)
  30. 16/07/19 15:06:12 INFO ForEachDStream: Checkpoint interval = null
  31. 16/07/19 15:06:12 INFO ForEachDStream: Remember duration = 10000 ms
  32. 16/07/19 15:06:12 INFO ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@167e3df
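This is why the log prints the DStreams twice in opposite orders: initialize() is called on the output DStream first and recurses into its parent before finishing, so the metadataCleanupDelay messages appear back-to-front while the "Initialized and validated" messages appear front-to-back as the recursion unwinds. A minimal sketch of that traversal (the class is a toy stand-in; only the names mirror the log):

```python
class DStream:
    """Toy stand-in for a DStream that only records traversal order."""

    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent

    def initialize(self, entered, validated):
        entered.append(self.name)        # logged on the way in (back-to-front)
        if self.parent:
            self.parent.initialize(entered, validated)
        validated.append(self.name)      # logged on the way out (front-to-back)

# Output DStream on the outside, input DStream innermost, as in the log
chain = DStream("ForEachDStream",
        DStream("ShuffledDStream",
        DStream("MappedDStream",
        DStream("FlatMappedDStream",
        DStream("SocketInputDStream")))))

entered, validated = [], []
chain.initialize(entered, validated)
print(entered)    # ForEach, Shuffled, Mapped, FlatMapped, SocketInput
print(validated)  # SocketInput, FlatMapped, Mapped, Shuffled, ForEach
```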
  • The most important components in Streaming; these four components (StreamingContext, JobScheduler, JobGenerator, RecurringTimer) are nested one inside another from the outside in, and start up innermost first.
  1. 16/07/19 15:06:12 INFO RecurringTimer: Started timer for JobGenerator at time 1468911980000
  2. 16/07/19 15:06:12 INFO JobGenerator: Started JobGenerator at 1468911980000 ms
  3. 16/07/19 15:06:12 INFO JobScheduler: Started JobScheduler
  4. 16/07/19 15:06:12 INFO StreamingContext: StreamingContext started
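Note that the JobGenerator's start time, 1468911980000, is a round multiple of the 10000 ms batch interval. The RecurringTimer aligns the first batch to the next multiple of the interval, so batch timestamps always land on round interval boundaries. A sketch of that alignment (the formula is a simplification of RecurringTimer's behavior, and the "now" value is an assumed wall-clock time consistent with the timestamps above):

```python
def next_batch_time(now_ms: int, period_ms: int) -> int:
    """Align to the next multiple of the batch interval."""
    return (now_ms // period_ms + 1) * period_ms

now = 1468911972500   # assumed wall clock around 15:06:12, in epoch ms
print(next_batch_time(now, 10_000))   # -> 1468911980000, as in the log
```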
  • Start Job 0, which is Streaming's socket data-receiving service
  1. 16/07/19 15:06:12 INFO SparkContext: Starting job: start at JavaOnlineWordCount.java:32
  2. 16/07/19 15:06:12 INFO DAGScheduler: Got job 0 (start at JavaOnlineWordCount.java:32) with 1 output partitions (allowLocal=false)
  3. 16/07/19 15:06:12 INFO DAGScheduler: Final stage: ResultStage 0(start at JavaOnlineWordCount.java:32)
  4. 16/07/19 15:06:12 INFO DAGScheduler: Parents of final stage: List()
  5. 16/07/19 15:06:12 INFO DAGScheduler: Missing parents: List()
  6. 16/07/19 15:06:12 INFO DAGScheduler: Submitting ResultStage 0 (ParallelCollectionRDD[0] at start at JavaOnlineWordCount.java:32), which has no missing parents
  7. 16/07/19 15:06:13 WARN SizeEstimator: Failed to check whether UseCompressedOops is set; assuming yes
  8. 16/07/19 15:06:13 INFO MemoryStore: ensureFreeSpace(46352) called with curMem=0, maxMem=140142182
  9. 16/07/19 15:06:13 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 45.3 KB, free 133.6 MB)
  10. 16/07/19 15:06:13 INFO MemoryStore: ensureFreeSpace(15070) called with curMem=46352, maxMem=140142182
  11. 16/07/19 15:06:13 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 14.7 KB, free 133.6 MB)
  12. 16/07/19 15:06:13 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:46581 (size: 14.7 KB, free: 133.6 MB)
  13. 16/07/19 15:06:13 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:874
  14. 16/07/19 15:06:13 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (ParallelCollectionRDD[0] at start at JavaOnlineWordCount.java:32)
  15. 16/07/19 15:06:13 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
  16. 16/07/19 15:06:13 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 2000 bytes)
  17. 16/07/19 15:06:13 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
  18. 16/07/19 15:06:13 INFO RecurringTimer: Started timer for BlockGenerator at time 1468911973400
  19. 16/07/19 15:06:13 INFO BlockGenerator: Started block pushing thread
  20. 16/07/19 15:06:13 INFO BlockGenerator: Started BlockGenerator
  21. 16/07/19 15:06:13 INFO ReceiverSupervisorImpl: Starting receiver
  22. 16/07/19 15:06:13 INFO ReceiverSupervisorImpl: Called receiver onStart
  23. 16/07/19 15:06:13 INFO SocketReceiver: Connecting to localhost:9999
  24. 16/07/19 15:06:13 INFO SocketReceiver: Connected to localhost:9999
  25. 16/07/19 15:06:13 INFO ReceiverTracker: Registered receiver for stream 0 from 84.232.207.171:46561
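What the SocketReceiver does at this point is plain TCP: connect to localhost:9999 and read newline-delimited text. A self-contained sketch in Python (the server thread stands in for the external data source, e.g. `nc -lk 9999`, and the input line "123 1231" is an assumption consistent with the batch output shown later):

```python
import socket
import threading

# Stand-in for the external data source normally listening on port 9999
server = socket.socket()
server.bind(("127.0.0.1", 0))   # ephemeral port instead of 9999
server.listen(1)

def feed():
    conn, _ = server.accept()
    conn.sendall(b"123 1231\n")  # assumed input line for this batch
    conn.close()

threading.Thread(target=feed, daemon=True).start()

# This is essentially what SocketReceiver does: connect and read lines
client = socket.create_connection(server.getsockname())
line = client.makefile().readline().strip()
client.close()
server.close()
print(line)
```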
  • Receive external input data; this calls into the MemoryStore
  1. 16/07/19 15:06:19 INFO MemoryStore: ensureFreeSpace(15) called with curMem=61422, maxMem=140142182
  2. 16/07/19 15:06:19 INFO MemoryStore: Block input-0-1468911978800 stored as bytes in memory (estimated size 15.0 B, free 133.6 MB)
  3. 16/07/19 15:06:19 INFO BlockManagerInfo: Added input-0-1468911978800 in memory on localhost:46581 (size: 15.0 B, free: 133.6 MB)
  4. 16/07/19 15:06:19 WARN BlockManager: Block input-0-1468911978800 replicated to only 0 peer(s) instead of 1 peers
  5. 16/07/19 15:06:19 INFO BlockGenerator: Pushed block input-0-1468911978800
  • Run the business-logic computation; this is the core part!
  1. 16/07/19 15:06:20 INFO JobScheduler: Added jobs for time 1468911980000 ms
  2. 16/07/19 15:06:20 INFO JobScheduler: Starting job streaming job 1468911980000 ms.0 from job set of time 1468911980000 ms
  3. 16/07/19 15:06:20 INFO SparkContext: Starting job: print at JavaOnlineWordCount.java:30
  4. 16/07/19 15:06:20 INFO DAGScheduler: Registering RDD 3 (mapToPair at JavaOnlineWordCount.java:28)
  5. 16/07/19 15:06:20 INFO DAGScheduler: Got job 1 (print at JavaOnlineWordCount.java:30) with 1 output partitions (allowLocal=true)
  6. 16/07/19 15:06:20 INFO DAGScheduler: Final stage: ResultStage 2(print at JavaOnlineWordCount.java:30)
  7. 16/07/19 15:06:20 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1)
  8. 16/07/19 15:06:20 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 1)
  9. 16/07/19 15:06:20 INFO DAGScheduler: Submitting ShuffleMapStage 1 (MapPartitionsRDD[3] at mapToPair at JavaOnlineWordCount.java:28), which has no missing parents
  10. 16/07/19 15:06:20 INFO MemoryStore: ensureFreeSpace(4016) called with curMem=61437, maxMem=140142182
  11. 16/07/19 15:06:20 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.9 KB, free 133.6 MB)
  12. 16/07/19 15:06:20 INFO MemoryStore: ensureFreeSpace(2297) called with curMem=65453, maxMem=140142182
  13. 16/07/19 15:06:20 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.2 KB, free 133.6 MB)
  14. 16/07/19 15:06:20 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:46581 (size: 2.2 KB, free: 133.6 MB)
  15. 16/07/19 15:06:20 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:874
  16. 16/07/19 15:06:20 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 1 (MapPartitionsRDD[3] at mapToPair at JavaOnlineWordCount.java:28)
  17. 16/07/19 15:06:20 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
  18. 16/07/19 15:06:20 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, NODE_LOCAL, 1277 bytes)
  19. 16/07/19 15:06:20 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
  20. 16/07/19 15:06:20 INFO BlockManager: Found block input-0-1468911978800 locally
  21. 16/07/19 15:06:20 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 894 bytes result sent to driver
  22. 16/07/19 15:06:20 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 80 ms on localhost (1/1)
  23. 16/07/19 15:06:20 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
  24. 16/07/19 15:06:20 INFO DAGScheduler: ShuffleMapStage 1 (mapToPair at JavaOnlineWordCount.java:28) finished in 0.084 s
  25. 16/07/19 15:06:20 INFO DAGScheduler: looking for newly runnable stages
  26. 16/07/19 15:06:20 INFO DAGScheduler: running: Set(ResultStage 0)
  27. 16/07/19 15:06:20 INFO DAGScheduler: waiting: Set(ResultStage 2)
  28. 16/07/19 15:06:20 INFO DAGScheduler: failed: Set()
  29. 16/07/19 15:06:20 INFO DAGScheduler: Missing parents for ResultStage 2: List()
  30. 16/07/19 15:06:20 INFO DAGScheduler: Submitting ResultStage 2 (ShuffledRDD[4] at reduceByKey at JavaOnlineWordCount.java:29), which is now runnable
  31. 16/07/19 15:06:20 INFO MemoryStore: ensureFreeSpace(3056) called with curMem=67750, maxMem=140142182
  32. 16/07/19 15:06:20 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 3.0 KB, free 133.6 MB)
  33. 16/07/19 15:06:20 INFO MemoryStore: ensureFreeSpace(1825) called with curMem=70806, maxMem=140142182
  34. 16/07/19 15:06:20 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 1825.0 B, free 133.6 MB)
  35. 16/07/19 15:06:20 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:46581 (size: 1825.0 B, free: 133.6 MB)
  36. 16/07/19 15:06:20 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:874
  37. 16/07/19 15:06:20 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (ShuffledRDD[4] at reduceByKey at JavaOnlineWordCount.java:29)
  38. 16/07/19 15:06:20 INFO TaskSchedulerImpl: Adding task set 2.0 with 1 tasks
  39. 16/07/19 15:06:20 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 2, localhost, PROCESS_LOCAL, 1165 bytes)
  40. 16/07/19 15:06:20 INFO Executor: Running task 0.0 in stage 2.0 (TID 2)
  41. 16/07/19 15:06:20 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 1 blocks
  42. 16/07/19 15:06:20 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 5 ms
  43. 16/07/19 15:06:20 INFO Executor: Finished task 0.0 in stage 2.0 (TID 2). 882 bytes result sent to driver
  44. 16/07/19 15:06:20 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 2) in 33 ms on localhost (1/1)
  45. 16/07/19 15:06:20 INFO DAGScheduler: ResultStage 2 (print at JavaOnlineWordCount.java:30) finished in 0.033 s
  46. 16/07/19 15:06:20 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
  47. 16/07/19 15:06:20 INFO DAGScheduler: Job 1 finished: print at JavaOnlineWordCount.java:30, took 0.184378 s
  48. ...
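The two stages in this job are the classic word-count split: ShuffleMapStage 1 performs flatMap and mapToPair, and ResultStage 2 performs reduceByKey and print. The same logic in plain Python (the input line is reconstructed from the batch output that follows; the real app is JavaOnlineWordCount.java):

```python
from collections import defaultdict

batch = ["123 1231"]   # lines received in this 10 s batch (reconstructed)

# ShuffleMapStage 1: flatMap (split into words) + mapToPair (word -> (word, 1))
pairs = [(word, 1) for line in batch for word in line.split()]

# ResultStage 2: reduceByKey (sum the counts for each word)
counts = defaultdict(int)
for word, n in pairs:
    counts[word] += n

for item in sorted(counts.items()):
    print(item)   # mirrors the (123,1) / (1231,1) output below
```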
  • Output of the results:
  1. -------------------------------------------
  2. Time: 1468911980000 ms
  3. -------------------------------------------
  4. (123,1)
  5. (1231,1)
  • Run the cleanup operations
  1. 16/07/19 15:06:20 INFO JobScheduler: Finished job streaming job 1468911980000 ms.0 from job set of time 1468911980000 ms
  2. 16/07/19 15:06:20 INFO JobScheduler: Total delay: 0.440 s for time 1468911980000 ms (execution: 0.359 s)
  3. 16/07/19 15:06:20 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer()
  4. 16/07/19 15:06:20 INFO InputInfoTracker: remove old batch metadata:

About

The Mini-Classroom is a series of topics shared internally at our company. It leans toward the basics and is somewhat scattered, but it is continuously updated.
