live555 supports both unicast and multicast. We first analyze the unicast streaming server, then the multicast one.

I. The unicast streaming server:

    // Create the RTSP server:
    RTSPServer* rtspServer = NULL;
    // Normal case: Streaming from a built-in RTSP server:
    rtspServer = RTSPServer::createNew(*env, rtspServerPortNum, NULL);
    if (rtspServer == NULL) {
      *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
      exit(1);
    }

    *env << "...done initializing \n";

    if( streamingMode == STREAMING_UNICAST )
    {
      ServerMediaSession* sms = ServerMediaSession::createNew(*env,
                                  H264StreamName[video_type],
                                  H264StreamName[video_type],
                                  streamDescription,
                                  streamingMode == STREAMING_MULTICAST_SSM);
      sms->addSubsession(WISH264VideoServerMediaSubsession::createNew(sms->envir(), *H264InputDevice[video_type], H264VideoBitrate));
      sms->addSubsession(WISPCMAudioServerMediaSubsession::createNew(sms->envir(), *H264InputDevice[video_type]));

      rtspServer->addServerMediaSession(sms);

      char *url = rtspServer->rtspURL(sms);
      *env << "Play this stream using the URL:\t" << url << "\n";
      delete[] url;
    }
          

      // Begin the LIVE555 event loop:
      env->taskScheduler().doEventLoop(&watchVariable); // does not return

Let's walk through it step by step:

1>  rtspServer = RTSPServer::createNew(*env, rtspServerPortNum, NULL);

    RTSPServer*
    RTSPServer::createNew(UsageEnvironment& env, Port ourPort,
                          UserAuthenticationDatabase* authDatabase,
                          unsigned reclamationTestSeconds)
    {
      int ourSocket = -1;

      do {
        int ourSocket = setUpOurSocket(env, ourPort);
        if (ourSocket == -1) break;

        return new RTSPServer(env, ourSocket, ourPort, authDatabase, reclamationTestSeconds);
      } while (0);

      if (ourSocket != -1) ::closeSocket(ourSocket);

      return NULL;
    }

  This function first creates a socket for the RTSP protocol, listening on port rtspServerPortNum, and then constructs an RTSPServer instance. Next, the RTSPServer constructor:

    RTSPServer::RTSPServer(UsageEnvironment& env,
                           int ourSocket, Port ourPort,
                           UserAuthenticationDatabase* authDatabase,
                           unsigned reclamationTestSeconds)
      : Medium(env),
        fServerSocket(ourSocket), fServerPort(ourPort),
        fAuthDB(authDatabase), fReclamationTestSeconds(reclamationTestSeconds),
        fServerMediaSessions(HashTable::create(STRING_HASH_KEYS)),
        fSessionIdCounter(0)
    {
    #ifdef USE_SIGNALS
      // Ignore the SIGPIPE signal, so that clients on the same host that are killed
      // don't also kill us:
      signal(SIGPIPE, SIG_IGN);
    #endif

      // Arrange to handle connections from others:
      env.taskScheduler().turnOnBackgroundReadHandling(fServerSocket, (TaskScheduler::BackgroundHandlerProc*)&incomingConnectionHandler, this);
    }

  The RTSPServer constructor initializes fServerMediaSessions with a newly created hash table, fServerSocket with the TCP socket we created earlier, and fServerPort with the listening port rtspServerPortNum. It also registers the task function incomingConnectionHandler for fServerSocket with the taskScheduler. This task function waits to accept() new client connections; whenever a new client connects, it creates an RTSPClientSession instance.

  What functionality must RTSPClientSession provide? As you might expect: it must listen for the client's RTSP requests and respond to them, return the requested stream's description for a DESCRIBE request, set up an RTP session for a SETUP request, tear down the RTP session for a TEARDOWN request, and so on.

    RTSPServer::RTSPClientSession::RTSPClientSession(RTSPServer& ourServer, unsigned sessionId, int clientSocket, struct sockaddr_in clientAddr)
      : fOurServer(ourServer), fOurSessionId(sessionId),
        fOurServerMediaSession(NULL),
        fClientSocket(clientSocket), fClientAddr(clientAddr),
        fLivenessCheckTask(NULL),
        fIsMulticast(False), fSessionIsActive(True), fStreamAfterSETUP(False),
        fTCPStreamIdCount(0), fNumStreamStates(0), fStreamStates(NULL)
    {
      // Arrange to handle incoming requests:
      resetRequestBuffer();
      envir().taskScheduler().turnOnBackgroundReadHandling(fClientSocket, (TaskScheduler::BackgroundHandlerProc*)&incomingRequestHandler, this);
      noteLiveness();
    }

  The function above is the RTSPClientSession constructor. It initializes sessionId with ++fSessionIdCounter, fClientSocket with the socket returned by accept() (clientSocket), and fClientAddr with the client address received from accept(). It also registers the task function incomingRequestHandler for fClientSocket with the taskScheduler.

  incomingRequestHandler calls incomingRequestHandler1, which is defined as follows:

    void RTSPServer::RTSPClientSession::incomingRequestHandler1()
    {
      noteLiveness();

      struct sockaddr_in dummy; // 'from' address, meaningless in this case
      Boolean endOfMsg = False;
      unsigned char* ptr = &fRequestBuffer[fRequestBytesAlreadySeen];

      int bytesRead = readSocket(envir(), fClientSocket, ptr, fRequestBufferBytesLeft, dummy);
      if (bytesRead <= 0 || (unsigned)bytesRead >= fRequestBufferBytesLeft) {
        // Either the client socket has died, or the request was too big for us.
        // Terminate this connection:
    #ifdef DEBUG
        fprintf(stderr, "RTSPClientSession[%p]::incomingRequestHandler1() read %d bytes (of %d); terminating connection!\n", this, bytesRead, fRequestBufferBytesLeft);
    #endif
        delete this;
        return;
      }
    #ifdef DEBUG
      ptr[bytesRead] = '\0';
      fprintf(stderr, "RTSPClientSession[%p]::incomingRequestHandler1() read %d bytes:%s\n", this, bytesRead, ptr);
    #endif

      // Look for the end of the message: <CR><LF><CR><LF>
      unsigned char *tmpPtr = ptr;
      if (fRequestBytesAlreadySeen > 0) --tmpPtr;
      // in case the last read ended with a <CR>
      while (tmpPtr < &ptr[bytesRead-1]) {
        if (*tmpPtr == '\r' && *(tmpPtr+1) == '\n') {
          if (tmpPtr - fLastCRLF == 2) { // This is it:
            endOfMsg = True;
            break;
          }
          fLastCRLF = tmpPtr;
        }
        ++tmpPtr;
      }

      fRequestBufferBytesLeft -= bytesRead;
      fRequestBytesAlreadySeen += bytesRead;

      if (!endOfMsg) return; // subsequent reads will be needed to complete the request

      // Parse the request string into command name and 'CSeq',
      // then handle the command:
      fRequestBuffer[fRequestBytesAlreadySeen] = '\0';
      char cmdName[RTSP_PARAM_STRING_MAX];
      char urlPreSuffix[RTSP_PARAM_STRING_MAX];
      char urlSuffix[RTSP_PARAM_STRING_MAX];
      char cseq[RTSP_PARAM_STRING_MAX];
      if (!parseRTSPRequestString((char*)fRequestBuffer, fRequestBytesAlreadySeen,
                                  cmdName, sizeof cmdName,
                                  urlPreSuffix, sizeof urlPreSuffix,
                                  urlSuffix, sizeof urlSuffix,
                                  cseq, sizeof cseq))
      {
    #ifdef DEBUG
        fprintf(stderr, "parseRTSPRequestString() failed!\n");
    #endif
        handleCmd_bad(cseq);
      } else {
    #ifdef DEBUG
        fprintf(stderr, "parseRTSPRequestString() returned cmdName \"%s\", urlPreSuffix \"%s\", urlSuffix \"%s\"\n", cmdName, urlPreSuffix, urlSuffix);
    #endif
        if (strcmp(cmdName, "OPTIONS") == 0) {
          handleCmd_OPTIONS(cseq);
        } else if (strcmp(cmdName, "DESCRIBE") == 0) {
          printf("incomingRequestHandler1 ~~~~~~~~~~~~~~\n");
          handleCmd_DESCRIBE(cseq, urlSuffix, (char const*)fRequestBuffer);
        } else if (strcmp(cmdName, "SETUP") == 0) {
          handleCmd_SETUP(cseq, urlPreSuffix, urlSuffix, (char const*)fRequestBuffer);
        } else if (strcmp(cmdName, "TEARDOWN") == 0
                   || strcmp(cmdName, "PLAY") == 0
                   || strcmp(cmdName, "PAUSE") == 0
                   || strcmp(cmdName, "GET_PARAMETER") == 0) {
          handleCmd_withinSession(cmdName, urlPreSuffix, urlSuffix, cseq, (char const*)fRequestBuffer);
        } else {
          handleCmd_notSupported(cseq);
        }
      }

    #ifdef DEBUG
      fprintf(stderr, "sending response: %s", fResponseBuffer);
    #endif
      send(fClientSocket, (char const*)fResponseBuffer, strlen((char*)fResponseBuffer), 0);

      if (strcmp(cmdName, "SETUP") == 0 && fStreamAfterSETUP) {
        // The client has asked for streaming to commence now, rather than after a
        // subsequent "PLAY" command. So, simulate the effect of a "PLAY" command:
        handleCmd_withinSession("PLAY", urlPreSuffix, urlSuffix, cseq, (char const*)fRequestBuffer);
      }

      resetRequestBuffer(); // to prepare for any subsequent request
      if (!fSessionIsActive) delete this;
    }

  In this function we can see how each RTSP command is received, handled, and answered.

2> ServerMediaSession* sms = ServerMediaSession::createNew(... ...)

  This creates a ServerMediaSession instance, initializing fStreamName to "h264_ch1", fInfoSDPString to "h264_ch1", fDescriptionSDPString to "RTSP/RTP stream from NETRA", fMiscSDPLines to NULL, fCreationTime to the current time, and fIsSSM to false.

3> sms->addSubsession(WISH264VideoServerMediaSubsession::createNew(... ...);

  WISH264VideoServerMediaSubsession::createNew(): the main purpose of this function is to create an instance of (a subclass of) OnDemandServerMediaSubsession, which was analyzed earlier and is required for unicast. It initializes fWISInput to *H264InputDevice[video_type].

  sms->addSubsession() adds the WISH264VideoServerMediaSubsession instance as the head node of the fSubsessionsTail list.

4> sms->addSubsession(WISPCMAudioServerMediaSubsession::createNew(... ...);

  WISPCMAudioServerMediaSubsession::createNew(): likewise, the main purpose of this function is to create an instance of (a subclass of) OnDemandServerMediaSubsession, which was analyzed earlier and is required for unicast. It initializes fWISInput to *H264InputDevice[video_type].

  sms->addSubsession() adds the WISPCMAudioServerMediaSubsession instance to fSubsessionsTail->fNext.

5> rtspServer->addServerMediaSession(sms)

  This adds the ServerMediaSession to the rtspServer's fServerMediaSessions hash table.

6> env->taskScheduler().doEventLoop(&watchVariable); 

  doEventLoop was analyzed earlier; it mainly dispatches socket tasks and delayed tasks.

II. The multicast streaming server:

    // Create the RTSP server:
    RTSPServer* rtspServer = NULL;
    // Normal case: Streaming from a built-in RTSP server:
    rtspServer = RTSPServer::createNew(*env, rtspServerPortNum, NULL);
    if (rtspServer == NULL) {
      *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
      exit(1);
    }

    *env << "...done initializing \n";

    if( streamingMode == STREAMING_UNICAST )
    {
      ... ...
    }
    else
    {
      if (streamingMode == STREAMING_MULTICAST_SSM)
      {
        if (multicastAddress == 0)
          multicastAddress = chooseRandomIPv4SSMAddress(*env);
      } else if (multicastAddress != 0) {
        streamingMode = STREAMING_MULTICAST_ASM;
      }

      struct in_addr dest;
      dest.s_addr = multicastAddress;

      const unsigned char ttl = 255;

      // For RTCP:
      const unsigned maxCNAMElen = 100;
      unsigned char CNAME[maxCNAMElen + 1];
      gethostname((char *) CNAME, maxCNAMElen);
      CNAME[maxCNAMElen] = '\0'; // just in case

      ServerMediaSession* sms;
      sms = ServerMediaSession::createNew(*env, H264StreamName[video_type], H264StreamName[video_type], streamDescription, streamingMode == STREAMING_MULTICAST_SSM);

      /* VIDEO Channel initial */
      if( ... ) // video-enable condition elided in the original listing
      {
        // Create 'groupsocks' for RTP and RTCP:
        const Port rtpPortVideo(videoRTPPortNum);
        const Port rtcpPortVideo(videoRTPPortNum+1);

        rtpGroupsockVideo = new Groupsock(*env, dest, rtpPortVideo, ttl);
        rtcpGroupsockVideo = new Groupsock(*env, dest, rtcpPortVideo, ttl);

        if (streamingMode == STREAMING_MULTICAST_SSM) {
          rtpGroupsockVideo->multicastSendOnly();
          rtcpGroupsockVideo->multicastSendOnly();
        }

        setVideoRTPSinkBufferSize();
        sinkVideo = H264VideoRTPSink::createNew(*env, rtpGroupsockVideo, 96, 0x42, "h264");

        // Create (and start) a 'RTCP instance' for this RTP sink:
        unsigned totalSessionBandwidthVideo = (Mpeg4VideoBitrate+500)/1000; // in kbps; for RTCP b/w share
        rtcpVideo = RTCPInstance::createNew(*env, rtcpGroupsockVideo,
                                            totalSessionBandwidthVideo, CNAME,
                                            sinkVideo, NULL /* we're a server */,
                                            streamingMode == STREAMING_MULTICAST_SSM);

        // Note: This starts RTCP running automatically
        sms->addSubsession(PassiveServerMediaSubsession::createNew(*sinkVideo, rtcpVideo));

        sourceVideo = H264VideoStreamFramer::createNew(*env, H264InputDevice[video_type]->videoSource());

        // Start streaming:
        sinkVideo->startPlaying(*sourceVideo, NULL, NULL);
      }

      /* AUDIO Channel initial */
      if( ... ) // audio-enable condition elided in the original listing
      {
        // there's a separate RTP stream for audio
        // Create 'groupsocks' for RTP and RTCP:
        const Port rtpPortAudio(audioRTPPortNum);
        const Port rtcpPortAudio(audioRTPPortNum+1);

        rtpGroupsockAudio = new Groupsock(*env, dest, rtpPortAudio, ttl);
        rtcpGroupsockAudio = new Groupsock(*env, dest, rtcpPortAudio, ttl);

        if (streamingMode == STREAMING_MULTICAST_SSM)
        {
          rtpGroupsockAudio->multicastSendOnly();
          rtcpGroupsockAudio->multicastSendOnly();
        }

        if( audioSamplingFrequency == 8000 )
          sinkAudio = SimpleRTPSink::createNew(*env, rtpGroupsockAudio, 0, audioSamplingFrequency, "audio", "PCMU", 1);
        else
          sinkAudio = SimpleRTPSink::createNew(*env, rtpGroupsockAudio, 96, audioSamplingFrequency, "audio", "PCMU", 1);

        // Create (and start) a 'RTCP instance' for this RTP sink:
        unsigned totalSessionBandwidthAudio = (audioOutputBitrate+500)/1000; // in kbps; for RTCP b/w share
        rtcpAudio = RTCPInstance::createNew(*env, rtcpGroupsockAudio,
                                            totalSessionBandwidthAudio, CNAME,
                                            sinkAudio, NULL /* we're a server */,
                                            streamingMode == STREAMING_MULTICAST_SSM);

        // Note: This starts RTCP running automatically
        sms->addSubsession(PassiveServerMediaSubsession::createNew(*sinkAudio, rtcpAudio));

        sourceAudio = H264InputDevice[video_type]->audioSource();

        // Start streaming:
        sinkAudio->startPlaying(*sourceAudio, NULL, NULL);
      }

      rtspServer->addServerMediaSession(sms);

      {
        struct in_addr dest; dest.s_addr = multicastAddress;
        char *url = rtspServer->rtspURL(sms);
        //char *url2 = inet_ntoa(dest);
        *env << "Mulicast Play this stream using the URL:\n\t" << url << "\n";
        //*env << "2 Mulicast addr:\n\t" << url2 << "\n";
        delete[] url;
      }
    }

    // Begin the LIVE555 event loop:
    env->taskScheduler().doEventLoop(&watchVariable); // does not return

1> rtspServer = RTSPServer::createNew(*env, rtspServerPortNum, NULL);

  Same as in the unicast analysis above.

2> sms = ServerMediaSession::createNew(... ...)

  Same as in the unicast analysis above.

3> Video

  1. Create Groupsock instances for the video RTP and RTCP channels; these provide the UDP sockets used for RTP and RTCP. (This is a good place to read up on ASM vs. SSM.)

  2. Create the RTPSink instance that packetizes video data into RTP and sends it.

  3. Create the RTCPInstance that packetizes and sends RTCP.

  4. Create a PassiveServerMediaSubsession instance and add it as the head node of the fSubsessionsTail list.

  5. Create the FramedSource instance that fetches video data one frame at a time.

  6. Start sending RTP and RTCP data to the multicast address.

4> Audio

  1. Create Groupsock instances for the audio RTP and RTCP channels; these provide the UDP sockets used for RTP and RTCP. (Again, see ASM vs. SSM.)

  2. Create the RTPSink instance that packetizes audio data into RTP and sends it.

  3. Create the RTCPInstance that packetizes and sends RTCP.

  4. Create a PassiveServerMediaSubsession instance and add it as the next node of the fSubsessionsTail list.

  5. Create the FramedSource instance that fetches audio data one frame at a time.

  6. Start sending RTP and RTCP data to the multicast address.

5> rtspServer->addServerMediaSession(sms)

  Same as in the unicast analysis above.

6> env->taskScheduler().doEventLoop(&watchVariable)

  Same as in the unicast analysis above.

III. Differences between unicast and multicast

1> Socket creation: the multicast server creates its sockets right at startup, while the unicast server creates the corresponding sockets only upon receiving a "SETUP" command.

2> startPlaying: the multicast server starts sending data to the multicast address immediately, while the unicast server calls startPlaying only upon receiving a "PLAY" command.

IV. Analysis of startPlaying

  Let's analyze multicast first:

  sinkVideo->startPlaying() is implemented neither in H264VideoRTPSink nor in RTPSink, but in MediaSink:

    Boolean MediaSink::startPlaying(MediaSource& source,
                                    afterPlayingFunc* afterFunc,
                                    void* afterClientData)
    {
      // Make sure we're not already being played:
      if (fSource != NULL) {
        envir().setResultMsg("This sink is already being played");
        return False;
      }

      // Make sure our source is compatible:
      if (!sourceIsCompatibleWithUs(source)) {
        envir().setResultMsg("MediaSink::startPlaying(): source is not compatible!");
        return False;
      }
      fSource = (FramedSource*)&source;

      fAfterFunc = afterFunc;
      fAfterClientData = afterClientData;

      return continuePlaying();
    }

  Here we see a call to continuePlaying(). Where is that implemented? Since sinkVideo was created via H264VideoRTPSink::createNew(), which returns an H264VideoRTPSink instance, the continuePlaying() that runs is the one in H264VideoRTPSink.

    Boolean H264VideoRTPSink::continuePlaying()
    {
      // First, check whether we have a 'fragmenter' class set up yet.
      // If not, create it now:
      if (fOurFragmenter == NULL) {
        fOurFragmenter = new H264FUAFragmenter(envir(), fSource, OutPacketBuffer::maxSize, ourMaxPacketSize() - 12/*RTP hdr size*/);
        fSource = fOurFragmenter;
      }

      //printf("function=%s line=%d\n",__func__,__LINE__);
      // Then call the parent class's implementation:
      return MultiFramedRTPSink::continuePlaying();
    }

  This in turn calls the parent class MultiFramedRTPSink's continuePlaying():

    Boolean MultiFramedRTPSink::continuePlaying()
    {
      // Send the first packet.
      // (This will also schedule any future sends.)
      buildAndSendPacket(True);
      return True;
    }

  This leads us to buildAndSendPacket():

    void MultiFramedRTPSink::buildAndSendPacket(Boolean isFirstPacket)
    {
      // This function mainly prepares the RTP header, leaving holes for the
      // fields that depend on the actual payload data.
      fIsFirstPacket = isFirstPacket;

      // Set up the RTP header:
      unsigned rtpHdr = 0x80000000; // RTP version 2; marker ('M') bit not set (by default; it can be set later)
      rtpHdr |= (fRTPPayloadType << 16);
      rtpHdr |= fSeqNo; // sequence number
      fOutBuf->enqueueWord(rtpHdr); // enqueue one 32-bit word into the packet

      // Note where the RTP timestamp will go.
      // (We can't fill this in until we start packing payload frames.)
      fTimestampPosition = fOutBuf->curPacketSize();
      fOutBuf->skipBytes(4); // leave a hole in the buffer for the timestamp

      fOutBuf->enqueueWord(SSRC());

      // Allow for a special, payload-format-specific header following the
      // RTP header:
      fSpecialHeaderPosition = fOutBuf->curPacketSize();
      fSpecialHeaderSize = specialHeaderSize();
      fOutBuf->skipBytes(fSpecialHeaderSize);

      // Begin packing as many (complete) frames into the packet as we can:
      fTotalFrameSpecificHeaderSizes = 0;
      fNoFramesLeft = False;
      fNumFramesUsedSoFar = 0; // number of frames already packed into this packet
      // The header is ready; now pack in the frame data:
      packFrame();
    }

  Next, packFrame():

    void MultiFramedRTPSink::packFrame()
    {
      // First, see if we have an overflow frame that was too big for the last pkt
      if (fOutBuf->haveOverflowData()) {
        // Overflow data is frame data left over from the previous packet,
        // since one packet may not be able to hold a whole frame.
        // Use this frame before reading a new one from the source
        unsigned frameSize = fOutBuf->overflowDataSize();
        struct timeval presentationTime = fOutBuf->overflowPresentationTime();
        unsigned durationInMicroseconds = fOutBuf->overflowDurationInMicroseconds();
        fOutBuf->useOverflowData();

        afterGettingFrame1(frameSize, 0, presentationTime, durationInMicroseconds);
      } else {
        // No frame data at all, so ask the source for some.
        // Normal case: we need to read a new frame from the source
        if (fSource == NULL)
          return;

        // Update some positions in the buffer
        fCurFrameSpecificHeaderPosition = fOutBuf->curPacketSize();
        fCurFrameSpecificHeaderSize = frameSpecificHeaderSize();
        fOutBuf->skipBytes(fCurFrameSpecificHeaderSize);
        fTotalFrameSpecificHeaderSizes += fCurFrameSpecificHeaderSize;

        // Get the next frame from the source
        fSource->getNextFrame(fOutBuf->curPtr(), // where the new data should start
                              fOutBuf->totalBytesAvailable(), // space left in the buffer
                              afterGettingFrame, // the source's read may be queued in the task scheduler, so pass it the function to call once a frame is ready
                              this,
                              ourHandleClosure, // called when the source ends (e.g. end of file)
                              this);
      }
    }

  fSource is defined in MediaSink and was assigned in its startPlaying() from the sourceVideo argument. sourceVideo's getNextFrame() is implemented in FramedSource (the virtual part is the doGetNextFrame() it calls):

    void FramedSource::getNextFrame(unsigned char* to, unsigned maxSize,
                                    afterGettingFunc* afterGettingFunc,
                                    void* afterGettingClientData,
                                    onCloseFunc* onCloseFunc,
                                    void* onCloseClientData)
    {
      // Make sure we're not already being read:
      if (fIsCurrentlyAwaitingData) {
        envir() << "FramedSource[" << this << "]::getNextFrame(): attempting to read more than once at the same time!\n";
        exit(1);
      }

      fTo = to;
      fMaxSize = maxSize;
      fNumTruncatedBytes = 0; // by default; could be changed by doGetNextFrame()
      fDurationInMicroseconds = 0; // by default; could be changed by doGetNextFrame()
      fAfterGettingFunc = afterGettingFunc;
      fAfterGettingClientData = afterGettingClientData;
      fOnCloseFunc = onCloseFunc;
      fOnCloseClientData = onCloseClientData;
      fIsCurrentlyAwaitingData = True;

      doGetNextFrame();
    }

  sourceVideo was instantiated via H264VideoStreamFramer::createNew(), so the doGetNextFrame() that runs is the one implemented in H264VideoStreamFramer:

    void H264VideoStreamFramer::doGetNextFrame()
    {
      //fParser->registerReadInterest(fTo, fMaxSize);
      //continueReadProcessing();
      fInputSource->getNextFrame(fTo, fMaxSize,
                                 afterGettingFrame, this,
                                 FramedSource::handleClosure, this);
    }

  Here fInputSource was initialized, in H264VideoStreamFramer's base class, to the H264InputDevice[video_type]->videoSource() argument. VideoOpenFileSource inherits from OpenFileSource, so this doGetNextFrame() goes through FramedSource::getNextFrame() once again; this time the doGetNextFrame() it invokes is the one implemented in OpenFileSource:

    void OpenFileSource::incomingDataHandler1() {
      int ret;

      if (!isCurrentlyAwaitingData()) return; // we're not ready for the data yet

      ret = readFromFile();
      if (ret < 0) {
        handleClosure(this);
        fprintf(stderr, "In Grab Image, the source stops being readable!!!!\n");
      }
      else if (ret == 0)
      {
        if( uSecsToDelay >= uSecsToDelayMax )
        {
          uSecsToDelay = uSecsToDelayMax;
        }else{
          uSecsToDelay *= 2;
        }
        nextTask() = envir().taskScheduler().scheduleDelayedTask(uSecsToDelay, (TaskFunc*)incomingDataHandler, this);
      }
      else {
        nextTask() = envir().taskScheduler().scheduleDelayedTask(0, (TaskFunc*)afterGetting, this);
      }
    }

  Once a frame has been read, the scheduled delayed task afterGetting() runs; it is implemented in the parent class FramedSource:

    void FramedSource::afterGetting(FramedSource* source)
    {
      source->fIsCurrentlyAwaitingData = False;
      // indicates that we can be read again
      // Note that this needs to be done here, in case the "fAfterFunc"
      // called below tries to read another frame (which it usually will)

      if (source->fAfterGettingFunc != NULL) {
        (*(source->fAfterGettingFunc))(source->fAfterGettingClientData,
                                       source->fFrameSize,
                                       source->fNumTruncatedBytes,
                                       source->fPresentationTime,
                                       source->fDurationInMicroseconds);
      }
    }

  The fAfterGettingFunc pointer was set in getNextFrame(); in MultiFramedRTPSink::packFrame() it was set to MultiFramedRTPSink::afterGettingFrame():

    void MultiFramedRTPSink::afterGettingFrame(void* clientData, unsigned numBytesRead,
                                               unsigned numTruncatedBytes,
                                               struct timeval presentationTime,
                                               unsigned durationInMicroseconds)
    {
      MultiFramedRTPSink* sink = (MultiFramedRTPSink*)clientData;
      sink->afterGettingFrame1(numBytesRead, numTruncatedBytes,
                               presentationTime, durationInMicroseconds);
    }

  On to the afterGettingFrame1() implementation:

    void MultiFramedRTPSink::afterGettingFrame1(
        unsigned frameSize,
        unsigned numTruncatedBytes,
        struct timeval presentationTime,
        unsigned durationInMicroseconds)
    {
      if (fIsFirstPacket) {
        // Record the fact that we're starting to play now:
        gettimeofday(&fNextSendTime, NULL);
      }

      // If the buffer offered for a frame is too small, the frame gets
      // truncated; all we can do is warn the user.
      if (numTruncatedBytes > 0) {

        unsigned const bufferSize = fOutBuf->totalBytesAvailable();
        envir()
            << "MultiFramedRTPSink::afterGettingFrame1(): The input frame data was too large for our buffer size ("
            << bufferSize
            << "). "
            << numTruncatedBytes
            << " bytes of trailing data was dropped! Correct this by increasing \"OutPacketBuffer::maxSize\" to at least "
            << OutPacketBuffer::maxSize + numTruncatedBytes
            << ", *before* creating this 'RTPSink'. (Current value is "
            << OutPacketBuffer::maxSize << ".)\n";
      }
      unsigned curFragmentationOffset = fCurFragmentationOffset;
      unsigned numFrameBytesToUse = frameSize;
      unsigned overflowBytes = 0;

      // If the packet already holds frame data and no more frames may be
      // added to it, save the newly obtained frame for later.
      // If we have already packed one or more frames into this packet,
      // check whether this new frame is eligible to be packed after them.
      // (This is independent of whether the packet has enough room for this
      // new frame; that check comes later.)
      if (fNumFramesUsedSoFar > 0) {
        // The packet already holds a frame and no new frame may follow it,
        // so just record the new frame.
        if ((fPreviousFrameEndedFragmentation && !allowOtherFramesAfterLastFragment())
            || !frameCanAppearAfterPacketStart(fOutBuf->curPtr(), frameSize))
        {
          // Save away this frame for next time:
          numFrameBytesToUse = 0;
          fOutBuf->setOverflowData(fOutBuf->curPacketSize(), frameSize,
                                   presentationTime, durationInMicroseconds);
        }
      }

      // Indicates whether the data just packed was the last piece of the previous frame.
      fPreviousFrameEndedFragmentation = False;

      // Work out how much of the obtained frame can go into the current
      // packet; the rest is saved as overflow data.
      if (numFrameBytesToUse > 0) {
        // Check whether this frame overflows the packet
        if (fOutBuf->wouldOverflow(frameSize)) {
          // Don't use this frame now; instead, save it as overflow data, and
          // send it in the next packet instead. However, if the frame is too
          // big to fit in a packet by itself, then we need to fragment it (and
          // use some of it in this packet, if the payload format permits this.)
          if (isTooBigForAPacket(frameSize)
              && (fNumFramesUsedSoFar == 0 || allowFragmentationAfterStart())) {
            // We need to fragment this frame, and use some of it now:
            overflowBytes = computeOverflowForNewFrame(frameSize);
            numFrameBytesToUse -= overflowBytes;
            fCurFragmentationOffset += numFrameBytesToUse;
          } else {
            // We don't use any of this frame now:
            overflowBytes = frameSize;
            numFrameBytesToUse = 0;
          }
          fOutBuf->setOverflowData(fOutBuf->curPacketSize() + numFrameBytesToUse,
                                   overflowBytes, presentationTime, durationInMicroseconds);
        } else if (fCurFragmentationOffset > 0) {
          // This is the last fragment of a frame that was fragmented over
          // more than one packet. Do any special handling for this case:
          fCurFragmentationOffset = 0;
          fPreviousFrameEndedFragmentation = True;
        }
      }

      if (numFrameBytesToUse == 0 && frameSize > 0) {
        // The packet holds data and no new data is being added, so send it.
        // (Hard to see this case happening, though!)
        // Send our packet now, because we have filled it up:
        sendPacketIfNecessary();
      } else {
        // Pack data into the packet.

        // Use this frame in our outgoing packet:
        unsigned char* frameStart = fOutBuf->curPtr();
        fOutBuf->increment(numFrameBytesToUse);
        // do this now, in case "doSpecialFrameHandling()" calls "setFramePadding()" to append padding bytes

        // Here's where any payload format specific processing gets done:
        doSpecialFrameHandling(curFragmentationOffset, frameStart,
                               numFrameBytesToUse, presentationTime, overflowBytes);

        ++fNumFramesUsedSoFar;

        // Update the time at which the next packet should be sent, based
        // on the duration of the frame that we just packed into it.
        // However, if this frame has overflow data remaining, then don't
        // count its duration yet.
        if (overflowBytes == 0) {
          fNextSendTime.tv_usec += durationInMicroseconds;
          fNextSendTime.tv_sec += fNextSendTime.tv_usec / 1000000;
          fNextSendTime.tv_usec %= 1000000;
        }

        // Send the packet if necessary; otherwise keep packing data into it.
        // Send our packet now if (i) it's already at our preferred size, or
        // (ii) (heuristic) another frame of the same size as the one we just
        // read would overflow the packet, or
        // (iii) it contains the last fragment of a fragmented frame, and we
        // don't allow anything else to follow this or
        // (iv) one frame per packet is allowed:
        if (fOutBuf->isPreferredSize()
            || fOutBuf->wouldOverflow(numFrameBytesToUse)
            || (fPreviousFrameEndedFragmentation
                && !allowOtherFramesAfterLastFragment())
            || !frameCanAppearAfterPacketStart(
                   fOutBuf->curPtr() - frameSize, frameSize)) {
          // The packet is ready to be sent now
          sendPacketIfNecessary();
        } else {
          // There's room for more frames; try getting another:
          packFrame();
        }
      }
    }

Now look at the function that actually sends the data:

    void MultiFramedRTPSink::sendPacketIfNecessary()
    {
      // Send the packet:
      if (fNumFramesUsedSoFar > 0) {
    #ifdef TEST_LOSS
        if ((our_random()%10) != 0) // simulate 10% packet loss #####
    #endif
        if (!fRTPInterface.sendPacket(fOutBuf->packet(), fOutBuf->curPacketSize())) {
          // if failure handler has been specified, call it
          if (fOnSendErrorFunc != NULL)
            (*fOnSendErrorFunc)(fOnSendErrorData);
        }
        ++fPacketCount;
        fTotalOctetCount += fOutBuf->curPacketSize();
        fOctetCount += fOutBuf->curPacketSize() - rtpHeaderSize
                       - fSpecialHeaderSize - fTotalFrameSpecificHeaderSizes;

        ++fSeqNo; // for next time
      }

      // If overflow data remains, adjust the buffer:
      if (fOutBuf->haveOverflowData()
          && fOutBuf->totalBytesAvailable() > fOutBuf->totalBufferSize() / 2) {
        // Efficiency hack: Reset the packet start pointer to just in front of
        // the overflow data (allowing for the RTP header and special headers),
        // so that we probably don't have to "memmove()" the overflow data
        // into place when building the next packet:
        unsigned newPacketStart = fOutBuf->curPacketSize() -
            (rtpHeaderSize + fSpecialHeaderSize + frameSpecificHeaderSize());
        fOutBuf->adjustPacketStart(newPacketStart);
      } else {
        // Normal case: Reset the packet start pointer back to the start:
        fOutBuf->resetPacketStart();
      }
      fOutBuf->resetOffset();
      fNumFramesUsedSoFar = 0;

      if (fNoFramesLeft) {
        // No data left at all, so finish up.
        // We're done:
        onSourceClosure(this);
      } else {
        // There is more data; pack and send again at the next send time.
        // We have more frames left to send. Figure out when the next frame
        // is due to start playing, then make sure that we wait this long before
        // sending the next packet.
        struct timeval timeNow;
        gettimeofday(&timeNow, NULL);
        int secsDiff = fNextSendTime.tv_sec - timeNow.tv_sec;
        int64_t uSecondsToGo = secsDiff * 1000000
            + (fNextSendTime.tv_usec - timeNow.tv_usec);
        if (uSecondsToGo < 0 || secsDiff < 0) { // sanity check: Make sure that the time-to-delay is non-negative:
          uSecondsToGo = 0;
        }

        // Delay this amount of time:
        nextTask() = envir().taskScheduler().scheduleDelayedTask(uSecondsToGo,
                                                                 (TaskFunc*) sendNext, this);
      }
    }

  After a packet is sent, doEventLoop() later runs the scheduled task sendNext(), which sends the next packet and continues the cycle. Audio data is sent the same way.

To summarize the call sequence (with credit to 牛搞):

  1. Unicast data sending:
     For unicast, data is sent only once the client's "PLAY" command has been received; RTSPClientSession::handleCmd_PLAY() makes the call:

    void RTSPServer::RTSPClientSession
    ::handleCmd_PLAY(ServerMediaSubsession* subsession, char const* cseq,
                     char const* fullRequestStr)
    {
      ... ...

      fStreamStates[i].subsession->startStream(fOurSessionId,
                                               fStreamStates[i].streamToken,
                                               (TaskFunc*)noteClientLiveness,
                                               this,
                                               rtpSeqNum,
                                               rtpTimestamp);
      ... ...
    }
  startStream() is defined in OnDemandServerMediaSubsession:

    void OnDemandServerMediaSubsession::startStream(unsigned clientSessionId,
                                                    void* streamToken,
                                                    TaskFunc* rtcpRRHandler,
                                                    void* rtcpRRHandlerClientData,
                                                    unsigned short& rtpSeqNum,
                                                    unsigned& rtpTimestamp)
    {
      StreamState* streamState = (StreamState*)streamToken;
      Destinations* destinations = (Destinations*)(fDestinationsHashTable->Lookup((char const*)clientSessionId));
      if (streamState != NULL) {
        streamState->startPlaying(destinations, rtcpRRHandler, rtcpRRHandlerClientData);
        if (streamState->rtpSink() != NULL) {
          rtpSeqNum = streamState->rtpSink()->currentSeqNo();
          rtpTimestamp = streamState->rtpSink()->presetNextTimestamp();
        }
      }
    }

  startPlaying() is implemented in the StreamState class:

  void StreamState::startPlaying(Destinations* dests,
                                 TaskFunc* rtcpRRHandler, void* rtcpRRHandlerClientData)
  {
    if (dests == NULL) return;

    if (!fAreCurrentlyPlaying && fMediaSource != NULL) {
      if (fRTPSink != NULL) {
        fRTPSink->startPlaying(*fMediaSource, afterPlayingStreamState, this);
        fAreCurrentlyPlaying = True;
      } else if (fUDPSink != NULL) {
        fUDPSink->startPlaying(*fMediaSource, afterPlayingStreamState, this);
        fAreCurrentlyPlaying = True;
      }
    }

    if (fRTCPInstance == NULL && fRTPSink != NULL) {
      // Create (and start) a 'RTCP instance' for this RTP sink:
      fRTCPInstance = RTCPInstance::createNew(fRTPSink->envir(), fRTCPgs,
                                              fTotalBW, (unsigned char*)fMaster.fCNAME,
                                              fRTPSink, NULL /* we're a server */);
      // Note: This starts RTCP running automatically
    }

    if (dests->isTCP) {
      // Change RTP and RTCP to use the TCP socket instead of UDP:
      if (fRTPSink != NULL) {
        fRTPSink->addStreamSocket(dests->tcpSocketNum, dests->rtpChannelId);
      }
      if (fRTCPInstance != NULL) {
        fRTCPInstance->addStreamSocket(dests->tcpSocketNum, dests->rtcpChannelId);
        fRTCPInstance->setSpecificRRHandler(dests->tcpSocketNum, dests->rtcpChannelId,
                                            rtcpRRHandler, rtcpRRHandlerClientData);
      }
    } else {
      // Tell the RTP and RTCP 'groupsocks' about this destination
      // (in case they don't already have it):
      if (fRTPgs != NULL) fRTPgs->addDestination(dests->addr, dests->rtpPort);
      if (fRTCPgs != NULL) fRTCPgs->addDestination(dests->addr, dests->rtcpPort);
      if (fRTCPInstance != NULL) {
        fRTCPInstance->setSpecificRRHandler(dests->addr.s_addr, dests->rtcpPort,
                                            rtcpRRHandler, rtcpRRHandlerClientData);
      }
    }
  }

  This function then calls startPlaying() on the RTPSink. RTPSink does not override it, so the call goes straight to the base-class implementation in MediaSink. From there, capture, packetization, and sending proceed just as in the multicast case.
