https://www.mail-archive.com/live-devel@lists.live555.com/msg05506.html
-----ask--------------------------------
Hi,
We are trying to stream from a live source with Live555. 

We implemented our own DeviceSource class. In this class we implement
doGetNextFrame in the following (logical) way; we have removed all the
unnecessary implementation details so you can see the idea.

If no frame is available, we do the following:

    nextTask() = envir().taskScheduler().scheduleDelayedTask(30000,
        (TaskFunc*)nextTime, this);

If a frame is available, we do the following:

    if (fFrameSize < fMaxSize)
    {
        // copy the frame to Live555
        memcpy(fTo, Buffer_getUserPtr(hEncBuf), fFrameSize);
        nextTask() = envir().taskScheduler().scheduleDelayedTask(0,
            (TaskFunc*)FramedSource::afterGetting, this);
    }
    else
    {
        // What should we do? (We do not understand what to do in this case.)
    }

As you can see, we would like to feed Live555 frame by frame from the live
source. However, after some calls to doGetNextFrame, fMaxSize is smaller
than fFrameSize and the application ends up in a deadlock state. We do not
understand what we should do in order to eliminate this state. We could
give part of a frame to Live555, but that would mean we are no longer
feeding the library frame by frame. (We could build a byte buffer between
the live source and Live555, but we are not sure it is the right way.)
Please let us know the preferred way of handling this issue.
Thanks,
Sagi

-----ans--------------------------------
The test "if (fFrameSize < fMaxSize)" should use "<=", not "<".
Also, I hope you are setting "fFrameSize" properly before you get to this "if" statement.
You can probably replace the last "scheduleDelayedTask(0, ...)" statement with:
    FramedSource::afterGetting(this);
which is more efficient (and will avoid infinite recursion, because you're reading from a live source).
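
For illustration, a minimal sketch of the corrected delivery path.
deliverFrame() is a hypothetical helper, not part of the Live555 API; the
fields fTo, fMaxSize, fFrameSize, fNumTruncatedBytes and fPresentationTime
come from the "FramedSource" base class, while hEncBuf/Buffer_getUserPtr
are from the poster's capture API:

    void CapDeviceSource::deliverFrame(unsigned frameSize) {
        if (!isCurrentlyAwaitingData()) return; // no doGetNextFrame() pending

        if (frameSize <= fMaxSize) {            // note "<=", per the fix above
            fFrameSize = frameSize;
            fNumTruncatedBytes = 0;
        } else {
            // The downstream buffer is too small: deliver what fits and
            // record how many bytes were dropped.
            fFrameSize = fMaxSize;
            fNumTruncatedBytes = frameSize - fMaxSize;
        }
        memcpy(fTo, Buffer_getUserPtr(hEncBuf), fFrameSize);
        gettimeofday(&fPresentationTime, NULL);

        // Deliver directly - more efficient than scheduleDelayedTask(0, ...):
        FramedSource::afterGetting(this);
    }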

-----ask--------------------------------
Hi Ross, 

We are setting fFrameSize to the size of the frame before the posted code.
I am familiar with fNumTruncatedBytes but as you say the data will be
dropped. We do not want this to happen.
I am not sure I understand your last statement: "make sure that your
downstream object always has enough buffer space to avoid truncation -
i.e., so that fMaxSize is always >= fFrameSize". How can I assure that?
The Live555 library requests exactly 150,000 bytes. We give it frame by
frame, and the last frame does not fit into the remaining space, so we
are in the situation of fMaxSize < fFrameSize.

If I understand you correctly, we have two options:

1. Feed Live555 frame by frame and, on the last frame, truncate the frame
   and lose the data.
2. Handle an internal buffer inside our DeviceSource in order to give
   Live555 part of a frame on the last call. That means Live555 will
   handle the recognition of frames, and in this scenario we do not
   understand what fPresentationTime should be, because we are sending
   only part of a frame to the Live555 library and on the next call we
   will send the following part of the frame.

What is the preferred way of action?
Thanks,
Sagi

-----ans--------------------------------
This is true only for the "StreamParser" class, which you should *not* be using, because you are delivering discrete frames - rather than a byte stream - to your downstream object. In particular, you should be using a "*DiscreteFramer" object downstream, and not a "*Framer".
What objects (classes) do you have 'downstream' from your input device, and what type of data (i.e., what codec) is your "DeviceSource" object trying to deliver? (This may help identify the problem.)

-----ask--------------------------------
Hi Ross, 

OK, we used the StreamParser class, and this probably caused the problem
we have.
This is our Device class 

class CapDeviceSource: public FramedSource {

We are trying to stream MPEG4 (Later on we will move to H.264) 

What is the best class to derive from, instead of FramedSource, in order
to use a DiscreteFramer downstream object? If we understood you correctly
it is MPEG4VideoStreamDiscreteFramer, and we should implement the function
doGetNextFrame; but looking at the code we thought it would be best to
implement the function afterGettingFrame1, yet it is not virtual, so we
are probably missing something.
Thanks,
Sagi

-----ans--------------------------------
Provided that your source object delivers one frame at a time, you should be able to feed it directly into a "MPEG4VideoStreamDiscreteFramer", with no modifications.
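For reference, a minimal sketch of that wiring (CapDeviceSource::createNew
and its parameters are placeholders for the poster's own code; the
MPEG4VideoStreamDiscreteFramer::createNew() call is the standard liveMedia
one):

    CapDeviceSource* source = CapDeviceSource::createNew(*env /* , ... */);
    MPEG4VideoStreamDiscreteFramer* framer
        = MPEG4VideoStreamDiscreteFramer::createNew(*env, source);
    // 'framer' is then fed to the RTP sink, in place of the old
    // MPEG4VideoStreamFramer.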
No, there's nothing more for you to implement; just use "MPEG4VideoStreamDiscreteFramer" as is. (For H.264, however, it'll be a bit more complicated; you will need to implement your own subclass of "H264VideoStreamFramer" for that.)

-----ask--------------------------------
Hi Ross, 

Thanks for the hint, we understood our problem. We used
MPEG4VideoStreamFramer instead of MPEG4VideoStreamDiscreteFramer. We changed
this and now it looks much better.
Again, thank you very much for your great support and library. 

For the next stage we would like to use the H.264 codec, so I think we
should write our own H264VideoStreamDiscreteFramer; is that correct?
Thanks,
Sagi

-----ans--------------------------------
Yes, you need to write your own subclass of "H264VideoStreamFramer"; see http://www.live555.com/liveMedia/faq.html#h264-streaming
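
Note: current versions of the library ship an "H264VideoStreamDiscreteFramer" class that plays exactly this role, so today the custom subclass can often be avoided. A minimal sketch, assuming a device source "h264Source" that delivers one NAL unit at a time, without the 0x00000001 start code:

    H264VideoStreamDiscreteFramer* framer
        = H264VideoStreamDiscreteFramer::createNew(*env, h264Source);

-----ask--------------------------------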
Hi Ross, 

We are looking into audio stream support with Live555, and we would like
to know whether we can stream the following codecs through the library:
AAC-LC and/or AAC-HE.
Thanks,
Sagi

-----ans--------------------------------
Yes, you can do so using a "MPEG4GenericRTPSink", created with appropriate parameters to specify AAC audio. (Note, for example, how "ADTSAudioFileServerMediaSubsession" streams AAC audio that comes from an ADTS-format file.)
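
For illustration, a sketch modeled on ADTSAudioFileServerMediaSubsession::createNewRTPSink(). The configuration string is the AudioSpecificConfig as hex; "1210" below is an example value (AAC-LC, 44.1 kHz, stereo) that you must replace with the one matching your encoder settings:

    RTPSink* sink = MPEG4GenericRTPSink::createNew(
        envir(), rtpGroupsock,
        rtpPayloadTypeIfDynamic, // dynamic RTP payload type, e.g. 96
        44100,                   // RTP timestamp frequency = sampling rate
        "audio", "AAC-hbr",      // SDP media type and mode
        "1210",                  // config string for the SDP "a=fmtp" line
        2);                      // number of channels

-----ask--------------------------------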
Hi Ross,

We have implemented a stream for AAC audio and it works great, we also
implement a stream for H.264 and it also works great. We would like to
combine these two streams under one name.
Currently, we have one stream called h264Video and another stream called
aacAudio (different streams, separate DESCRIBEs). We would like to have one
stream called audioVideo which configures two SETUPs, one for the video and
one for the audio.
Can you please let us know what is the best way to implement it?
Thanks,
Sagi

-----ask--------------------------------
Hi Ross, 

We successfully combined the two streams into one stream and it works great.
The audio and video are at the same URL address. It seems to us that the
audio and video are synchronized, but we are not sure whether we need to
handle it in some way (other than setting the presentation time) or
whether it is all handled in your library. The only thing we are currently
doing is updating the presentation time for the audio and for the video.
We would appreciate your input on this matter.
Thanks,
Sagi

-----ans--------------------------------
Good. As you figured out, you can do this just by creating a single "ServerMediaSession" object, and adding two separate "ServerMediaSubsessions" to it.
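
For illustration, a minimal sketch of that setup ("audioVideo" is the stream name from the question; the two subsession classes stand in for the poster's own OnDemandServerMediaSubsession subclasses):

    ServerMediaSession* sms = ServerMediaSession::createNew(
        *env, "audioVideo", "audioVideo", "combined A/V session");
    sms->addSubsession(H264LiveServerMediaSubsession::createNew(*env /* ... */));
    sms->addSubsession(AACLiveServerMediaSubsession::createNew(*env /* ... */));
    rtspServer->addServerMediaSession(sms);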
Yes, if the presentation times of the two streams are in sync, and aligned with 'wall clock' time (i.e., the time that you'd get by calling "gettimeofday()"), and you are using RTCP (which is implemented by default in "OnDemandServerMediaSubsession"), then you will see A/V synchronization in standards-compliant clients.

-----ask--------------------------------
How is the presentation time of two streams synchronised?
I have to synchronise an MPEG-4 ES and a WAV file. I am able to send the
two streams together by creating a single ServerMediaSession and adding
two separate ServerMediaSubsessions, but they are not synchronised.
In the case of MPEG-4 ES video, gettimeofday() is called when the
constructor of MPEGVideoStreamFramer is called, and in the case of WAV, in
WAVAudioFileSource::doGetNextFrame(). I think this is why the video and
audio are not getting synchronised. So in this case, how should I
synchronise the audio and video?
Regards,
Nisha

-----ans--------------------------------
> how is the presentation time of two streams synchronised?
Please read the FAQ!
You *must* set accurate "fPresentationTime" values for each frame of each of your sources. These values - and only these values - are what are used for synchronization. If the "fPresentationTime" values are not accurate - and synchronized - at the server, then they cannot possibly become synchronized at a client.
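
For illustration, a sketch of per-frame presentation-time stamping inside a source's doGetNextFrame(), along the lines of what "WAVAudioFileSource" does ("frameDurationMicroseconds" is a placeholder for the real duration of the frame just delivered). The first frame is anchored to 'wall clock' time, and each later frame advances by the frame's duration, so two sources stamped this way stay mutually synchronised:

    if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
        // First frame: anchor the stream to 'wall clock' time.
        gettimeofday(&fPresentationTime, NULL);
    } else {
        // Later frames: advance by the duration of the previous frame.
        unsigned uSeconds = fPresentationTime.tv_usec + frameDurationMicroseconds;
        fPresentationTime.tv_sec += uSeconds / 1000000;
        fPresentationTime.tv_usec = uSeconds % 1000000;
    }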
