http://stackoverflow.com/questions/27279161/using-live555-to-stream-live-video-from-an-ip-camera-connected-to-an-h264-encode

I am using a custom Texas Instruments OMAP-L138 based board that basically consists of an ARM9 SoC and a DSP processor, connected to a camera lens. What I'm trying to do is capture the live video stream, send it to the DSP processor for H264 encoding, and receive the encoded video over uPP in packets of 8192 bytes. I want to use the testH264VideoStreamer supplied by Live555 to stream the H264-encoded video live over RTSP. The code I have modified is shown below:

#include <liveMedia.hh>
#include <BasicUsageEnvironment.hh>
#include <GroupsockHelper.hh>
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <string.h>
#include <errno.h>
#include <unistd.h> // to allow read()

UsageEnvironment* env;
H264VideoStreamFramer* videoSource;
RTPSink* videoSink;

//------------------------------------------------------------------------------
/* Open a file descriptor for the uPP device */
int stream = open("/dev/upp", O_RDONLY);
/* Static 8192-byte buffer (one uPP packet) that keeps its value between invocations */
static uint8_t buf[8192];

//------------------------------------------------------------------------------
// play() is used as the forwarding mechanism
//------------------------------------------------------------------------------
void play(); // forward

//------------------------------------------------------------------------------
// MAIN FUNCTION / ENTRY POINT
//------------------------------------------------------------------------------
int main(int argc, char** argv)
{
  // Begin by setting up our live555 usage environment:
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  env = BasicUsageEnvironment::createNew(*scheduler);

  // Create 'groupsocks' for RTP and RTCP:
  struct in_addr destinationAddress;
  destinationAddress.s_addr = chooseRandomIPv4SSMAddress(*env);
  // Note: This is a multicast address. If you wish instead to stream
  // using unicast, then you should use the "testOnDemandRTSPServer"
  // test program - not this test program - as a model.

  const unsigned short rtpPortNum = 18888;
  const unsigned short rtcpPortNum = rtpPortNum+1;
  const unsigned char ttl = 255;

  const Port rtpPort(rtpPortNum);
  const Port rtcpPort(rtcpPortNum);

  Groupsock rtpGroupsock(*env, destinationAddress, rtpPort, ttl);
  rtpGroupsock.multicastSendOnly(); // we're a SSM source
  Groupsock rtcpGroupsock(*env, destinationAddress, rtcpPort, ttl);
  rtcpGroupsock.multicastSendOnly(); // we're a SSM source

  // Create a 'H264 Video RTP' sink from the RTP 'groupsock':
  OutPacketBuffer::maxSize = 100000;
  videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96);

  // Create (and start) a 'RTCP instance' for this RTP sink:
  const unsigned estimatedSessionBandwidth = 500; // in kbps; for RTCP b/w share
  const unsigned maxCNAMElen = 100;
  unsigned char CNAME[maxCNAMElen+1];
  gethostname((char*)CNAME, maxCNAMElen);
  CNAME[maxCNAMElen] = '\0'; // just in case
  RTCPInstance* rtcp
    = RTCPInstance::createNew(*env, &rtcpGroupsock,
                              estimatedSessionBandwidth, CNAME,
                              videoSink, NULL /* we're a server */,
                              True /* we're a SSM source */);
  // Note: This starts RTCP running automatically

  /* Create the RTSP server */
  RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
  if (rtspServer == NULL)
  {
    *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
    exit(1);
  }
  ServerMediaSession* sms
    = ServerMediaSession::createNew(*env, "IPCAM @ TeReSol", "UPP Buffer",
                                    "Session streamed by \"testH264VideoStreamer\"",
                                    True /*SSM*/);
  sms->addSubsession(PassiveServerMediaSubsession::createNew(*videoSink, rtcp));
  rtspServer->addServerMediaSession(sms);

  char* url = rtspServer->rtspURL(sms);
  *env << "Play this stream using the URL \"" << url << "\"\n";
  delete[] url;

  // Start the streaming:
  *env << "Beginning streaming...\n";
  play();

  env->taskScheduler().doEventLoop(); // does not return

  return 0; // only to prevent compiler warning
}

//----------------------------------------------------------------------------------
// afterPlaying() -> Defines what to do once a buffer is streamed
//----------------------------------------------------------------------------------
void afterPlaying(void* /*clientData*/)
{
  *env << "...done reading from upp buffer\n";
  //videoSink->stopPlaying();
  //Medium::close(videoSource);
  // Note that this also closes the input file that this source read from.

  // Start playing once again to get the next buffer
  play();
  /* We don't need to close the device as long as we're reading from it.
     But if we do, use: close(stream); */
}

//----------------------------------------------------------------------------------
// play() Method -> Defines how to read and what to make of the input stream
//----------------------------------------------------------------------------------
void play()
{
  /* Read (sizeof buf) bytes from the file descriptor into buf */
  read(stream, &buf, sizeof buf);
  printf("Reading from UPP in to Buffer");

  /* Use the buffer as a 'byte-stream memory buffer source': */
  ByteStreamMemoryBufferSource* buffSource
    = ByteStreamMemoryBufferSource::createNew(*env, buf, sizeof buf, False);
  /* Passing False as the last createNew() argument means the buffer is not deleted when the source closes */
  if (buffSource == NULL)
  {
    *env << "Unable to read from \"" << "Buffer" << "\" as a byte-stream source\n";
    exit(1);
  }

  FramedSource* videoES = buffSource;

  // Create a framer for the Video Elementary Stream:
  videoSource = H264VideoStreamFramer::createNew(*env, videoES, False);

  // Finally, start playing:
  *env << "Beginning to read from UPP...\n";
  videoSink->startPlaying(*videoSource, afterPlaying, videoSink);
}

The problem is that although the code compiles successfully, I'm unable to get the desired output: the RTSP stream in VLC player is in play mode, but I can't see any video. I'd be grateful for any assistance in this matter. My description might be a little vague, but I'm happy to further explain any part that is required.
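Before looking at the accepted fix, one quick way to narrow down a "plays but shows no video" symptom is to check what the encoder actually puts into each 8192-byte packet. The probe below is only a sketch: it assumes nothing beyond the /dev/upp node and packet size described above, and simply prints the H.264 NAL unit types found at Annex-B start codes, so you can confirm that SPS (type 7) and PPS (type 8) appear somewhere in the stream.

// nal_probe.cpp - sanity check (not a fix): dump the NAL unit types found in
// one uPP packet, to confirm the DSP really emits Annex-B H.264 with SPS/PPS.
#include <cstdio>
#include <cstdint>
#include <fcntl.h>
#include <unistd.h>

int main()
{
    int fd = open("/dev/upp", O_RDONLY);      // device node from the question
    if (fd < 0) { perror("open /dev/upp"); return 1; }

    uint8_t buf[8192];                         // one uPP packet, as described above
    ssize_t n = read(fd, buf, sizeof buf);
    if (n <= 0) { perror("read"); close(fd); return 1; }

    for (ssize_t i = 0; i + 3 < n; ++i) {
        // Annex-B start codes are 00 00 01 or 00 00 00 01
        bool sc3 = (buf[i] == 0 && buf[i+1] == 0 && buf[i+2] == 1);
        bool sc4 = (i + 4 < n && buf[i] == 0 && buf[i+1] == 0 && buf[i+2] == 0 && buf[i+3] == 1);
        if (sc3 || sc4) {
            int nalType = buf[i + (sc4 ? 4 : 3)] & 0x1F;
            printf("offset %ld: NAL type %d\n", (long)i, nalType);
        }
    }
    close(fd);
    return 0;
}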

1 Answer

Okay, so I figured out what needed to be done and am writing it up for the benefit of all who might face a similar issue. What I needed to do was modify my testH264VideoStreamer.cpp and DeviceSource.cpp files so that the code reads data directly from the device (in my case the custom AM1808 board), stores it in a buffer, and streams it. The changes I made were:

testH264VideoStreamer.cpp

#include <liveMedia.hh>
#include <BasicUsageEnvironment.hh>
#include <GroupsockHelper.hh>
#include "DeviceSource.hh"
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <string.h>
#include <errno.h>
#include <unistd.h> // to allow read()

UsageEnvironment* env;
H264VideoStreamFramer* videoSource;
RTPSink* videoSink;

void play(); // forward

//-------------------------------------------------------------------------
// Entry point -> MAIN FUNCTION
//-------------------------------------------------------------------------
int main(int argc, char** argv)
{
  // Begin by setting up our usage environment:
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  env = BasicUsageEnvironment::createNew(*scheduler);

  // Create 'groupsocks' for RTP and RTCP:
  struct in_addr destinationAddress;
  destinationAddress.s_addr = chooseRandomIPv4SSMAddress(*env);
  // Note: This is a multicast address. If you wish instead to stream
  // using unicast, then you should use the "testOnDemandRTSPServer"
  // test program - not this test program - as a model.

  const unsigned short rtpPortNum = 18888;
  const unsigned short rtcpPortNum = rtpPortNum+1;
  const unsigned char ttl = 255;

  const Port rtpPort(rtpPortNum);
  const Port rtcpPort(rtcpPortNum);

  Groupsock rtpGroupsock(*env, destinationAddress, rtpPort, ttl);
  rtpGroupsock.multicastSendOnly(); // we're a SSM source
  Groupsock rtcpGroupsock(*env, destinationAddress, rtcpPort, ttl);
  rtcpGroupsock.multicastSendOnly(); // we're a SSM source

  // Create a 'H264 Video RTP' sink from the RTP 'groupsock':
  OutPacketBuffer::maxSize = 100000;
  videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96);

  // Create (and start) a 'RTCP instance' for this RTP sink:
  const unsigned estimatedSessionBandwidth = 500; // in kbps; for RTCP b/w share
  const unsigned maxCNAMElen = 100;
  unsigned char CNAME[maxCNAMElen+1];
  gethostname((char*)CNAME, maxCNAMElen);
  CNAME[maxCNAMElen] = '\0'; // just in case
  RTCPInstance* rtcp
    = RTCPInstance::createNew(*env, &rtcpGroupsock,
                              estimatedSessionBandwidth, CNAME,
                              videoSink, NULL /* we're a server */,
                              True /* we're a SSM source */);
  // Note: This starts RTCP running automatically

  RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
  if (rtspServer == NULL) {
    *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
    exit(1);
  }
  ServerMediaSession* sms
    = ServerMediaSession::createNew(*env, "ipcamera", "UPP Buffer",
                                    "Session streamed by \"testH264VideoStreamer\"",
                                    True /*SSM*/);
  sms->addSubsession(PassiveServerMediaSubsession::createNew(*videoSink, rtcp));
  rtspServer->addServerMediaSession(sms);

  char* url = rtspServer->rtspURL(sms);
  *env << "Play this stream using the URL \"" << url << "\"\n";
  delete[] url;

  // Start the streaming:
  *env << "Beginning streaming...\n";
  play();

  env->taskScheduler().doEventLoop(); // does not return

  return 0; // only to prevent compiler warning
}

//----------------------------------------------------------------------
// afterPlaying() -> called when the current frame has been streamed
//----------------------------------------------------------------------
void afterPlaying(void* /*clientData*/)
{
  play();
}

//------------------------------------------------------------------------
// play()
//------------------------------------------------------------------------
void play()
{
  // Open the input with the device as the source:
  DeviceSource* devSource
    = DeviceSource::createNew(*env);
  if (devSource == NULL)
  {
    *env << "Unable to read from \"" << "Buffer" << "\" as a byte-stream source\n";
    exit(1);
  }

  FramedSource* videoES = devSource;

  // Create a framer for the Video Elementary Stream:
  videoSource = H264VideoStreamFramer::createNew(*env, videoES, False);

  // Finally, start playing:
  *env << "Beginning to read from UPP...\n";
  videoSink->startPlaying(*videoSource, afterPlaying, videoSink);
}

DeviceSource.cpp

#include "DeviceSource.hh"
#include <GroupsockHelper.hh> // for "gettimeofday()"
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <fcntl.h>
#include <string.h>
#include <errno.h>
#include <string.h>
#include <unistd.h> //static uint8_t *buf = (uint8_t*)malloc(102400);
static uint8_t buf[];
int upp_stream;
//static uint8_t *bufPtr = buf; DeviceSource*
DeviceSource::createNew(UsageEnvironment& env)
{
return new DeviceSource(env);
} EventTriggerId DeviceSource::eventTriggerId = ; unsigned DeviceSource::referenceCount = ; DeviceSource::DeviceSource(UsageEnvironment& env):FramedSource(env)
{
if (referenceCount == )
{
upp_stream = open("/dev/upp",O_RDWR);
}
++referenceCount; if (eventTriggerId == )
{
eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
}
} DeviceSource::~DeviceSource(void) {
--referenceCount;
envir().taskScheduler().deleteEventTrigger(eventTriggerId);
eventTriggerId = ; if (referenceCount == )
{
}
} int loop_count; void DeviceSource::doGetNextFrame()
{
//for (loop_count=0; loop_count < 13; loop_count++)
//{
read(upp_stream,buf, ); //bufPtr+=8192; //}
deliverFrame();
} void DeviceSource::deliverFrame0(void* clientData)
{
((DeviceSource*)clientData)->deliverFrame();
} void DeviceSource::deliverFrame()
{
if (!isCurrentlyAwaitingData()) return; // we're not ready for the data yet u_int8_t* newFrameDataStart = (u_int8_t*) buf; //(u_int8_t*) buf; //%%% TO BE WRITTEN %%%
unsigned newFrameSize = sizeof(buf); //%%% TO BE WRITTEN %%% // Deliver the data here:
if (newFrameSize > fMaxSize) {
fFrameSize = fMaxSize;
fNumTruncatedBytes = newFrameSize - fMaxSize;
} else {
fFrameSize = newFrameSize;
}
gettimeofday(&fPresentationTime, NULL);
memmove(fTo, newFrameDataStart, fFrameSize);
FramedSource::afterGetting(this);
}

After compiling the code with these modifications, I was able to receive the video stream in VLC player.
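The DeviceSource.cpp listing above includes a DeviceSource.hh that isn't shown. A minimal declaration consistent with how the class is used there, modelled on the DeviceSource template that ships with Live555 but trimmed to match this .cpp, might look like the following sketch (not necessarily the exact header used on the board):

// DeviceSource.hh - minimal declaration matching the .cpp above (sketch).
#ifndef _DEVICE_SOURCE_HH
#define _DEVICE_SOURCE_HH

#include "FramedSource.hh"

class DeviceSource: public FramedSource {
public:
  static DeviceSource* createNew(UsageEnvironment& env);

  // Shared by all instances; used by the event-trigger mechanism:
  static EventTriggerId eventTriggerId;
  static unsigned referenceCount;

protected:
  DeviceSource(UsageEnvironment& env);
  virtual ~DeviceSource();

private:
  // Redefined virtual function:
  virtual void doGetNextFrame();

  static void deliverFrame0(void* clientData);
  void deliverFrame();
};

#endif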

Live555 to stream live video and audio in one RTSP stream

http://stackoverflow.com/questions/26082837/live555-to-stream-live-video-and-audio-in-one-rtsp-stream

I have been able to stream video on its own using live555, and audio on its own using live555.

But I want to have the video and audio playing in the same VLC session. My video is H264 encoded and my audio is AAC encoded. What do I need to do to pass these packets into a FramedSource?

Which MediaSubsession/DeviceSource do I override, given that this is not a fixed file but live video/live audio?

Thanks in advance!

1 Answer

In order to stream video/H264 and audio/MPEG4-GENERIC in the same RTSP unicast session, you should do something like this:

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh" int main()
{
TaskScheduler* scheduler = BasicTaskScheduler::createNew();
BasicUsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);
RTSPServer* rtspServer = RTSPServer::createNew(*env);
ServerMediaSession* sms = ServerMediaSession::createNew(*env);
sms->addSubsession(H264VideoFileServerMediaSubsession::createNew(*env, "test.264",false));
sms->addSubsession(ADTSAudioFileServerMediaSubsession::createNew(*env, "test.aac",false));
rtspServer->addServerMediaSession(sms);
}
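The snippet above streams from files (test.264 / test.aac); for the live case the question asks about, the usual route is to subclass OnDemandServerMediaSubsession and override createNewStreamSource() and createNewRTPSink(), plugging in your own FramedSource. Below is only a sketch: LiveH264Source is a hypothetical placeholder for your own capture/encode source (it is not a Live555 class), and the bitrate estimate is arbitrary.

// Sketch: wrapping a live H.264 source in an on-demand (unicast) subsession.
#include <liveMedia.hh>
#include <OnDemandServerMediaSubsession.hh>

class LiveH264Subsession: public OnDemandServerMediaSubsession {
public:
  LiveH264Subsession(UsageEnvironment& env)
    : OnDemandServerMediaSubsession(env, True /*reuseFirstSource*/) {}

protected:
  virtual FramedSource* createNewStreamSource(unsigned /*clientSessionId*/,
                                              unsigned& estBitrate) {
    estBitrate = 500; // kbps, rough estimate used for RTCP
    FramedSource* live = LiveH264Source::createNew(envir()); // placeholder source
    // The discrete framer expects one NAL unit per delivery (no start codes);
    // use H264VideoStreamFramer instead if you hand it an Annex-B byte stream.
    return H264VideoStreamDiscreteFramer::createNew(envir(), live);
  }

  virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock,
                                    unsigned char rtpPayloadTypeIfDynamic,
                                    FramedSource* /*inputSource*/) {
    return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
  }
};

// Usage: sms->addSubsession(new LiveH264Subsession(*env));
// A live AAC track would get its own subsession, added to the same ServerMediaSession.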

For the use of triggerEvent, see the following posts:

http://stackoverflow.com/questions/13863673/how-to-write-a-live555-framedsource-to-allow-me-to-stream-h-264-live

http://stackoverflow.com/questions/19427576/live555-x264-stream-live-source-based-on-testondemandrtspserver
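Both of those posts, and the DeviceSource code earlier on this page, revolve around Live555's event-trigger mechanism: an outside (capture or encoder) thread signals the single-threaded event loop, which then delivers the frame. In outline the pattern looks like this sketch, where TriggeredSource is a hypothetical class (not code from either post):

// Sketch of the Live555 event-trigger pattern described in the linked posts.
#include <liveMedia.hh>
#include <BasicUsageEnvironment.hh>

class TriggeredSource : public FramedSource {
public:
  static EventTriggerId eventTriggerId;

  TriggeredSource(UsageEnvironment& env) : FramedSource(env) {
    if (eventTriggerId == 0) {
      // Register once; the scheduler calls deliverFrame0 from the event-loop thread.
      eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
    }
  }

  // Called by the capture/encoder thread when a new frame is ready.
  // triggerEvent() is the one TaskScheduler call that is safe from another thread.
  void signalNewFrame() {
    envir().taskScheduler().triggerEvent(eventTriggerId, this);
  }

private:
  virtual void doGetNextFrame() { /* if a frame is already queued, call deliverFrame() */ }

  static void deliverFrame0(void* clientData) {
    ((TriggeredSource*)clientData)->deliverFrame();
  }
  void deliverFrame() {
    if (!isCurrentlyAwaitingData()) return;
    // Copy the queued frame into fTo, set fFrameSize and fPresentationTime, then:
    FramedSource::afterGetting(this);
  }
};

EventTriggerId TriggeredSource::eventTriggerId = 0;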

Ok, I finally got some time to spend on this and got it working! I'm sure there are others who will be begging to know how to do it so here it is.

You will need your own FramedSource to take each frame, encode it, and prepare it for streaming; I provide some of the source code for this below.

Essentially, throw your FramedSource into the H264VideoStreamDiscreteFramer, then throw that into the H264VideoRTPSink. Something like this:

scheduler = BasicTaskScheduler::createNew();
env = BasicUsageEnvironment::createNew(*scheduler);

framedSource = H264FramedSource::createNew(*env, 0, 0);
h264VideoStreamDiscreteFramer
  = H264VideoStreamDiscreteFramer::createNew(*env, framedSource);

// initialise the RTP Sink stuff here, look at
// testH264VideoStreamer.cpp to find out how

videoSink->startPlaying(*h264VideoStreamDiscreteFramer, NULL, videoSink);

env->taskScheduler().doEventLoop();
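The "initialise the RTP Sink stuff here" comment glosses over the part that actually creates videoSink. Condensed from testH264VideoStreamer.cpp (the same multicast/SSM setup quoted at the top of this page, with its stock port, TTL, and payload-type values), the missing piece slotted into the snippet above is roughly:

// Condensed from testH264VideoStreamer.cpp: multicast groupsock plus H264 RTP sink.
struct in_addr destinationAddress;
destinationAddress.s_addr = chooseRandomIPv4SSMAddress(*env);

const Port rtpPort(18888);          // stock test-program port
const unsigned char ttl = 255;

Groupsock* rtpGroupsock = new Groupsock(*env, destinationAddress, rtpPort, ttl);
rtpGroupsock->multicastSendOnly();  // we're a SSM source

OutPacketBuffer::maxSize = 100000;  // leave room for large NAL units
videoSink = H264VideoRTPSink::createNew(*env, rtpGroupsock, 96); // 96 = dynamic payload type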

Now, in your main render loop, hand the backbuffer you've saved to system memory over to your FramedSource so it can be encoded, etc. For more info on how to set up the encoding, check out this answer: How does one encode a series of images into H264 using the x264 C API?
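Concretely, "handing over the backbuffer" means calling the source's AddToBuffer() (defined in the FramedSource listing further down) once per rendered frame. A sketch of the render-loop side, where grabBackBufferRGB24(), renderFrame(), running, and the frame size are all placeholders for whatever your renderer actually provides:

// Render-loop side (sketch): push one captured RGB24 frame per iteration.
#include <cstdint>
#include <vector>

const int W = 1280, H = 720;              // example size; must match the encoder setup
const int frameBytes = W * H * 3;         // RGB24: 3 bytes per pixel

std::vector<uint8_t> rgb(frameBytes);

while (running) {
    renderFrame();                                   // draw as usual (placeholder)
    grabBackBufferRGB24(rgb.data(), frameBytes);     // backbuffer readback (placeholder)
    framedSource->AddToBuffer(rgb.data(), frameBytes); // encode + queue + triggerEvent
}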

My implementation is very much in a hacky state and has yet to be optimised at all; my D3D application runs at around 15 fps due to the encoding, ouch, so I will have to look into that. But for all intents and purposes this Stack Overflow question is answered, because I was mostly after how to stream it. I hope this helps other people.

As for my FramedSource, it looks a little something like this:

concurrent_queue<x264_nal_t> m_queue;
SwsContext* convertCtx;
x264_param_t param;
x264_t* encoder;
x264_picture_t pic_in, pic_out;

EventTriggerId H264FramedSource::eventTriggerId = 0;
unsigned H264FramedSource::FrameSize = 0;
unsigned H264FramedSource::referenceCount = 0;

int W = 1280; // example capture width; use your renderer's backbuffer size
int H = 720;  // example capture height

H264FramedSource* H264FramedSource::createNew(UsageEnvironment& env,
                                              unsigned preferredFrameSize,
                                              unsigned playTimePerFrame)
{
    return new H264FramedSource(env, preferredFrameSize, playTimePerFrame);
}

H264FramedSource::H264FramedSource(UsageEnvironment& env,
                                   unsigned preferredFrameSize,
                                   unsigned playTimePerFrame)
    : FramedSource(env),
      fPreferredFrameSize(fMaxSize),
      fPlayTimePerFrame(playTimePerFrame),
      fLastPlayTime(0),
      fCurIndex(0)
{
    if (referenceCount == 0)
    {
    }
    ++referenceCount;

    x264_param_default_preset(&param, "veryfast", "zerolatency");
    param.i_threads = 1;
    param.i_width = W;
    param.i_height = H;
    param.i_fps_num = 30; // example frame rate
    param.i_fps_den = 1;
    // Intra refresh:
    param.i_keyint_max = 60;
    param.b_intra_refresh = 1;
    // Rate control:
    param.rc.i_rc_method = X264_RC_CRF;
    param.rc.f_rf_constant = 25;
    param.rc.f_rf_constant_max = 35;
    param.i_sps_id = 7;
    // For streaming:
    param.b_repeat_headers = 1;
    param.b_annexb = 1;
    x264_param_apply_profile(&param, "baseline");

    encoder = x264_encoder_open(&param);
    pic_in.i_type = X264_TYPE_AUTO;
    pic_in.i_qpplus1 = 0;
    pic_in.img.i_csp = X264_CSP_I420;
    pic_in.img.i_plane = 3;
    x264_picture_alloc(&pic_in, X264_CSP_I420, W, H);

    convertCtx = sws_getContext(W, H, PIX_FMT_RGB24, W, H, PIX_FMT_YUV420P,
                                SWS_FAST_BILINEAR, NULL, NULL, NULL);

    if (eventTriggerId == 0)
    {
        eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
    }
}

H264FramedSource::~H264FramedSource()
{
    --referenceCount;
    if (referenceCount == 0)
    {
        // Reclaim our 'event trigger'
        envir().taskScheduler().deleteEventTrigger(eventTriggerId);
        eventTriggerId = 0;
    }
}

void H264FramedSource::AddToBuffer(uint8_t* buf, int surfaceSizeInBytes)
{
    uint8_t* surfaceData = new uint8_t[surfaceSizeInBytes];
    memcpy(surfaceData, buf, surfaceSizeInBytes);

    int srcstride = W*3; // RGB24: 3 bytes per pixel
    sws_scale(convertCtx, &surfaceData, &srcstride, 0, H, pic_in.img.plane, pic_in.img.i_stride);

    x264_nal_t* nals = NULL;
    int i_nals = 0;
    int frame_size = -1;

    frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);

    static bool finished = false;

    if (frame_size >= 0)
    {
        static bool alreadydone = false;
        if (!alreadydone)
        {
            x264_encoder_headers(encoder, &nals, &i_nals);
            alreadydone = true;
        }
        for (int i = 0; i < i_nals; ++i)
        {
            m_queue.push(nals[i]);
        }
    }
    delete [] surfaceData;
    surfaceData = NULL;

    envir().taskScheduler().triggerEvent(eventTriggerId, this);
}

void H264FramedSource::doGetNextFrame()
{
    deliverFrame();
}

void H264FramedSource::deliverFrame0(void* clientData)
{
    ((H264FramedSource*)clientData)->deliverFrame();
}

void H264FramedSource::deliverFrame()
{
    x264_nal_t nalToDeliver;

    if (fPlayTimePerFrame > 0 && fPreferredFrameSize > 0) {
        if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
            // This is the first frame, so use the current time:
            gettimeofday(&fPresentationTime, NULL);
        } else {
            // Increment by the play time of the previous data:
            unsigned uSeconds = fPresentationTime.tv_usec + fLastPlayTime;
            fPresentationTime.tv_sec += uSeconds/1000000;
            fPresentationTime.tv_usec = uSeconds%1000000;
        }

        // Remember the play time of this data:
        fLastPlayTime = (fPlayTimePerFrame*fFrameSize)/fPreferredFrameSize;
        fDurationInMicroseconds = fLastPlayTime;
    } else {
        // We don't know a specific play time duration for this data,
        // so just record the current time as being the 'presentation time':
        gettimeofday(&fPresentationTime, NULL);
    }

    if (!m_queue.empty())
    {
        m_queue.wait_and_pop(nalToDeliver);

        uint8_t* newFrameDataStart = (uint8_t*)0xD15EA5E;
        newFrameDataStart = (uint8_t*)(nalToDeliver.p_payload);
        unsigned newFrameSize = nalToDeliver.i_payload;

        // Deliver the data here:
        if (newFrameSize > fMaxSize) {
            fFrameSize = fMaxSize;
            fNumTruncatedBytes = newFrameSize - fMaxSize;
        }
        else {
            fFrameSize = newFrameSize;
        }

        memcpy(fTo, nalToDeliver.p_payload, nalToDeliver.i_payload);
        FramedSource::afterGetting(this);
    }
}
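One thing the listing glosses over: concurrent_queue is not a standard container. Any thread-safe queue with push(), empty(), and wait_and_pop() will do; a minimal mutex/condition-variable sketch with that interface (an assumption, not the implementation the author actually used) is below.

// Minimal thread-safe queue with the interface the listing above relies on.
#include <queue>
#include <mutex>
#include <condition_variable>

template <typename T>
class concurrent_queue {
public:
    void push(const T& value) {
        { std::lock_guard<std::mutex> lock(m_mutex); m_data.push(value); }
        m_cond.notify_one();
    }
    bool empty() const {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_data.empty();
    }
    void wait_and_pop(T& value) {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cond.wait(lock, [this]{ return !m_data.empty(); });
        value = m_data.front();
        m_data.pop();
    }
private:
    mutable std::mutex m_mutex;
    std::condition_variable m_cond;
    std::queue<T> m_data;
};

Also note that pushing x264_nal_t by value only copies the pointer to the payload; x264 owns that memory and may reuse it on the next x264_encoder_encode() call, so a more robust version would copy the payload bytes into the queue as well.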

