Wednesday, April 8, 2015

Dump H264 Raw Data From LIVE555 Client


      Many people may encounter the same situation: after successfully building LIVE555, they run testRTSPClient to connect to an RTSP server and the debug log looks perfectly fine; but when they dump the H.264 data and try to play it with VLC or ffplay, it is unplayable. Of course, if the H.264 data is delivered directly to an H.264 decoder, errors always occur.


   Since I found no article on the web that covers this concisely with complete code, I fill the gap here.

The code is based on LIVE555\testProgs\testRTSPClient.cpp.

Modify the constructor and destructor of DummySink as below (around line 479):


unsigned char *pH264 = NULL;

DummySink::DummySink(UsageEnvironment& env, MediaSubsession& subsession, char const* streamId)
  : MediaSink(env),
    fSubsession(subsession) {
  fStreamId = strDup(streamId);
  fReceiveBuffer = new u_int8_t[DUMMY_SINK_RECEIVE_BUFFER_SIZE];
  pH264 = (unsigned char*)malloc(256*1024);
}

DummySink::~DummySink() {
  delete[] fReceiveBuffer;
  delete[] fStreamId;
  if(NULL != pH264)
   free(pH264);
}



Then modify afterGettingFrame (the overload with the debug log) as follows:


void DummySink::afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
      struct timeval presentationTime, unsigned /*durationInMicroseconds*/) {
#if(0)
  // We've just received a frame of data.  (Optionally) print out information about it:
#ifdef DEBUG_PRINT_EACH_RECEIVED_FRAME
  if (fStreamId != NULL) envir() << "Stream \"" << fStreamId << "\"; ";
  envir() << fSubsession.mediumName() << "/" << fSubsession.codecName() << ":\tReceived " << frameSize << " bytes";
  if (numTruncatedBytes > 0) envir() << " (with " << numTruncatedBytes << " bytes truncated)";
  char uSecsStr[6+1]; // used to output the 'microseconds' part of the presentation time
  sprintf(uSecsStr, "%06u", (unsigned)presentationTime.tv_usec);
  envir() << ".\tPresentation time: " << (int)presentationTime.tv_sec << "." << uSecsStr;
  if (fSubsession.rtpSource() != NULL && !fSubsession.rtpSource()->hasBeenSynchronizedUsingRTCP()) {
    envir() << "!"; // mark the debugging output to indicate that this presentation time is not RTCP-synchronized
  }
#ifdef DEBUG_PRINT_NPT
  envir() << "\tNPT: " << fSubsession.getNormalPlayTime(presentationTime);
#endif
  envir() << "\n";
#endif
#else
  printf("__FUNCTION__  = %s\n", __FUNCTION__ );
 if(0 == strncmp(fSubsession.codecName(), "H264", 16))
 { 
  unsigned char nalu_header[4] = { 0, 0, 0, 1 };   
  unsigned char extraData[256]; 
  unsigned int num = 0;    
  
  SPropRecord *pSPropRecord;
  pSPropRecord = parseSPropParameterSets(fSubsession.fmtp_spropparametersets(), num);  
  
  unsigned int extraLen;
  extraLen = 0;

  // pSPropRecord[0] is the SPS, pSPropRecord[1] is the PPS
  for(unsigned int i = 0; i < num; i++){ 
   memcpy(&extraData[extraLen], &nalu_header[0], 4);
   extraLen += 4;
   memcpy(&extraData[extraLen], pSPropRecord[i].sPropBytes, pSPropRecord[i].sPropLength);
   extraLen += pSPropRecord[i].sPropLength;
  }/*for i*/

  // Start code for the frame NALU that follows the SPS/PPS
  memcpy(&extraData[extraLen], &nalu_header[0], 4);
  extraLen += 4;

  delete[] pSPropRecord ;  

  memcpy(pH264, &extraData[0], extraLen);
  memcpy(pH264 + extraLen, fReceiveBuffer, frameSize); 
  
  int totalSize;
  totalSize = extraLen + frameSize;

  // Opened once; the file is closed by the OS when the process exits
  static FILE *fp = fopen("saved.h264", "wb");

  fwrite(pH264, 1,  totalSize, fp);
  fflush(fp);
  printf("\tsaved %d bytes\n", totalSize);
 }/*if 0 == strncmp(fSubsession.codecName(), "H264", 16)*/
#endif  
  // Then continue, to request the next frame of data:
  continuePlaying();
}





That is, the H.264 data the LIVE555 client hands us lacks the SPS (Sequence Parameter Set) and PPS (Picture Parameter Set), as well as the start codes, for each H.264 slice: over RTP, the SPS and PPS are carried out-of-band in the SDP (sprop-parameter-sets) rather than in the stream itself.

Here I prepend the SPS and PPS, each preceded by the start-code flag, which is "0x00 0x00 0x00 0x01".


You could refer to here for more detail about SPS and PPS.

2 comments:

  1. The compilation succeeded, and the video file (saved.h264), captured from an IP (network) camera, was saved to the project's Debug folder.
     
    Likewise, do you know how to save an audio file (pcm or wav)?

    please help me.

    Thank you.

    1. I have never worked on RTSP audio, but you could refer to another post of mine, http://gaiger-programming.blogspot.com/2015/03/understand-wave-format-and-implement.html, where I explain the WAV format. However, most audio streaming in RTSP is not in an uncompressed format, so its help is limited.
