I'm streaming H264 NALs from a server, wrapping them as FLV tags, and passing them into a NetStream with appendBytes (data generation mode). The video plays fine, but the stream lags by about a second. Is there a way to prevent buffering when using NetStream.appendBytes with H264 NALs?
I've tried setting bufferTime and bufferTimeMax, with no luck preventing the buffering.
I've also tried various combinations of NetStream.seek() and NetStream.appendBytesAction() with RESET_SEEK and END_SEQUENCE, again to no avail.
Is there a trick I'm missing here? Is there any way to prevent this delay?
Interestingly, I don't see the delay on the audio (PCMU) I'm passing in, so I end up with a lip-sync problem.
Update: still stuck, so posting the code I'm using:
var timestamp : uint = networkPayload.readUnsignedInt();
if (videoTimestampBase == 0) {
    videoTimestampBase = timestamp;
}
timestamp = timestamp - videoTimestampBase;
timestamp = timestamp / 90.0; // RTP 90 kHz clock -> milliseconds

// skip 7 bytes of marker
networkPayload.position = 7;
var nalType : int = networkPayload.readByte();
nalType &= 0x1F;
networkPayload.position = 7;

// reformat Annex B bitstream encoding to MP4 style - remove timestamp and bitstream marker (3 bytes)
var mp4Payload : ByteArray = new ByteArray();
var mp4PayloadLength : int = networkPayload.bytesAvailable;
mp4Payload.writeUnsignedInt(mp4PayloadLength);
mp4Payload.writeBytes(networkPayload, 7, mp4PayloadLength);
mp4Payload.position = 0;

if (nalType == 8) {
    // PPS
    ppsNAL = new ByteArray();
    // special case for PPS/SPS - don't length-encode
    ppsLength = mp4Payload.bytesAvailable - 4;
    ppsNAL.writeBytes(mp4Payload, 4, mp4Payload.bytesAvailable - 4);
    if (spsNAL == null) {
        return;
    }
} else if (nalType == 7) {
    // SPS
    spsNAL = new ByteArray();
    // special case for PPS/SPS - don't length-encode
    spsLength = mp4Payload.bytesAvailable - 4;
    spsNAL.writeBytes(mp4Payload, 4, mp4Payload.bytesAvailable - 4);
    if (ppsNAL == null) {
        return;
    }
}

if ((spsNAL != null) && (ppsNAL != null)) {
    Log.debug(TAG, "Writing sequence header: " + spsLength + "," + ppsLength + "," + timestamp);
    var sequenceHeaderTag : FLVTagVideo = new FLVTagVideo();
    sequenceHeaderTag.codecID = FLVTagVideo.CODEC_ID_AVC;
    sequenceHeaderTag.frameType = FLVTagVideo.FRAME_TYPE_KEYFRAME;
    sequenceHeaderTag.timestamp = timestamp;
    sequenceHeaderTag.avcPacketType = FLVTagVideo.AVC_PACKET_TYPE_SEQUENCE_HEADER;

    spsNAL.position = 1;
    var profile : int = spsNAL.readByte();
    var compatibility : int = spsNAL.readByte();
    var level : int = spsNAL.readByte();
    Log.debug(TAG, profile + "," + compatibility + "," + level + "," + spsLength);

    var avcc : ByteArray = new ByteArray();
    avcc.writeByte(0x01); // avcC version 1
    // profile, compatibility, level
    avcc.writeByte(profile);
    avcc.writeByte(compatibility);
    avcc.writeByte(0x20); // level (hard-coded; was: level)
    avcc.writeByte(0xff); // 111111 + 2-bit NAL size - 1
    avcc.writeByte(0xe1); // number of SPS
    avcc.writeByte(spsLength >> 8); // 16-bit SPS byte count
    avcc.writeByte(spsLength);
    avcc.writeBytes(spsNAL, 0, spsLength); // the SPS
    avcc.writeByte(0x01); // number of PPS
    avcc.writeByte(ppsLength >> 8); // 16-bit PPS byte count
    avcc.writeByte(ppsLength);
    avcc.writeBytes(ppsNAL, 0, ppsLength);
    sequenceHeaderTag.data = avcc;

    var bytes : ByteArray = new ByteArray();
    sequenceHeaderTag.write(bytes);
    stream.appendBytes(bytes);

    // clear the PPS/SPS until the next buffer
    ppsNAL = null;
    spsNAL = null;
} else {
    if ((timestamp != currentTimestamp) || (currentVideoTag == null)) {
        if (currentVideoTag != null) {
            currentVideoTag.data = currentSegment;
            var tagData : ByteArray = new ByteArray();
            currentVideoTag.write(tagData);
            stream.appendBytes(tagData);
        }
        currentVideoTag = new FLVTagVideo();
        currentVideoTag.codecID = FLVTagVideo.CODEC_ID_AVC;
        currentVideoTag.frameType = FLVTagVideo.FRAME_TYPE_INTER;
        if (nalType == 5) {
            currentVideoTag.frameType = FLVTagVideo.FRAME_TYPE_KEYFRAME;
        }
        lastNalType = nalType;
        currentVideoTag.avcPacketType = FLVTagVideo.AVC_PACKET_TYPE_NALU;
        currentVideoTag.timestamp = timestamp;
        currentVideoTag.avcCompositionTimeOffset = 0;
        currentSegment = new ByteArray();
        currentTimestamp = timestamp;
    }
    mp4Payload.position = 0;
    currentSegment.writeBytes(mp4Payload);
}
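For reference, the avcC box (AVCDecoderConfigurationRecord) assembled byte-by-byte above has a fixed layout. This Python sketch is an illustration of that layout only, not the AS3 in use; the SPS/PPS byte strings are made-up minimal examples:

```python
def build_avcc(sps: bytes, pps: bytes) -> bytes:
    """Build an AVCDecoderConfigurationRecord from raw SPS and PPS
    NAL units, mirroring the AS3 byte writes above."""
    record = bytearray()
    record.append(0x01)        # configurationVersion
    record.append(sps[1])      # AVCProfileIndication (SPS byte 1)
    record.append(sps[2])      # profile_compatibility (SPS byte 2)
    record.append(sps[3])      # AVCLevelIndication (SPS byte 3)
    record.append(0xFF)        # 6 reserved bits + lengthSizeMinusOne = 3 (4-byte NAL lengths)
    record.append(0xE1)        # 3 reserved bits + numOfSequenceParameterSets = 1
    record += len(sps).to_bytes(2, "big")   # 16-bit SPS byte count
    record += sps
    record.append(0x01)        # numOfPictureParameterSets
    record += len(pps).to_bytes(2, "big")   # 16-bit PPS byte count
    record += pps
    return bytes(record)

# Made-up minimal parameter sets for illustration:
sps = bytes([0x67, 0x42, 0xE0, 0x1E, 0x8C])  # NAL type 7, profile 66 (Baseline)
pps = bytes([0x68, 0xCE, 0x3C, 0x80])        # NAL type 8
avcc = build_avcc(sps, pps)
```

Note that, unlike this sketch, the AS3 above hard-codes the level byte to 0x20 instead of copying it from the SPS, which is worth double-checking against what the encoder actually emits.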
Update: some more detail; here are the timestamps being passed in:
DEBUG: StreamPlayback: 66,-32,20,19
DEBUG: StreamPlayback: Timestamp: 0
DEBUG: StreamPlayback: Timestamp: 63
DEBUG: StreamPlayback: stream status update: netStatus NetStream.Buffer.Full
DEBUG: StreamPlayback: Timestamp: 137
DEBUG: StreamPlayback: Timestamp: 200
DEBUG: StreamPlayback: Timestamp: 264
DEBUG: StreamPlayback: Timestamp: 328
DEBUG: StreamPlayback: Timestamp: 403
DEBUG: StreamPlayback: Timestamp: 467
DEBUG: StreamPlayback: Timestamp: 531
DEBUG: StreamPlayback: Timestamp: 595
DEBUG: StreamPlayback: Timestamp: 659
DEBUG: StreamPlayback: Timestamp: 723
DEBUG: StreamPlayback: Timestamp: 830
DEBUG: StreamPlayback: Timestamp: 894
DEBUG: StreamPlayback: Timestamp: 958
DEBUG: StreamPlayback: Timestamp: 1021
DEBUG: StreamPlayback: Timestamp: 1086
DEBUG: StreamPlayback: Timestamp: 1161
DEBUG: StreamPlayback: Timestamp: 1225
DEBUG: StreamPlayback: Timestamp: 1289
DEBUG: StreamPlayback: Timestamp: 1353
DEBUG: StreamPlayback: Timestamp: 1418
DEBUG: StreamPlayback: Timestamp: 1491
DEBUG: StreamPlayback: Timestamp: 1556
DEBUG: StreamPlayback: Timestamp: 1633
DEBUG: StreamPlayback: Timestamp: 1684
DEBUG: StreamPlayback: Timestamp: 1747
DEBUG: StreamPlayback: stream status update: netStatus NetStream.Video.DimensionChange
DEBUG: StreamPlayback: Timestamp: 1811
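Those timestamps come out of the RTP clock: H.264 over RTP runs at 90 kHz, so the code divides the delta by 90 to get milliseconds, and the ~64 ms steps in the log correspond to roughly 15-16 fps. A quick sanity-check of that conversion (illustration only, not the AS3 in use; the base value is arbitrary):

```python
RTP_CLOCK_HZ = 90000  # H.264 RTP clock rate

def rtp_to_ms(rtp_timestamp: int, base: int) -> int:
    """Convert an RTP timestamp to milliseconds relative to `base`,
    matching the (timestamp - videoTimestampBase) / 90 step above."""
    return (rtp_timestamp - base) * 1000 // RTP_CLOCK_HZ

base = 123456  # arbitrary first-packet timestamp
# one frame at ~15.6 fps is ~64 ms, i.e. 5760 ticks at 90 kHz
delta_ms = rtp_to_ms(base + 5760, base)
```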
Cheers,
Kev
Could it be a timestamp issue? Try feeding in two audio tags first (appended consecutively), then one video tag (frame), in that order... 'bufferTime' etc. only governs "ahead-of-time" decoding, so that content is ready by the time the playhead reaches it. With **H.264** it cannot be disabled, because the decoder needs a group of pictures (for reference) before it can display the current frame's image. –
I've actually got audio switched off atm, so it's just the video streaming. The timestamps are generated from the RTP timestamp divided by 90 to bring it to milliseconds. I'll take a look and see if something is getting mixed up in there. But net result, you're saying I shouldn't need to flush the stream; it should play immediately regardless. –
Yes, don't flush on every append. Just keep appending and let the Flash decoder take care of things. If you use 'RESET_SEEK', the decoder then expects a **keyframe** video tag next. All audio tags are audio keyframes. –