I'll answer your second question first: instead of waiting for the app to crash, you can stop pulling audio from the track by checking the number of samples available in the CMSampleBufferRef you're reading; for example (this code also appears in the second half of my answer):
CMSampleBufferRef sample = [readerOutput copyNextSampleBuffer];
// check for NULL *before* asking for the sample count
if (!sample || (CMSampleBufferGetNumSamples(sample) == 0)) {
    // handle end of audio track here
    return;
}
As for your first question: it depends on the type of audio you're grabbing - it can be in either PCM (uncompressed) or VBR (compressed) format. I won't even address the PCM part, because sending uncompressed audio data over the network from one phone to another is unwise - it's unnecessarily expensive and will clog your network bandwidth. So we're left with VBR data. For that, you have to send the contents of the AudioBuffer and the AudioStreamPacketDescription you extracted from the sample. But then again, my code probably explains it best:
- (void)broadcastSample
{
    [broadcastLock lock];

    CMSampleBufferRef sample = [readerOutput copyNextSampleBuffer];
    if (!sample || (CMSampleBufferGetNumSamples(sample) == 0)) {
        // end of the audio track: tell the clients and stop broadcasting
        Packet *packet = [Packet packetWithType:PacketTypeEndOfSong];
        packet.sendReliably = NO;
        [self sendPacketToAllClients:packet];
        [sampleBroadcastTimer invalidate];
        [broadcastLock unlock];   // don't leave the lock held on early exit
        return;
    }

    NSLog(@"SERVER: going through sample loop");
    Boolean isBufferDataReady = CMSampleBufferDataIsReady(sample);

    CMBlockBufferRef CMBuffer = CMSampleBufferGetDataBuffer(sample);
    AudioBufferList audioBufferList;

    CheckError(CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
                   sample,
                   NULL,
                   &audioBufferList,
                   sizeof(audioBufferList),
                   NULL,
                   NULL,
                   kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                   &CMBuffer),
               "could not read sample data");

    const AudioStreamPacketDescription *inPacketDescriptions;
    size_t packetDescriptionsSizeOut;
    size_t inNumberPackets;

    CheckError(CMSampleBufferGetAudioStreamPacketDescriptionsPtr(sample,
                                                                 &inPacketDescriptions,
                                                                 &packetDescriptionsSizeOut),
               "could not read sample packet descriptions");

    inNumberPackets = packetDescriptionsSizeOut / sizeof(AudioStreamPacketDescription);

    AudioBuffer audioBuffer = audioBufferList.mBuffers[0];

    for (int i = 0; i < inNumberPackets; ++i)
    {
        NSLog(@"going through packets loop");
        SInt64 dataOffset = inPacketDescriptions[i].mStartOffset;
        UInt32 dataSize   = inPacketDescriptions[i].mDataByteSize;

        size_t packetSpaceRemaining      = MAX_PACKET_SIZE - packetBytesFilled - packetDescriptionsBytesFilled;
        size_t packetDescrSpaceRemaining = MAX_PACKET_DESCRIPTIONS_SIZE - packetDescriptionsBytesFilled;

        // if the next compressed packet won't fit, ship what we have so far
        if ((packetSpaceRemaining < (dataSize + AUDIO_STREAM_PACK_DESC_SIZE)) ||
            (packetDescrSpaceRemaining < AUDIO_STREAM_PACK_DESC_SIZE))
        {
            if (![self encapsulateAndShipPacket:packet packetDescriptions:packetDescriptions packetID:assetOnAirID])
                break;
        }

        memcpy((char *)packet + packetBytesFilled,
               (const char *)audioBuffer.mData + dataOffset, dataSize);

        memcpy((char *)packetDescriptions + packetDescriptionsBytesFilled,
               [self encapsulatePacketDescription:inPacketDescriptions[i]
                                     mStartOffset:packetBytesFilled],
               AUDIO_STREAM_PACK_DESC_SIZE);

        packetBytesFilled             += dataSize;
        packetDescriptionsBytesFilled += AUDIO_STREAM_PACK_DESC_SIZE;

        // if this is the last packet, then ship it
        if (i == (inNumberPackets - 1)) {
            NSLog(@"woooah! this is the last packet (%d).. so we will ship it!", i);
            if (![self encapsulateAndShipPacket:packet packetDescriptions:packetDescriptions packetID:assetOnAirID])
                break;
        }
    }

    [broadcastLock unlock];
}
You don't have to worry about the helper methods I used in the above code, such as the ones that add a header to each packet (I'm creating my own protocol here - you can create your own). For more information, see this tutorial.
- (BOOL)encapsulateAndShipPacket:(void *)source
              packetDescriptions:(void *)packetDescriptions
                        packetID:(NSString *)packetID
{
    // build the outgoing packet: header, then audio data, then descriptions
    char *headerPacket = (char *)malloc(MAX_PACKET_SIZE + AUDIO_BUFFER_PACKET_HEADER_SIZE + packetDescriptionsBytesFilled);

    appendInt32(headerPacket, 'SNAP', 0);
    appendInt32(headerPacket, packetNumber, 4);
    appendInt16(headerPacket, PacketTypeAudioBuffer, 8);
    // filler keeps the following int32s on 4-byte boundaries
    UInt16 filler = 0x00;
    appendInt16(headerPacket, filler, 10);
    appendInt32(headerPacket, packetBytesFilled, 12);
    appendInt32(headerPacket, packetDescriptionsBytesFilled, 16);
    appendUTF8String(headerPacket, [packetID UTF8String], 20);

    int offset = AUDIO_BUFFER_PACKET_HEADER_SIZE;
    memcpy(headerPacket + offset, (char *)source, packetBytesFilled);
    offset += packetBytesFilled;
    memcpy(headerPacket + offset, (char *)packetDescriptions, packetDescriptionsBytesFilled);

    NSData *completePacket = [NSData dataWithBytes:headerPacket
                                            length:AUDIO_BUFFER_PACKET_HEADER_SIZE + packetBytesFilled + packetDescriptionsBytesFilled];

    NSLog(@"sending packet number %lu to all peers", packetNumber);
    NSError *error;
    if (![_session sendDataToAllPeers:completePacket withDataMode:GKSendDataReliable error:&error]) {
        NSLog(@"Error sending data to clients: %@", error);
    }

    // reset the accumulators for the next packet
    packetBytesFilled = 0;
    packetDescriptionsBytesFilled = 0;
    packetNumber++;
    free(headerPacket);

    return YES;
}
- (char *)encapsulatePacketDescription:(AudioStreamPacketDescription)inPacketDescription
                          mStartOffset:(SInt64)mStartOffset
{
    // 12 bytes, not 16: for mStartOffset we send a 32-bit integer, not the 64-bit original
    char *packetDescription = (char *)malloc(AUDIO_STREAM_PACK_DESC_SIZE);

    appendInt32(packetDescription, (UInt32)mStartOffset, 0);
    appendInt32(packetDescription, inPacketDescription.mVariableFramesInPacket, 4);
    appendInt32(packetDescription, inPacketDescription.mDataByteSize, 8);

    return packetDescription;
}
Receiving the data:
- (void)receiveData:(NSData *)data fromPeer:(NSString *)peerID inSession:(GKSession *)session context:(void *)context
{
    Packet *packet = [Packet packetWithData:data];
    if (packet == nil)
    {
        NSLog(@"Invalid packet: %@", data);
        return;
    }

    Player *player = [self playerWithPeerID:peerID];
    if (player != nil)
    {
        player.receivedResponse = YES;  // this is the new bit
    } else {
        // don't redeclare player here, or the outer variable stays nil
        player = [[Player alloc] init];
        player.peerID = peerID;
        [_players setObject:player forKey:player.peerID];
    }

    if (self.isServer)
    {
        [Logger Log:@"SERVER: we just received packet"];
        [self serverReceivedPacket:packet fromPlayer:player];
    }
    else
    {
        [self clientReceivedPacket:packet];
    }
}
Notes:
There are a lot of networking details I've left out here (for example, in the receiving-data section I use a lot of custom objects without expanding on their definitions). I didn't include them because explaining all of that is beyond the scope of a single answer on SO. However, you can follow Ray Wenderlich's excellent tutorial. He takes the time to explain networking principles, and the architecture I use above is taken almost verbatim from him. There is one catch, though (see the next point).
Depending on your project, GKSession may not be suitable (especially if your project is real-time, or if you need more than 2-3 devices connected simultaneously) - it has a lot of limitations. You'll have to dig deeper and use Bonjour directly. iPhone cool projects has a nice quick chapter with a good example of using Bonjour services. It's not as scary as it sounds (Apple's documentation on the subject is somewhat overbearing).
I noticed you use GCD for multithreading. Again, if you're dealing with real-time audio, you don't want to use fancy frameworks that do the heavy lifting for you (GCD is one of them). For more on this topic, read this excellent article. Also read the long discussion between me and justin in the comments of this answer.
You may want to check out MTAudioProcessingTap, introduced in iOS 6. It could save you some hassle when processing AVAssets. I haven't tested it; it came out after I had done all my work.
Last but not least, you may want to check out the Learning Core Audio book. It's the widely acknowledged reference on this subject. I remember you were stuck when you asked this question. Core Audio is heavy duty and takes time to sink in. SO will only give you pointers; you'll have to take the time to absorb the material yourself, and then you'll figure out how things work. Good luck!
This is very helpful, and I'm still working through it. But I found I don't know much about how to handle the packet once the client receives it. I'll let you know if I figure it out. – lancy
I've updated my answer – abbood
Thanks again; it helps me a lot. But I guess I still have a long way to go.. – lancy