There are two approaches. The first is to export the file into the app's own directory as a .caf file, using the AVAssetReader, AVAssetWriter, AVAssetReaderAudioMixOutput, and AVAssetWriterInput APIs; sample code is available at http://www.subfurther.com/blog/2010/12/13/from-ipod-library-to-pcm-samples-in-far-fewer-steps-than-were-previously-necessary/ which explains the process clearly and already provides the code. The drawbacks are that it is slow, and every audio file you read leaves a .caf file in the app's folder (a .caf file is roughly ten times the size of the corresponding .mp3).
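To make that first approach concrete, here is a minimal sketch of the export path. The method name, the 44.1 kHz / 16-bit stereo output settings, and the synchronous pumping loop are my own assumptions for illustration; the blog post linked above contains the full version.
<pre class="brush:objc">
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Export one iPod-library item to an uncompressed .caf file.
- (void)exportToCaf:(NSURL *)asset_url toPath:(NSString *)caf_path
{
    NSError *error = nil;
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:asset_url options:nil];

    // Reader side: decode the audio tracks to PCM.
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
    AVAssetReaderAudioMixOutput *reader_output =
        [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:
             [asset tracksWithMediaType:AVMediaTypeAudio]
                                                                audioSettings:nil];
    [reader addOutput:reader_output];

    // Writer side: write the decoded samples into a .caf container as 16-bit
    // stereo PCM at 44.1 kHz (an assumed format; match it to the source asset).
    AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:[NSURL fileURLWithPath:caf_path]
                                                     fileType:AVFileTypeCoreAudioFormat
                                                        error:&error];
    AudioChannelLayout layout = {0};
    layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
    NSDictionary *pcm_settings = @{
        AVFormatIDKey:               @(kAudioFormatLinearPCM),
        AVSampleRateKey:             @44100.0,
        AVNumberOfChannelsKey:       @2,
        AVChannelLayoutKey:          [NSData dataWithBytes:&layout length:sizeof(layout)],
        AVLinearPCMBitDepthKey:      @16,
        AVLinearPCMIsFloatKey:       @NO,
        AVLinearPCMIsBigEndianKey:   @NO,
        AVLinearPCMIsNonInterleaved: @NO };
    AVAssetWriterInput *writer_input =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                           outputSettings:pcm_settings];
    [writer addInput:writer_input];

    [writer startWriting];
    [reader startReading];
    [writer startSessionAtSourceTime:kCMTimeZero];

    // Pump: pull decoded sample buffers from the reader and append them to the writer.
    while (reader.status == AVAssetReaderStatusReading) {
        CMSampleBufferRef buffer = [reader_output copyNextSampleBuffer];
        if (!buffer) { break; }
        while (!writer_input.readyForMoreMediaData) {
            [NSThread sleepForTimeInterval:0.05]; // crude back-pressure, fine for a sketch
        }
        [writer_input appendSampleBuffer:buffer];
        CFRelease(buffer);
    }
    [writer_input markAsFinished];
    [writer finishWriting];
}
</pre>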
The second approach is to read the file's contents into memory in chunks; it is mainly useful for audio analysis:
<pre class="brush:objc">
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>

// The parameter is the AssetURL obtained from the MPMediaItem.
- (void)loadToMemory:(NSURL *)asset_url
{
    NSError *reader_error = nil;
    AVURLAsset *item_choosed_asset = [AVURLAsset URLAssetWithURL:asset_url options:nil];
    AVAssetReader *item_reader = [AVAssetReader assetReaderWithAsset:item_choosed_asset
                                                               error:&reader_error];
    if (reader_error) {
        NSLog(@"failed to create asset reader, reason: %@", [reader_error description]);
        return;
    }
    // Hand only the audio tracks to the audio mix output.
    NSArray *asset_tracks = [item_choosed_asset tracksWithMediaType:AVMediaTypeAudio];
    AVAssetReaderAudioMixOutput *item_reader_output =
        [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:asset_tracks
                                                                audioSettings:nil];
    if ([item_reader canAddOutput:item_reader_output]) {
        [item_reader addOutput:item_reader_output];
    } else {
        NSLog(@"the reader can not add the output");
        return;
    }

    size_t total_converted_bytes = 0;     // bytes in the current sample buffer
    size_t converted_count = 0;           // running total of bytes read so far
    CMItemCount converted_sample_num = 0; // samples in the current buffer
    size_t sample_size = 0;               // bytes per sample (depends on channel count)
    short *data_buffer = NULL;
    CMBlockBufferRef next_buffer_data = NULL;

    [item_reader startReading];
    while (item_reader.status == AVAssetReaderStatusReading) {
        CMSampleBufferRef next_buffer = [item_reader_output copyNextSampleBuffer];
        if (next_buffer) {
            total_converted_bytes = CMSampleBufferGetTotalSampleSize(next_buffer); // total bytes in next_buffer
            sample_size = CMSampleBufferGetSampleSize(next_buffer, 0);             // size of sample 0 in next_buffer
            converted_sample_num = CMSampleBufferGetNumSamples(next_buffer);       // number of samples in next_buffer
            NSLog(@"the number of samples is %ld", (long)converted_sample_num);
            NSLog(@"the size of the sample is %zu", sample_size);
            NSLog(@"the size of the whole buffer is %zu", total_converted_bytes);

            // Copy the data into data_buffer.
            // With this approach each sample buffer is parsed as soon as it is obtained,
            // instead of loading the whole file into memory and parsing it afterwards.
            // Each call to copyNextSampleBuffer returns a fixed-size block of samples
            // (only the last one is shorter), stored as 16-bit (short) units. The size of
            // one sample depends on the channel count and is given by
            // CMSampleBufferGetSampleSize, so a block is converted_sample_num * sample_size
            // bytes, which is what we allocate for data_buffer on the first pass.
            if (!data_buffer) {
                data_buffer = (short *)malloc(total_converted_bytes);
            }
            next_buffer_data = CMSampleBufferGetDataBuffer(next_buffer);
            OSStatus buffer_status = CMBlockBufferCopyDataBytes(next_buffer_data, 0,
                                                                total_converted_bytes, data_buffer);
            if (buffer_status != kCMBlockBufferNoErr) {
                NSLog(@"something wrong happened when copying data bytes");
            }
            converted_count += total_converted_bytes;
            /*
             At this point the audio data sits in data_buffer as raw, uncompressed PCM,
             and can be parsed or processed here.
             */
            CFRelease(next_buffer); // copyNextSampleBuffer returns a retained buffer
        } else {
            NSLog(@"total sample size %zu", converted_count);
            break;
        }
    }
    if (data_buffer) {
        free(data_buffer);
    }
    if (item_reader.status == AVAssetReaderStatusCompleted) {
        NSLog(@"read over......");
    } else {
        NSLog(@"read failed;");
    }
}
</pre>
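For reference, this is roughly how the method above might be invoked. It assumes the item comes from an MPMediaQuery (an MPMediaPickerController selection works the same way); note that MPMediaItemPropertyAssetURL returns nil for DRM-protected or cloud-only items.
<pre class="brush:objc">
#import <MediaPlayer/MediaPlayer.h>

// Grab a song from the library and feed its asset URL to loadToMemory:.
MPMediaQuery *query = [MPMediaQuery songsQuery];
MPMediaItem *item = [[query items] objectAtIndex:0];   // assumes the library has at least one song
NSURL *asset_url = [item valueForProperty:MPMediaItemPropertyAssetURL];
if (asset_url) {                                       // nil for DRM-protected or cloud-only items
    [self loadToMemory:asset_url];
}
</pre>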
The actual parsing then depends on the interfaces of whichever parsing library you use.
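As a minimal illustration of what can be done before handing the data to a library: data_buffer holds interleaved 16-bit PCM, so each block can be inspected directly. The sketch below reuses the variable names from the loop above (imagine it placed right after the CMBlockBufferCopyDataBytes call) and computes the peak amplitude of one block.
<pre class="brush:objc">
// Scan one block of interleaved 16-bit PCM for its peak amplitude.
int peak = 0;
size_t sample_total = total_converted_bytes / sizeof(short); // number of shorts in this block
for (size_t i = 0; i < sample_total; i++) {
    int s = data_buffer[i];          // widen to int so negating the minimum short is safe
    if (s < 0) { s = -s; }
    if (s > peak) { peak = s; }
}
NSLog(@"peak amplitude of this block: %d", peak);
</pre>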