Improve CIImage filter speed (Objective-C). I wrote the following code to apply a sepia filter to an image:
- (void)applySepiaFilter {
    // Deep-copy the current image onto the undo stack via archiving.
    NSData *buffer = [NSKeyedArchiver archivedDataWithRootObject:self.mainImage.image];
    [_images push:[NSKeyedUnarchiver unarchiveObjectWithData:buffer]];

    UIImage *u = self.mainImage.image;
    CIImage *image = [[CIImage alloc] initWithCGImage:u.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"
                                  keysAndValues:kCIInputImageKey, image,
                                                @"inputIntensity", @0.8, nil];
    CIImage *outputImage = [filter outputImage];
    self.mainImage.image = [self imageFromCIImage:outputImage];
}
- (UIImage *)imageFromCIImage:(CIImage *)ciImage {
    // Render the CIImage to a CGImage, then wrap it in a UIImage.
    CIContext *ciContext = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:[ciImage extent]];
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}
When I run this code, the UI lags for 1–2 seconds. I've heard that Core Image is faster than Core Graphics, but I'm unimpressed with the rendering time here. I'm wondering whether this would be faster to process in Core Graphics, or even OpenCV (which is used elsewhere in the project)? If not, is there any way I can optimize this code to run faster?
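One thing I've tried since posting: Apple's documentation says creating a `CIContext` is expensive, so I cache a single context in an ivar instead of building a new one on every call. This is only a sketch of that idea; the `_ciContext` ivar name is my own, and the rest of the code is unchanged.

```objectivec
// Sketch: reuse one CIContext instead of creating one per render.
// _ciContext is an assumed CIContext ivar on this class.
- (CIContext *)sharedCIContext {
    if (_ciContext == nil) {
        // nil options gives the default (GPU-backed where available) context.
        _ciContext = [CIContext contextWithOptions:nil];
    }
    return _ciContext;
}

- (UIImage *)imageFromCIImage:(CIImage *)ciImage {
    CGImageRef cgImage = [[self sharedCIContext] createCGImage:ciImage
                                                      fromRect:[ciImage extent]];
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}
```

This helps if `applySepiaFilter` is called repeatedly, since context setup cost is paid only once.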
Have you used Instruments to find out what's actually slow here? – zneak
@zneak Which instrument would that be? –
Off the top of my head, something like the "Time Profiler". – zneak