I am capturing an image from the camera (using UIImagePickerController with UIImagePickerControllerSourceTypeCamera) and saving it to the documents directory. I then load that image in a different view controller and use the CIDetector API with CIFaceFeature to extract the face portion.
The problem is that although the image itself loads correctly, no face is detected in it. If I store the same image in the main bundle, the face is detected.
I don't know where the problem is; I have tried everything. Perhaps the issue lies in the UIImage, or in the format in which the camera image is saved to the documents directory.
Please help. I would appreciate it.
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                         NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *path = [documentsDirectory stringByAppendingPathComponent:@"SampleImage.jpg"];
    // Note: a compression quality of 0 produces the lowest-quality JPEG.
    NSData *data = UIImageJPEGRepresentation(image, 0);
    [data writeToFile:path atomically:YES];
    [picker dismissModalViewControllerAnimated:YES];
    FCVC *fcvc = [[FCVC alloc] initWithImage:image];
    [self.navigationController pushViewController:fcvc animated:YES];
}
In FCVC's viewDidLoad I call the following method, passing the image:
- (void)markFaces:(UIImage *)pic
{
    CIImage *image = [CIImage imageWithCGImage:pic.CGImage];
    CGImageRef masterFaceImage = NULL;
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                                  forKey:CIDetectorAccuracy]];
    // Create an array containing all the faces detected in the image.
    NSArray *features = [detector featuresInImage:image];
    for (CIFaceFeature *faceFeature in features)
    {
        // Crop the face out of the source image. (The original used
        // facePicture.CGImage, which is undefined here; it should be pic.CGImage.)
        masterFaceImage = CGImageCreateWithImageInRect(pic.CGImage, faceFeature.bounds);
    }
    self.masterExtractedFace = [UIImage imageWithCGImage:masterFaceImage];
    CGImageRelease(masterFaceImage); // balance the CGImageCreate... call
}
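One thing worth checking here (an assumption on my part, not something the question confirms is the cause): Core Image reports feature bounds in a bottom-left-origin coordinate system, while CGImageCreateWithImageInRect expects a top-left-origin rect, so the crop rect may need its y-coordinate flipped before cropping. A minimal sketch:

    // Sketch: convert a CIFaceFeature rect (bottom-left origin) into the
    // CGImage coordinate space (top-left origin) before cropping.
    CGRect faceRect = faceFeature.bounds;
    faceRect.origin.y = CGImageGetHeight(pic.CGImage)
                        - faceRect.origin.y
                        - faceRect.size.height;
    CGImageRef cropped = CGImageCreateWithImageInRect(pic.CGImage, faceRect);

Without this conversion the crop still succeeds, but it can grab the wrong region of the photo.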
Thanks in advance.
How can I detect faces in landscape mode? – siva 2012-07-23 11:51:17
Check the image's kCGImagePropertyOrientation – sai 2012-07-27 05:58:47
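Following the comment above, one likely explanation (hedged, since the question doesn't confirm it): photos from the camera carry a non-default imageOrientation, which is lost when the UIImage's CGImage is wrapped in a CIImage, and CIDetector then looks for upright faces in rotated pixels. You can pass the orientation to the detector via featuresInImage:options: with the CIDetectorImageOrientation key. A sketch, where the value 6 (an EXIF "right, top" orientation, typical of a portrait photo from the rear camera) is an assumption:

    // Sketch: tell the detector how the pixel data is rotated. The EXIF
    // value 6 here is an assumption for a portrait rear-camera photo; in
    // real code derive it from the UIImage's imageOrientation.
    NSDictionary *opts = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6]
                                                     forKey:CIDetectorImageOrientation];
    NSArray *features = [detector featuresInImage:image options:opts];

This would also explain why a bundled copy of the image can detect while the camera original does not, if the bundled copy was saved with its pixels already upright.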