2013-06-05 100 views

I'm writing an OpenGL ES application for iOS and need to take an in-app screenshot of the rendered scene. Everything works fine when multisampling is off, but when I turn multisampling on, glReadPixels returns zeros instead of the correct pixel data (the scene itself is drawn correctly, and the image quality is noticeably better with multisampling).

I've checked a bunch of similar questions on SO and elsewhere, but none of them solve my problem, since I'm already doing everything they suggest:

  1. I take the screenshot after the buffers are resolved, but before the renderbuffer is presented.
  2. glReadPixels does not return an error.
  3. I even tried setting kEAGLDrawablePropertyRetainedBacking to YES and taking the screenshot after the buffer is presented; that doesn't work either.
  4. I'm targeting the OpenGL ES 1.x rendering API (the context is initialized with kEAGLRenderingAPIOpenGLES1).

I'm basically out of ideas about what could be wrong. Posting the question on SO is my last resort.

Here is the relevant source code:

Creating the framebuffer:

- (BOOL)createFramebuffer 
{ 

    glGenFramebuffersOES(1, &viewFramebuffer); 
    glGenRenderbuffersOES(1, &viewRenderbuffer); 

    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer); 
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer); 
    [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(CAEAGLLayer*)self.layer]; 
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer); 

    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth); 
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight); 

    // Multisample support 

    glGenFramebuffersOES(1, &sampleFramebuffer); 
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, sampleFramebuffer); 

    glGenRenderbuffersOES(1, &sampleColorRenderbuffer); 
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, sampleColorRenderbuffer); 
    glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGBA8_OES, backingWidth, backingHeight); 
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, sampleColorRenderbuffer); 

    glGenRenderbuffersOES(1, &sampleDepthRenderbuffer); 
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, sampleDepthRenderbuffer); 
    glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight); 
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, sampleDepthRenderbuffer); 

    // End of multisample support 

    if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) { 
        NSLog(@"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES)); 
        return NO; 
    } 

    return YES; 
} 

Resolving the buffers and taking the snapshot:

    glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer); 
    glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, sampleFramebuffer); 
    glResolveMultisampleFramebufferAPPLE(); 
    [self checkGlError]; 

    //glFinish(); 

    if (capture) 
        captureImage = [self snapshot:self];  

    const GLenum discards[] = {GL_COLOR_ATTACHMENT0_OES,GL_DEPTH_ATTACHMENT_OES}; 
    glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE,2,discards); 

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);  

    [context presentRenderbuffer:GL_RENDERBUFFER_OES];  

The snapshot method (basically copied from Apple's docs):

- (UIImage*)snapshot:(UIView*)eaglview 
{ 

    // Bind the color renderbuffer used to render the OpenGL ES view 
    // If your application only creates a single color renderbuffer which is already bound at this point, 
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers. 
    // Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.  
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer); 


    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight; 
    NSInteger dataLength = width * height * 4; 
    GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte)); 

    // Read pixel data from the framebuffer 
    glPixelStorei(GL_PACK_ALIGNMENT, 4); 
    [self checkGlError]; 
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data); 
    [self checkGlError]; 

    // Create a CGImage with the pixel data 
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel 
    // otherwise, use kCGImageAlphaPremultipliedLast 
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL); 
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB(); 
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast, 
           ref, NULL, true, kCGRenderingIntentDefault); 

    // OpenGL ES measures data in PIXELS 
    // Create a graphics context with the target size measured in POINTS 
    NSInteger widthInPoints, heightInPoints; 
    if (NULL != UIGraphicsBeginImageContextWithOptions) { 
     // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration 
     // Set the scale parameter to your OpenGL ES view's contentScaleFactor 
     // so that you get a high-resolution snapshot when its value is greater than 1.0 
     CGFloat scale = eaglview.contentScaleFactor; 
     widthInPoints = width/scale; 
     heightInPoints = height/scale; 
     UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale); 
    } 
    else { 
     // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext 
     widthInPoints = width; 
     heightInPoints = height; 
     UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints)); 
    } 

    CGContextRef cgcontext = UIGraphicsGetCurrentContext(); 

    // UIKit coordinate system is upside down to GL/Quartz coordinate system 
    // Flip the CGImage by rendering it to the flipped bitmap context 
    // The size of the destination area is measured in POINTS 
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy); 
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref); 

    // Retrieve the UIImage from the current context 
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); 

    UIGraphicsEndImageContext(); 

    // Clean up 
    free(data); 
    CFRelease(ref); 
    CFRelease(colorspace); 
    CGImageRelease(iref); 

    return image; 
} 

Answer


You resolve the multisample buffers as usual, with a glResolveMultisampleFramebufferAPPLE after binding viewFramebuffer as the draw framebuffer and sampleFramebuffer as the read framebuffer. But did you remember to then bind viewFramebuffer as the read framebuffer (glBindFramebuffer(GL_READ_FRAMEBUFFER, viewFramebuffer)) before calling glReadPixels? glReadPixels always reads from the currently bound read framebuffer, and if you don't change this binding after the multisample resolve, it is still the multisampled framebuffer, not the default one.
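As a minimal sketch, the corrected capture sequence would look like this (using the variable names and the snapshot: helper from the question's own code):

    // Resolve the multisampled framebuffer into viewFramebuffer, as before 
    glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer); 
    glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, sampleFramebuffer); 
    glResolveMultisampleFramebufferAPPLE(); 

    // Rebind the resolved framebuffer as the READ framebuffer; without this, 
    // glReadPixels still reads from the (unreadable) multisampled buffer 
    glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, viewFramebuffer); 

    if (capture) 
        captureImage = [self snapshot:self]; // glReadPixels now sees the resolved pixels 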

I also find your glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer) calls rather irritating, since they don't really do anything meaningful: the currently bound renderbuffer is only relevant to functions that operate on renderbuffers (in practice, only glRenderbufferStorage). (Though it may be that ES does something meaningful with it, and the binding is needed for [context presentRenderbuffer:GL_RENDERBUFFER_OES] to work.) Still, perhaps you thought this binding also controls which buffer glReadPixels reads from, but that is not the case: it always reads from the framebuffer currently bound to GL_READ_FRAMEBUFFER.


Thanks for your answer. I'll try it in about 12 hours when I get back to my computer, and if it solves my problem I'll accept your answer. I don't rebind the framebuffer after the multisample resolve, so your answer makes sense. For some reason I assumed glResolveMultisampleFramebufferAPPLE would do this automatically. (I searched for documentation on this method, with no luck.) You may be right about glBindRenderbufferOES, but all the code above is more or less copy-pasted from Apple's examples, so I just wanted to play it safe :) – Kovasandra


This answer solved my problem. Thanks again :) – Kovasandra