iOS - AVCaptureSession with multiple previews


I have an AVCaptureSession running with an AVCaptureVideoPreviewLayer.

I can see the video, so I know it's working.

However, I'd like to have a collection view and, in each cell, add a preview layer so that each cell shows a preview of the video.

If I try to pass the preview layer into each cell and add it as a sublayer, it removes the layer from the other cells, so it only ever displays in one cell at a time.

Is there another (better) way of doing this?

I ran into the same problem of needing multiple live views displayed at the same time. (The reason a single preview layer only ever shows in one cell is that a CALayer can have only one superlayer; adding it to a new cell removes it from the previous one.) The answer above of using a UIImage was too slow for what I needed. Here are the two solutions I found:

1. CAReplicatorLayer

The first option is to use a CAReplicatorLayer to duplicate the layer automatically. As the docs say, it will automatically create "...a specified number of copies of its sublayers (the source layer), each copy potentially having geometric, temporal and color transformations applied to it."

This is super useful if there isn't a lot of interaction with the live previews besides simple geometric or color transformations (think Photo Booth). I have most often seen the CAReplicatorLayer used as a way to create the 'reflection' effect.
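As an aside, that reflection effect can be sketched with a two-instance replicator. This is only a rough illustration, not part of the original answer: `sourceLayer` stands for whatever layer you want mirrored, and the exact transform depends on your layer's size and anchor point.

```objc
// Hypothetical reflection sketch: one copy, flipped and faded.
CAReplicatorLayer *reflector = [CAReplicatorLayer layer];
reflector.instanceCount = 2;

// Flip the second instance vertically and slide it below the original.
CATransform3D flip = CATransform3DMakeScale(1.0, -1.0, 1.0);
flip = CATransform3DTranslate(flip, 0.0, -2.0 * sourceLayer.bounds.size.height, 0.0);
reflector.instanceTransform = flip;

// Fade the reflected copy.
reflector.instanceAlphaOffset = -0.6;

[reflector addSublayer:sourceLayer];
```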

Here is some sample code to replicate an AVCaptureVideoPreviewLayer:

Init the AVCaptureVideoPreviewLayer

AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[previewLayer setFrame:CGRectMake(0.0, 0.0, self.view.bounds.size.width, self.view.bounds.size.height / 4)];

Init the CAReplicatorLayer and set properties

Note: This will replicate the live preview layer four times.

NSUInteger replicatorInstances = 4;

CAReplicatorLayer *replicatorLayer = [CAReplicatorLayer layer];
replicatorLayer.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height / replicatorInstances);
replicatorLayer.instanceCount = replicatorInstances;
replicatorLayer.instanceTransform = CATransform3DMakeTranslation(0.0, self.view.bounds.size.height / replicatorInstances, 0.0);

Add the layers

Note: From my experience you need to add the layer you want to replicate to the CAReplicatorLayer as a sublayer.

[replicatorLayer addSublayer:previewLayer];
[self.view.layer addSublayer:replicatorLayer];

Downsides

A downside to using CAReplicatorLayer is that it handles all the placement of the layer replications. It will apply the set transformations to each instance, and all replications will be contained within itself. E.g. there would be no way to have a replication of an AVCaptureVideoPreviewLayer in two separate cells.


2. Manually rendering the SampleBuffer

This method, albeit a tad more complex, solves the above mentioned downside of CAReplicatorLayer. By manually rendering the live previews, you are able to render as many views as you want. Granted, performance might be affected.

note: there might other ways render samplebuffer chose opengl because of performance. code inspired , altered cifunhouse.

Here is how I implemented it:

2.1 Contexts and session

Set up the OpenGL and CoreImage contexts

_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

// Note: must be done after the GLKViews are set up
_ciContext = [CIContext contextWithEAGLContext:_eaglContext
                                       options:@{kCIContextWorkingColorSpace : [NSNull null]}];

Dispatch queue

This queue will be used for the session and the delegate.

self.captureSessionQueue = dispatch_queue_create("capture_session_queue", NULL);

Init the AVSession & AVCaptureVideoDataOutput

Note: I have removed all device capability checks to make this more readable.

dispatch_async(self.captureSessionQueue, ^(void) {
    NSError *error = nil;

    // get the input device and also validate the settings
    NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];

    AVCaptureDevice *_videoDevice = nil;
    if (!_videoDevice) {
        _videoDevice = [videoDevices objectAtIndex:0];
    }

    // obtain device input
    AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:_videoDevice error:&error];

    // obtain the preset and validate the preset
    NSString *preset = AVCaptureSessionPresetMedium;

    // CoreImage wants BGRA pixel format
    NSDictionary *outputSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};

    // create the capture session
    self.captureSession = [[AVCaptureSession alloc] init];
    self.captureSession.sessionPreset = preset;
    :

Note: The following code is the 'magic code'. It is where we create and add a DataOutput to the AVSession so we can intercept the camera frames using the delegate. This is the breakthrough I needed to figure out how to solve the problem.

    :
    // create and configure the video data output
    AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    videoDataOutput.videoSettings = outputSettings;
    [videoDataOutput setSampleBufferDelegate:self queue:self.captureSessionQueue];

    // begin configuring the capture session
    [self.captureSession beginConfiguration];

    // connect the video device input and the video data output
    [self.captureSession addInput:videoDeviceInput];
    [self.captureSession addOutput:videoDataOutput];

    [self.captureSession commitConfiguration];

    // then start everything
    [self.captureSession startRunning];
});

2.2 OpenGL views

We are using GLKView to render our live previews. So if you want 4 live previews, then you need 4 GLKViews.

self.livePreviewView = [[GLKView alloc] initWithFrame:self.bounds context:self.eaglContext];
self.livePreviewView.enableSetNeedsDisplay = NO;

Because the native video image from the camera is in UIDeviceOrientationLandscapeLeft (i.e. the home button is on the right), we need to apply a clockwise 90 degree transform so that we can draw the video preview as if we were in a landscape-oriented view; if you're using the front camera and you want to have a mirrored preview (so that the user is seeing themselves as in a mirror), you need to apply an additional horizontal flip (by concatenating CGAffineTransformMakeScale(-1.0, 1.0) to the rotation transform).

self.livePreviewView.transform = CGAffineTransformMakeRotation(M_PI_2);
self.livePreviewView.frame = self.bounds;
[self addSubview:self.livePreviewView];

Bind the frame buffer to obtain the frame buffer width and height. The bounds used by the CIContext when drawing to a GLKView are in pixels (not points), hence the need to read from the frame buffer's width and height.

[self.livePreviewView bindDrawable];

In addition, since we will be accessing the bounds from another queue (_captureSessionQueue), we want to obtain this piece of information up front so that we won't be accessing _videoPreviewView's properties from another thread/queue.

_videoPreviewViewBounds = CGRectZero;
_videoPreviewViewBounds.size.width = _videoPreviewView.drawableWidth;
_videoPreviewViewBounds.size.height = _videoPreviewView.drawableHeight;

dispatch_async(dispatch_get_main_queue(), ^(void) {
    CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI_2);

    // *Horizontally flip here, if using the front camera.*

    self.livePreviewView.transform = transform;
    self.livePreviewView.frame = self.bounds;
});

Note: If you are using the front camera you can horizontally flip the live preview like this:

transform = CGAffineTransformConcat(transform, CGAffineTransformMakeScale(-1.0, 1.0));

2.3 Delegate implementation

After we have the contexts, sessions, and GLKViews set up, we can render to our views from the AVCaptureVideoDataOutputSampleBufferDelegate method captureOutput:didOutputSampleBuffer:fromConnection:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);

    // update the video dimensions information
    self.currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDesc);

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:nil];

    CGRect sourceExtent = sourceImage.extent;
    CGFloat sourceAspect = sourceExtent.size.width / sourceExtent.size.height;

You will need to have a reference to each GLKView and its videoPreviewViewBounds. For easiness, I will assume they are both contained in a UICollectionViewCell. You will need to alter this for your own use-case.

    for (CustomLivePreviewCell *cell in self.livePreviewCells) {
        CGFloat previewAspect = cell.videoPreviewViewBounds.size.width / cell.videoPreviewViewBounds.size.height;

        // to maintain the aspect ratio of the screen size, we clip the video image
        CGRect drawRect = sourceExtent;
        if (sourceAspect > previewAspect) {
            // use the full height of the video image, and center crop the width
            drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0;
            drawRect.size.width = drawRect.size.height * previewAspect;
        } else {
            // use the full width of the video image, and center crop the height
            drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0;
            drawRect.size.height = drawRect.size.width / previewAspect;
        }

        [cell.livePreviewView bindDrawable];

        if (_eaglContext != [EAGLContext currentContext]) {
            [EAGLContext setCurrentContext:_eaglContext];
        }

        // clear the EAGL view to grey
        glClearColor(0.5, 0.5, 0.5, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);

        // set the blend mode to "source over" so that CI will use that
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

        if (sourceImage) {
            [_ciContext drawImage:sourceImage inRect:cell.videoPreviewViewBounds fromRect:drawRect];
        }

        [cell.livePreviewView display];
    }
}
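The CustomLivePreviewCell referenced in the loop isn't shown in the original code. A minimal sketch of what it might look like is below; the class name comes from the loop above, but the `configureWithContext:` method and everything inside it are assumptions you would adapt to your own layout (it mirrors the GLKView setup from section 2.2):

```objc
// CustomLivePreviewCell.h -- hypothetical cell holding one GLKView preview
@interface CustomLivePreviewCell : UICollectionViewCell
@property (nonatomic, strong) GLKView *livePreviewView;
@property (nonatomic, assign) CGRect videoPreviewViewBounds;
@end

// CustomLivePreviewCell.m (sketch)
@implementation CustomLivePreviewCell

// `context` would be the shared EAGLContext created in section 2.1;
// how you inject it (property, initializer, singleton) is up to you.
- (void)configureWithContext:(EAGLContext *)context
{
    self.livePreviewView = [[GLKView alloc] initWithFrame:self.contentView.bounds context:context];
    self.livePreviewView.enableSetNeedsDisplay = NO;
    self.livePreviewView.transform = CGAffineTransformMakeRotation(M_PI_2);
    self.livePreviewView.frame = self.contentView.bounds;
    [self.contentView addSubview:self.livePreviewView];

    // cache the drawable size in pixels so the capture queue
    // never touches view properties directly
    [self.livePreviewView bindDrawable];
    _videoPreviewViewBounds = CGRectMake(0, 0,
                                         self.livePreviewView.drawableWidth,
                                         self.livePreviewView.drawableHeight);
}

@end
```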

This solution lets you have as many live previews as you want by using OpenGL to render the buffer of images received from the AVCaptureVideoDataOutputSampleBufferDelegate.
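The center-crop arithmetic inside the delegate loop is independent of any Apple API; isolated, it looks like the following plain-C sketch (the `aspectFillCrop` helper is hypothetical, written here only to make the geometry easy to check):

```c
#include <assert.h>
#include <math.h>

typedef struct { double x, y, w, h; } Rect;

/* Compute the sub-rect of `source` that, when drawn into a view with
 * aspect ratio `previewAspect` (width / height), fills it without
 * distortion -- the same center-crop logic as in the delegate above. */
static Rect aspectFillCrop(Rect source, double previewAspect)
{
    Rect draw = source;
    double sourceAspect = source.w / source.h;
    if (sourceAspect > previewAspect) {
        /* use the full height, center-crop the width */
        draw.x += (draw.w - draw.h * previewAspect) / 2.0;
        draw.w = draw.h * previewAspect;
    } else {
        /* use the full width, center-crop the height */
        draw.y += (draw.h - draw.w / previewAspect) / 2.0;
        draw.h = draw.w / previewAspect;
    }
    return draw;
}
```

For a 640x480 frame drawn into a square preview, this crops 80 px off each side of the width and keeps the full height.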

3. Sample code

Here is a GitHub project I threw together with both solutions: https://github.com/johnnyslagle/multiple-camera-feeds

