iOS - CIDetector gives wrong positions for facial features


I now know the coordinate system is messed up. I have tried reversing the view and the image view - nothing. I tried reversing the coordinates on the features, and still had the same problem. I know it detects the faces, eyes, and mouth, but when I try to place the overlaying boxes from the sample code, they are out of position (to be exact, off-screen to the right). I'm stumped as to why this is happening.

I'll post the code, because I know some of you guys like specifics:

-(void)faceDetector
{
    // Load the picture for face detection
//    UIImageView *image = [[UIImageView alloc] initWithImage:mainImage];
    [self.imageView setImage:mainImage];
    [self.imageView setUserInteractionEnabled:YES];

    // Draw the face detection image
//    [self.view addSubview:self.imageView];

    // Execute the method used to mark the faces in the background
//    [self performSelectorInBackground:@selector(markFaces:) withObject:self.imageView];

    // Flip the image on the y-axis to match the coordinate system used by Core Image
//    [self.imageView setTransform:CGAffineTransformMakeScale(1, -1)];

    // Flip the entire window to make it right side up
//    [self.view setTransform:CGAffineTransformMakeScale(1, -1)];

//    [toolbar setTransform:CGAffineTransformMakeScale(1, -1)];
    [toolbar setFrame:CGRectMake(0, 0, 320, 44)];

    // Execute the method used to mark the faces in the background
    [self performSelectorInBackground:@selector(markFaces:) withObject:_imageView];
//    [self markFaces:self.imageView];
}

-(void)markFaces:(UIImageView *)facePicture
{
    // Draw a CIImage from the previously loaded face detection picture
    CIImage *image = [CIImage imageWithCGImage:facePicture.image.CGImage];

    // Create a face detector - since speed is not an issue,
    // we'll use a high accuracy detector
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                                  forKey:CIDetectorAccuracy]];

//    CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
    CGAffineTransform transform = CGAffineTransformMakeScale(self.view.frame.size.width / mainImage.size.width,
                                                             -self.view.frame.size.height / mainImage.size.height);
    transform = CGAffineTransformTranslate(transform, 0, -self.imageView.bounds.size.height);

    // Create an array containing all of the faces the detector found
    NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6]
                                                             forKey:CIDetectorImageOrientation];
    NSArray *features = [detector featuresInImage:image options:imageOptions];
//    NSArray *features = [detector featuresInImage:image];

    NSLog(@"Marking faces, count: %lu", (unsigned long)[features count]);

    // We'll iterate through every detected face. CIFaceFeature provides the
    // bounds of the entire face, and the coordinates of each eye and the
    // mouth if detected. It also provides BOOLs for the eyes and mouth, so
    // we can check whether they exist.
    for (CIFaceFeature *faceFeature in features)
    {
        // Create a UIView using the bounds of the face
//        UIView *faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
        CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);

        // Width of the face
//        CGFloat faceWidth = faceFeature.bounds.size.width;
        CGFloat faceWidth = faceRect.size.width;

        // Create a UIView using the bounds of the face
        UIView *faceView = [[UIView alloc] initWithFrame:faceRect];

        // Add a border around the newly created UIView
        faceView.layer.borderWidth = 1;
        faceView.layer.borderColor = [[UIColor redColor] CGColor];

        // Add the new view to draw a box around the face
        [self.imageView addSubview:faceView];
        NSLog(@"Face -> x: %f, y: %f, w: %f, h: %f", faceRect.origin.x, faceRect.origin.y, faceRect.size.width, faceRect.size.height);

        if (faceFeature.hasLeftEyePosition)
        {
            // Create a UIView with a size based on the width of the face
            CGPoint leftEye = CGPointApplyAffineTransform(faceFeature.leftEyePosition, transform);
            UIView *leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(leftEye.x - faceWidth * 0.15,
                                                                           leftEye.y - faceWidth * 0.15,
                                                                           faceWidth * 0.3,
                                                                           faceWidth * 0.3)];
            // Change the background color of the eye view
            [leftEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
            // Set the position of the leftEyeView based on the face
            [leftEyeView setCenter:leftEye];
            // Round the corners
            leftEyeView.layer.cornerRadius = faceWidth * 0.15;
            // Add the view to the image view
            [self.imageView addSubview:leftEyeView];
            NSLog(@"Has left eye -> x: %f, y: %f", leftEye.x, leftEye.y);
        }

        if (faceFeature.hasRightEyePosition)
        {
            // Create a UIView with a size based on the width of the face
            CGPoint rightEye = CGPointApplyAffineTransform(faceFeature.rightEyePosition, transform);
            UIView *rightEyeView = [[UIView alloc] initWithFrame:CGRectMake(rightEye.x - faceWidth * 0.15,
                                                                            rightEye.y - faceWidth * 0.15,
                                                                            faceWidth * 0.3,
                                                                            faceWidth * 0.3)];
            // Change the background color of the eye view
            [rightEyeView setBackgroundColor:[[UIColor yellowColor] colorWithAlphaComponent:0.3]];
            // Set the position of the rightEyeView based on the face
            [rightEyeView setCenter:rightEye];
            // Round the corners
            rightEyeView.layer.cornerRadius = faceWidth * 0.15;
            // Add the new view to the image view
            [self.imageView addSubview:rightEyeView];
            NSLog(@"Has right eye -> x: %f, y: %f", rightEye.x, rightEye.y);
        }

//        if (faceFeature.hasMouthPosition)
//        {
//            // Create a UIView with a size based on the width of the face
//            UIView *mouth = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x - faceWidth * 0.2,
//                                                                     faceFeature.mouthPosition.y - faceWidth * 0.2,
//                                                                     faceWidth * 0.4,
//                                                                     faceWidth * 0.4)];
//            // Change the background color of the mouth view to green
//            [mouth setBackgroundColor:[[UIColor greenColor] colorWithAlphaComponent:0.3]];
//            // Set the position of the mouth view based on the face
//            [mouth setCenter:faceFeature.mouthPosition];
//            // Round the corners
//            mouth.layer.cornerRadius = faceWidth * 0.2;
//            // Add the new view to the image view
//            [self.imageView addSubview:mouth];
//        }
    }
}
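For reference, here is roughly what the CGAffineTransform in markFaces: is trying to do, written out as plain math: Core Image reports feature coordinates with the origin at the bottom-left of the image and y increasing upward, while UIKit puts the origin at the top-left, so the rect has to be y-flipped and scaled from image pixels to view points. This is a minimal sketch, not the app's actual code; the function name and the image/view sizes are illustrative.

```c
#include <assert.h>
#include <math.h>

typedef struct { double x, y, w, h; } FaceRect;

/* Map a face rect from Core Image coordinates (bottom-left origin,
   y up) into view coordinates (top-left origin, y down), scaling
   from image pixels to view points. Illustrative sketch only. */
FaceRect ciRectToView(FaceRect r, double imgW, double imgH,
                      double viewW, double viewH) {
    double sx = viewW / imgW;
    double sy = viewH / imgH;
    FaceRect out;
    out.x = r.x * sx;
    out.y = (imgH - r.y - r.h) * sy;  /* flip the y axis */
    out.w = r.w * sx;
    out.h = r.h * sy;
    return out;
}
```

If this mapping is right for the image/view pair in use, the transformed rect can be handed straight to a UIView frame; a mismatch between the sizes used here and the actual displayed image is exactly the kind of thing that pushes boxes off-screen.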

I know the code segment is a little long, but that's the main gist of it. The only other relevant thing is that I have a UIImagePickerController that gives the user the option to pick an existing image or take a new one. The image is then set on the UIImageView on screen to be displayed along with the various boxes and circles, but I've had no luck showing them in the right place :/

Any help is appreciated. Thank you~

Update:

I've added a photo so you guys can have an idea. I've applied the new scaling, and it works a little better, but it's nowhere near what I want to do.

(Screenshot: wrong face and eye placement)

Answer:

Just use the code from Apple's SquareCam sample app. It aligns the square correctly in any orientation for both the front and rear cameras. Interpolate along faceRect for the correct eye and mouth positions. Note: you have to swap the x position with the y position of the face feature. I'm not sure why the swap is needed, but it gives the correct positions.
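The swap described above can be sketched as plain math. This is a guess at why it works: with CIDetectorImageOrientation set to 6 (portrait), the detector reports coordinates in the sensor's rotated frame, so the axes come back exchanged relative to the displayed image. The function name, the exact flip, and the `displayH` parameter are all illustrative assumptions, not code from SquareCam.

```c
#include <assert.h>

typedef struct { double x, y; } FeaturePoint;

/* Hypothetical sketch: exchange the detector's x and y for an
   orientation-6 (portrait) image, then flip into a top-left-origin
   space of height displayH, as the answer above suggests. */
FeaturePoint swapAndFlip(FeaturePoint p, double displayH) {
    FeaturePoint out = { p.y, p.x };  /* swap x and y */
    out.y = displayH - out.y;         /* bottom-left -> top-left origin */
    return out;
}
```

Whether the flip goes before or after the swap depends on the orientation value actually passed to the detector, so treat this as a starting point to experiment with rather than a drop-in fix.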

