As far as I understand, to turn the camera's video feed into images you need to implement the method `- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection`. I have also learned that you cannot call it explicitly; it is only invoked through a delegate and a dispatch queue. This is not clear to me. For example, I don't know how to embed Apple's sample code into a project so that it works and I can experiment with image processing. In a number of projects on GitHub, I have seen this method declared only in a header file (.h), with no implementation anywhere. In short, I can't wrap my head around it.
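To make the question concrete, here is a minimal sketch of how I understand the setup is supposed to look (the class name, queue name, and pixel format are my own placeholders; I am not sure this is correct):

```objc
// ViewController.m — assumed file/class name; any UIViewController would do.
#import <AVFoundation/AVFoundation.h>
#import "ViewController.h"

// The class must conform to <AVCaptureVideoDataOutputSampleBufferDelegate>,
// declared either in the .h or in a class extension like this one.
@interface ViewController () <AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, strong) AVCaptureSession *session;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];

    self.session = [[AVCaptureSession alloc] init];

    // Camera input
    AVCaptureDevice *camera =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input) {
        [self.session addInput:input];
    }

    // Video data output: frames are pushed to the delegate on the given queue
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.videoSettings =
        @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    dispatch_queue_t queue =
        dispatch_queue_create("videoQueue", DISPATCH_QUEUE_SERIAL);
    [output setSampleBufferDelegate:self queue:queue];
    [self.session addOutput:output];

    [self.session startRunning];
}

// AVFoundation calls this on "videoQueue" for every captured frame;
// it is never called directly by my own code.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // ... image processing on pixelBuffer would go here ...
}

@end
```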
Question: How do I use this method correctly? In which file should the queue be created, and in which file should the method itself live? If there is a short sample project somewhere online in which this method actually works, please share a link.