I am developing an app that connects to a camera over a data cable using ImageCaptureCore. The first time I enter the page I can detect the camera device, but when I exit the page and enter it again, I can no longer detect the connected camera.
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    self.view.backgroundColor = [UIColor whiteColor];
    [self addImageCaptureCore];
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1.0 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        [self checkCameraConnection];
    });
}

- (void)checkCameraConnection {
    if (@available(iOS 13.0, *)) {
        NSArray<ICDevice *> *connectedDevices = self.browser.devices;
        if (connectedDevices.count > 0) {
            NSLog(@"Camera is connected");
        } else {
            NSLog(@"Camera is not connected");
        }
    } else {
        // Fallback on earlier versions
    }
}

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    if (@available(iOS 13.0, *)) {
        if (self.cameraDevice) {
            if (self.cameraDevice.hasOpenSession) {
                [self.cameraDevice requestCloseSession];
                dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
                    [self.browser stop];
                    self.browser.delegate = nil;
                    self.browser = nil;
                });
            } else {
                dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
                    [self.browser stop];
                    self.browser.delegate = nil;
                    self.browser = nil;
                });
            }
        }
    } else {
        // Fallback on earlier versions
    }
}

- (void)addImageCaptureCore {
    if (@available(iOS 13.0, *)) {
        ICDeviceBrowser *browser = [[ICDeviceBrowser alloc] init];
        browser.delegate = self;
        [browser start];
        self.browser = browser;
    } else {
    }
}

#pragma mark - ICDeviceBrowserDelegate
- (void)deviceBrowser:(ICDeviceBrowser *)browser didAddDevice:(ICDevice *)device moreComing:(BOOL)moreComing API_AVAILABLE(ios(13.0)) {
    NSLog(@"Device name = %@", device.name);
    if ([device isKindOfClass:[ICCameraDevice class]]) {
        if ([device.capabilities containsObject:ICCameraDeviceCanAcceptPTPCommands]) {
            ICCameraDevice *cameraDevice = (ICCameraDevice *)device;
            cameraDevice.delegate = self;
            [cameraDevice requestOpenSession];
            self.cameraDevice = cameraDevice;
        }
    }
}

- (void)deviceBrowser:(ICDeviceBrowser *)browser didRemoveDevice:(ICDevice *)device moreGoing:(BOOL)moreGoing API_AVAILABLE(ios(13.0)) {
    if (self.cameraDevice) {
        if (self.cameraDevice.hasOpenSession) {
            [self.cameraDevice requestCloseSession];
            self.cameraDevice.delegate = nil;
            self.cameraDevice = nil;
        } else {
            self.cameraDevice.delegate = nil;
            self.cameraDevice = nil;
        }
    }
}

#pragma mark - ICCameraDeviceDelegate
- (void)cameraDevice:(ICCameraDevice *)camera didAddItems:(NSArray<ICCameraItem *> *)items API_AVAILABLE(ios(13.0)) {
    if (items.count > 0) {
        ICCameraItem *latestItem = items.lastObject;
        NSLog(@"name = %@", latestItem.name);
    }
}

#pragma mark - ICDeviceDelegate
- (void)device:(ICDevice *)device didOpenSessionWithError:(NSError *_Nullable)error API_AVAILABLE(ios(13.0)) {
    if (error) {
        NSLog(@"Failed to open session %@", error.localizedDescription);
    } else {
        NSLog(@"open session success");
    }
}

- (void)device:(ICDevice *)device didCloseSessionWithError:(NSError *_Nullable)error API_AVAILABLE(ios(13.0)) {
    if (error) {
        NSLog(@"close session error = %@", error.localizedDescription);
    } else {
        NSLog(@"didCloseSession");
    }
}

- (void)didRemoveDevice:(ICDevice *)device {
}
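For comparison, a fully symmetric setup/teardown (create and start the browser every time the page appears, tear everything down synchronously when it disappears) would look roughly like the Swift sketch below; the class and the empty delegate bodies are illustrative only, not my actual code.

import UIKit
import ImageCaptureCore

// Sketch only: start the device browser on every appearance and stop it on every
// disappearance, instead of creating it once in viewDidLoad and tearing it down
// with a delayed block.
@available(iOS 13.0, *)
class CameraPageViewController: UIViewController, ICDeviceBrowserDelegate {
    private var browser: ICDeviceBrowser?
    private var cameraDevice: ICCameraDevice?

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let browser = ICDeviceBrowser()
        browser.delegate = self
        browser.start()
        self.browser = browser
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        if cameraDevice?.hasOpenSession == true {
            cameraDevice?.requestCloseSession()
        }
        cameraDevice?.delegate = nil
        cameraDevice = nil
        browser?.stop()
        browser?.delegate = nil
        browser = nil
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didAdd device: ICDevice, moreComing: Bool) {
        // Same device handling as in the Objective-C code above.
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didRemove device: ICDevice, moreGoing: Bool) {
        // Same cleanup as in the Objective-C code above.
    }
}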
Image I/O: Read and write most image file formats, manage color, and access image metadata using Image I/O.
Posts under the Image I/O tag:
I have trained a model to classify some symbols using Create ML.
In my app I am using VNImageRequestHandler and VNCoreMLRequest to classify image data.
If I use a CVPixelBuffer obtained from an AVCaptureSession then the classifier runs as I would expect. If I point it at the symbols it will work fairly accurately, so I know the model is trained fairly correctly and works in my app.
If I try to use a cgImage that is obtained by cropping a section out of a larger image (from the gallery), then the classifier does not work. It always seems to return the same result (although the confidence is not exactly 1.0 and varies for each image, it is within a few decimal places of it, e.g. 0.9999).
If I pause the app when I have the cropped image and use the debugger to obtain the cropped image (via the little eye icon and then open in preview), then drop the image into the Preview section of the MLModel file or in Create ML, the model correctly classifies the image.
If I scale the cropped image to the same size I get from my camera, and convert the cgImage to a CVPixelBuffer with the same size and colour space as the camera (1504 x 1128, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange), then I get some difference in output. It's still not accurate, but it returns different results if I specify the 'centerCrop' or 'scaleFit' options, so I know that 'something' is happening, it's just not the correct thing.
I was under the impression that passing a cgImage to the VNImageRequestHandler would perform the necessary conversions, but experimentation shows this is not the case. However, when using the preview tool on the model or in Create ML, this conversion is obviously being done behind the scenes, because the cropped part is detected.
What am I doing wrong?
tl;dr:
My model works, as backed up by using video input directly and also by dropping cropped images into the preview sections.
Passing the cropped images directly to the VNImageRequestHandler does not work.
Modifying the cropped images can produce different results, but I cannot see what I should be doing to get reliable results.
I'd like my app to behave the same way the preview behaves: I give it a cropped part of an image, it does some processing, it goes to the classifier, and it returns the same result as in Create ML.
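Roughly, the Vision call I'm making looks like this (the model and the cropped image are placeholders); I suspect the crop/scale option or the orientation is where it differs from the Create ML preview:

import Vision
import CoreML

// Sketch of the classification call; `classifierModel` and `croppedCGImage` are placeholders.
func classify(_ croppedCGImage: CGImage, with classifierModel: MLModel) throws {
    let vnModel = try VNCoreMLModel(for: classifierModel)
    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        print(results.first?.identifier ?? "no result", results.first?.confidence ?? 0)
    }
    // Vision scales/crops the input to the model's expected size according to this option;
    // it should match what was used at training time (.centerCrop is the default).
    request.imageCropAndScaleOption = .centerCrop
    let handler = VNImageRequestHandler(cgImage: croppedCGImage, orientation: .up, options: [:])
    try handler.perform([request])
}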
I'm working on a very simple app where I need to visualize an image on the screen of an iPhone. However, the image has some special properties: it's a 16-bit, yuv422_yuy2 encoded image, and I already have all the raw bytes saved in a Data object.
After googling for a long time, I still have not figured out the correct way to do this. My current understanding is to first create a CVPixelBuffer to properly represent the encoding information, then convert the CVPixelBuffer to a UIImage. The following is my current implementation.
public func YUV422YUY2ToUIImage(data: Data, height: Int, width: Int, bytesPerRow: Int) -> UIImage {
    var data = data
    return data.withUnsafeMutableBytes { rawPointer in
        let baseAddress = rawPointer.baseAddress!
        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                     width,
                                     height,
                                     kCVPixelFormatType_422YpCbCr16,
                                     baseAddress,
                                     bytesPerRow,
                                     nil,
                                     nil,
                                     nil,
                                     &pixelBuffer)
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer!)
        return UIImage(ciImage: ciImage)
    }
}
However, when I execute the code, I get the following error:
-[CIImage initWithCVPixelBuffer:options:] failed because its pixel format v216 is not supported.
So it seems CIImage is unhappy. I think I need to convert the encoding from yuv422_yuy2 to something like plain ARGB, but after a long time googling I didn't find a way to do that. The closest function I can find is https://developer.apple.com/documentation/accelerate/1533015-vimageconvert_422cbypcryp16toarg, but the function is too complex for me to understand how to use it.
Any help is appreciated. Thank you!
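From what I can tell, the 8-bit sibling of that function (vImageConvert_422YpCbYpCr8ToARGB8888) is the one that matches YUY2 data, and it would be used roughly as in the sketch below. This assumes the bytes are 8-bit-per-component, video-range, BT.601 YUY2 ('yuvs'); the exact constant names should be double-checked against the Accelerate headers.

import Accelerate
import UIKit

// Sketch: convert packed 8-bit YUY2 ('yuvs') bytes to an ARGB8888 image with vImage.
// Assumes BT.601, video-range data; adjust the matrix/pixel range if that is wrong.
func yuy2ToImage(data: Data, width: Int, height: Int, bytesPerRow: Int) -> UIImage? {
    // Describe the YpCbCr -> ARGB conversion once.
    var pixelRange = vImage_YpCbCrPixelRange(Yp_bias: 16, CbCr_bias: 128,
                                             YpRangeMax: 235, CbCrRangeMax: 240,
                                             YpMax: 235, YpMin: 16,
                                             CbCrMax: 240, CbCrMin: 16)
    var conversion = vImage_YpCbCrToARGB()
    guard vImageConvert_YpCbCrToARGB_GenerateConversion(
        kvImage_YpCbCrToARGBMatrix_ITU_R_601_4,
        &pixelRange,
        &conversion,
        kvImage422YpCbYpCr8,          // 'yuvs' / YUY2 ordering
        kvImageARGB8888,
        vImage_Flags(kvImageNoFlags)) == kvImageNoError else { return nil }

    var data = data
    return data.withUnsafeMutableBytes { (raw: UnsafeMutableRawBufferPointer) -> UIImage? in
        var src = vImage_Buffer(data: raw.baseAddress,
                                height: vImagePixelCount(height),
                                width: vImagePixelCount(width),
                                rowBytes: bytesPerRow)
        guard var dest = try? vImage_Buffer(width: width, height: height, bitsPerPixel: 32) else { return nil }
        defer { dest.free() }

        let permuteMap: [UInt8] = [0, 1, 2, 3]   // keep ARGB channel order
        guard vImageConvert_422YpCbYpCr8ToARGB8888(&src, &dest, &conversion, permuteMap,
                                                   255, vImage_Flags(kvImageNoFlags)) == kvImageNoError
        else { return nil }

        guard let format = vImage_CGImageFormat(bitsPerComponent: 8, bitsPerPixel: 32,
                                                colorSpace: CGColorSpaceCreateDeviceRGB(),
                                                bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.first.rawValue)),
              let cgImage = try? dest.createCGImage(format: format) else { return nil }
        return UIImage(cgImage: cgImage)
    }
}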
How can I extract an object from a picture, or remove the background behind an object, just like creating stickers in the Photos app? Is there an official model or library for this, other than using some website's API? (DeepLabV3.mlmodel cannot infer what I need.)
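One possible route, if iOS 17 can be required, is Vision's foreground instance mask request, which appears to be the API behind subject lifting; a minimal sketch, with `sourceImage` as a placeholder CGImage:

import Vision
import CoreVideo

// Sketch (iOS 17+): lift the foreground subject out of `sourceImage` using Vision.
// Error handling is minimal; `sourceImage` is a placeholder.
func extractSubject(from sourceImage: CGImage) throws -> CVPixelBuffer? {
    let request = VNGenerateForegroundInstanceMaskRequest()
    let handler = VNImageRequestHandler(cgImage: sourceImage, options: [:])
    try handler.perform([request])
    guard let observation = request.results?.first else { return nil }
    // Returns the subject with the background removed, cropped to the subject's extent.
    return try observation.generateMaskedImage(ofInstances: observation.allInstances,
                                               from: handler,
                                               croppedToInstancesExtent: true)
}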
Using the screencapture CLI on macOS Sonoma 14.0 (23A344) results in a 72dpi image file, no matter if it was captured on a retina display or not.
For example, using
screencapture -i ~/Desktop/test.png in Terminal lets me create a selective screenshot, but the resulting file does not contain any DPI metadata (checked using mdls ~/Desktop/test.png), nor does the image itself have the correct DPI information (should be 144, but it's always 72; checked using Preview.app).
I noticed a (new?) flag option, -r, for which the documentation states:
-r Do not add screen dpi meta data to captured file.
Is that flag somehow automatically set? Setting it myself makes no difference and obviously results in a no-dpi-in-metadata and wrong-dpi-in-image file.
The only two ways I got the correct DPI information in a resulting image file were by using the default options (forced by -p): screencapture -i -p, and by making the capture go to the clipboard: screencapture -i -c. Sadly, I can't use those in my case.
Feedback filed: FB13208235
I'd appreciate any pointers,
Matthias
I want to use Core ML to process video data. The ML model will take multiple frames as input. How should I get multiple frames on iOS and process them?
Thanks in advance for any suggestions.
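One rough way to do the frame-gathering part is to read the video with AVAssetReader, collect a sliding window of CVPixelBuffers, and hand each window to the model. The sketch below assumes a file URL and leaves the actual model call as a placeholder, since the input shape depends on the model.

import AVFoundation
import CoreVideo

// Sketch: read frames from a video file as CVPixelBuffers, collect a sliding
// window of N frames, and hand each window to the model (placeholder call).
func processVideo(at url: URL, windowSize: Int = 8) throws {
    let asset = AVAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return }
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])
    reader.add(output)
    reader.startReading()

    var window: [CVPixelBuffer] = []
    while let sampleBuffer = output.copyNextSampleBuffer() {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { continue }
        window.append(pixelBuffer)
        if window.count == windowSize {
            // runModel(on: window)   // placeholder: feed the window of frames to the Core ML model
            window.removeFirst()
        }
    }
}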
I have a 3D image, but when I insert it into my project it comes in without colors. Does anyone know why?
I take a picture using the iPhone's camera. The resolution of the captured image is 3024 x 4032. I then have to apply a watermark to this image. After a bunch of trial and error, the method I decided to use was taking a snapshot of a watermark UIView and drawing that over the image, like so:
// Create the watermarked photo.
let result: UIImage = UIGraphicsImageRenderer(size: image.size).image(actions: { _ in
    image.draw(in: .init(origin: .zero, size: image.size))
    let watermark: Watermark = .init(
        size: image.size,
        scaleFactor: image.size.smallest / self.frame.size.smallest
    )
    watermark.drawHierarchy(in: .init(origin: .zero, size: image.size), afterScreenUpdates: true)
})
Then, with the final image — because the client wanted it to have a filename when viewed from within the Photos app and exported from it, and again after much trial and error — I save it to a file in a temporary directory, and then save it to the user's photo library using that file. The difference compared to saving the image directly is that when it is saved from the file, the filename is used as the filename within the Photos app; otherwise it just gets a default photo name generated by Apple.
The problem is that in the image saving code I'm getting the following error:
[Metal] 9072 by 12096 iosurface is too large for GPU
And when I view the saved photo it's basically just a completely black image. This problem only started when I changed the AVCaptureSession preset to .photo. Before that, there were no errors.
Now, the worst problem is that the app just completely crashes on drawing of the watermark view in the first place. When using .photo the resolution is significantly higher, so the image size is larger, so the watermark size has to be commensurately larger as well. iOS appears to be okay with the size of the watermark UIView. However, when I try to draw it over the image the app crashes with this message from Xcode:
So there's that problem, but I figured that could be resolved by taking a more manual approach to drawing the watermark rather than using a UIView snapshot, so it's not the most pressing problem. What is, is that even after the drawing code is commented out, I still get the iosurface is too large error.
Here's the code that saves the image to the file and then to the Photos library:
extension UIImage {
    /// Save us with the given name to the user's photo album.
    /// - Parameters:
    ///   - filename: The filename to be used for the saved photo. Behavior is undefined if the filename contains characters other than what is represented by this regular expression [A-Za-z0-9-_]. A decimal point for the file extension is permitted.
    ///   - location: A GPS location to save with the photo.
    fileprivate func save(_ filename: String, _ location: Optional<Coordinates>) throws {
        // Create a path to a temporary directory. Adding filenames to the Photos app form of images is accomplished by first creating an image file on the file system, saving the photo using the URL to that file, and then deleting that file on the file system.
        // A documented way of adding filenames to photos saved to Photos was never found.
        // Furthermore, we save everything to a `tmp` directory as if we just tried deleting individual photos after they were saved, and the deletion failed, it would be a little more tricky setting up logic to ensure that the undeleted files are eventually
        // cleaned up. But by using a `tmp` directory, we can save all temporary photos to it, and delete the entire directory following each taken picture.
        guard
            let tmpUrl: URL = try {
                guard let documentsDirUrl = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first else {
                    throw GeneralError("Failed to create URL to documents directory.")
                }
                let url: Optional<URL> = .init(string: documentsDirUrl + "/tmp/")
                return url
            }()
        else {
            throw GeneralError("Failed to create URL to temporary directory.")
        }
        // A path to the image file.
        let filePath: String = try {
            // Reduce the likelihood of photos taken in quick succession from overwriting each other.
            let collisionResistantPath: String = "\(tmpUrl.path(percentEncoded: false))\(UUID())/"
            // Make sure all directories required by the path exist before trying to write to it.
            try FileManager.default.createDirectory(atPath: collisionResistantPath, withIntermediateDirectories: true, attributes: nil)
            // Done.
            return collisionResistantPath + filename
        }()
        // Create `CFURL` analogue of file path.
        guard let cfPath: CFURL = CFURLCreateWithFileSystemPath(nil, filePath as CFString, CFURLPathStyle.cfurlposixPathStyle, false) else {
            throw GeneralError("Failed to create `CFURL` analogue of file path.")
        }
        // Create image destination object.
        //
        // You can change your exif type here.
        // This is a note from original author. Not quite exactly sure what they mean by it. Link in method documentation can be used to refer back to the original context.
        guard let destination = CGImageDestinationCreateWithURL(cfPath, UTType.jpeg.identifier as CFString, 1, nil) else {
            throw GeneralError("Failed to create `CGImageDestination` from file url.")
        }
        // Metadata properties.
        let properties: CFDictionary = {
            // Place your metadata here.
            // Keep in mind that metadata follows a standard. You can not use custom property names here.
            let tiffProperties: Dictionary<String, Any> = [:]
            return [
                kCGImagePropertyExifDictionary as String: tiffProperties
            ] as CFDictionary
        }()
        // Create image file.
        guard let cgImage = self.cgImage else {
            throw GeneralError("Failed to retrieve `CGImage` analogue of `UIImage`.")
        }
        CGImageDestinationAddImage(destination, cgImage, properties)
        CGImageDestinationFinalize(destination)
        // Save to the photo library.
        PHPhotoLibrary.shared().performChanges({
            guard let creationRequest: PHAssetChangeRequest = .creationRequestForAssetFromImage(atFileURL: URL(fileURLWithPath: filePath)) else {
                return
            }
            // Add metadata to the photo.
            creationRequest.creationDate = .init()
            if let location = location {
                creationRequest.location = .init(latitude: location.latitude, longitude: location.longitude)
            }
        }, completionHandler: { _, _ in
            try? FileManager.default.removeItem(atPath: tmpUrl.absoluteString)
        })
    }
}
If anyone can provide some insight as to what's causing the iosurface is too large error and what can be done to resolve it, that'd be awesome.
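For what it's worth, the "more manual approach" mentioned above could look roughly like the sketch below: composite with a CPU-backed CGContext and draw the watermark as a plain UIImage instead of a view snapshot. Whether this actually avoids the iosurface limit is unverified, and the sketch ignores UIImage orientation for brevity.

import UIKit

// Sketch: composite a watermark image over the photo with a bitmap CGContext.
// `watermarkImage` is a placeholder for a pre-rendered watermark.
func composite(photo: UIImage, watermarkImage: UIImage) -> UIImage? {
    guard let cgPhoto = photo.cgImage else { return nil }
    let width = cgPhoto.width
    let height = cgPhoto.height
    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return nil }
    let rect = CGRect(x: 0, y: 0, width: width, height: height)
    context.draw(cgPhoto, in: rect)
    if let cgWatermark = watermarkImage.cgImage {
        context.draw(cgWatermark, in: rect)
    }
    guard let composited = context.makeImage() else { return nil }
    return UIImage(cgImage: composited, scale: photo.scale, orientation: photo.imageOrientation)
}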
hdiutil bug?
When making a DMG image for the whole content of user1 profile (meaning using srcFolder = /Users/user1) using hdiutil, the program fails indicating:
/Users/user1/Library/VoiceTrigger: Operation not permitted hdiutil: create failed - Operation not permitted
The complete command used was: "sudo hdiutil create -srcfolder /Users/user1 -skipunreadable -format UDZO /Volumes/testdmg/test.dmg"
And, of course, the user had local admin rights. I was using Sonoma 14.2.1 and a MacBook Pro (Intel T2)
What I would have expected, assuming that /VoiceTrigger cannot be copied for whatever reason, would be for hdiutil to skip that file or folder and continue the process, and then, at the end, produce a log listing the files/folders not included and the reason for their exclusion. The fact that hdiutil just ended immediately looks to me like a bug. Or what else could explain the problem described?
Hi Team,
I would like to write code that takes whatever image is in my ***** in Xcode and, whatever the size of the image, resizes it to fit in a circle.
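For illustration, a minimal sketch of one way to do this: scale the image to fill a circle of an example diameter and clip to it.

import UIKit

// Sketch: scale an arbitrary UIImage to fill a square of `diameter` points
// and clip it to a circle. The 100-point default is just an example.
func circularImage(from image: UIImage, diameter: CGFloat = 100) -> UIImage {
    let targetSize = CGSize(width: diameter, height: diameter)
    return UIGraphicsImageRenderer(size: targetSize).image { _ in
        UIBezierPath(ovalIn: CGRect(origin: .zero, size: targetSize)).addClip()
        // Aspect-fill: scale so the shorter side matches the circle's diameter.
        let scale = max(diameter / image.size.width, diameter / image.size.height)
        let scaledSize = CGSize(width: image.size.width * scale, height: image.size.height * scale)
        let origin = CGPoint(x: (diameter - scaledSize.width) / 2, y: (diameter - scaledSize.height) / 2)
        image.draw(in: CGRect(origin: origin, size: scaledSize))
    }
}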
I use a data cable to connect my Nikon camera to my iPhone. In my project I use the ImageCaptureCore framework. I can now read the photos on the camera's memory card, but when I press the camera's shutter to take a picture, the camera does not respond, even though the connection between the camera and the phone is normal; the camera screen just shows a picture of a laptop. I don't know why that is. I hope someone can help me.
I found that the app reported a crash from a pure virtual function call, which I could not reproduce.
A third-party library is referenced:
https://github.com/lincf0912/LFPhotoBrowser
It implements smearing, blurring, and mosaic processing of images.
Crash code:
if (![LFSmearBrush smearBrushCache]) {
    [_edit_toolBar setSplashWait:YES index:LFSplashStateType_Smear];
    CGSize canvasSize = AVMakeRectWithAspectRatioInsideRect(self.editImage.size, _EditingView.bounds).size;
    [LFSmearBrush loadBrushImage:self.editImage canvasSize:canvasSize useCache:YES complete:^(BOOL success) {
        [weakToolBar setSplashWait:NO index:LFSplashStateType_Smear];
    }];
}
- (UIImage *)LFBB_patternGaussianImageWithSize:(CGSize)size orientation:(CGImagePropertyOrientation)orientation filterHandler:(CIFilter *(^ _Nullable)(CIImage *ciimage))filterHandler
{
    CIContext *context = LFBrush_CIContext;
    NSAssert(context != nil, @"This method must be called using the LFBrush class.");
    CIImage *midImage = [CIImage imageWithCGImage:self.CGImage];
    midImage = [midImage imageByApplyingTransform:[self LFBB_preferredTransform]];
    midImage = [midImage imageByApplyingTransform:CGAffineTransformMakeScale(size.width / midImage.extent.size.width, size.height / midImage.extent.size.height)];
    if (orientation > 0 && orientation < 9) {
        midImage = [midImage imageByApplyingOrientation:orientation];
    }
    // Start processing the image.
    CIImage *result = midImage;
    if (filterHandler) {
        CIFilter *filter = filterHandler(midImage);
        if (filter) {
            result = filter.outputImage;
        }
    }
    CGImageRef outImage = [context createCGImage:result fromRect:[midImage extent]];
    UIImage *image = [UIImage imageWithCGImage:outImage];
    CGImageRelease(outImage);
    return image;
}
This line triggers the crash:
CGImageRef outImage = [context createCGImage:result fromRect:[midImage extent]];
b9c90c7bbf8940e5aabed7f3f62a65a2-symbolicated.crash
- (void)cameraDevice:(ICCameraDevice *)camera
  didReceiveMetadata:(NSDictionary *_Nullable)metadata
             forItem:(ICCameraItem *)item
               error:(NSError *_Nullable)error API_AVAILABLE(ios(13.0)) {
    NSLog(@"metadata = %@", metadata);
    if (item) {
        ICCameraFile *file = (ICCameraFile *)item;
        NSURL *downloadsDirectoryURL = [[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask].firstObject;
        downloadsDirectoryURL = [downloadsDirectoryURL URLByAppendingPathComponent:@"Downloads"];
        NSDictionary *downloadOptions = @{ ICDownloadsDirectoryURL: downloadsDirectoryURL,
                                           ICSaveAsFilename: item.name,
                                           ICOverwrite: @YES,
                                           ICDownloadSidecarFiles: @YES };
        [self.cameraDevice requestDownloadFile:file options:downloadOptions downloadDelegate:self didDownloadSelector:@selector(didDownloadFile:error:options:contextInfo:) contextInfo:nil];
    }
}
- (void)didDownloadFile:(ICCameraFile *)file
                  error:(NSError *_Nullable)error
                options:(NSDictionary<NSString *, id> *)options
            contextInfo:(void *_Nullable)contextInfo API_AVAILABLE(ios(13.0)) {
    if (error) {
        NSLog(@"Download failed with error: %@", error);
    } else {
        NSLog(@"Download completed for file: %@", file);
    }
}
I don't know what's wrong, and I don't know if this is the right way to get the pictures from the camera. I hope someone can help me.
Is it impossible to write tEXt chunk data to PNG on iOS?
I successfully read the chunk data, modified it to add tEXt data to the chunk, and saved it as an image in the Gallery.
But the tEXt data keeps disappearing when I read the chunk data back from the image in the Gallery.
Does iOS prevent tEXt data from being preserved when saving an image to the Gallery?
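For reference, a minimal sketch of writing PNG text metadata through Image I/O looks like the code below (the description string is an example value); whether such metadata survives a round trip through the Photos library is the open question.

import Foundation
import ImageIO
import UniformTypeIdentifiers

// Sketch: re-encode a CGImage as PNG with a text entry in the PNG metadata dictionary.
func pngData(from image: CGImage) -> Data? {
    let data = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(data as CFMutableData, UTType.png.identifier as CFString, 1, nil) else {
        return nil
    }
    let properties: [CFString: Any] = [
        kCGImagePropertyPNGDictionary: [
            kCGImagePropertyPNGDescription: "example text stored in the PNG"
        ]
    ]
    CGImageDestinationAddImage(destination, image, properties as CFDictionary)
    guard CGImageDestinationFinalize(destination) else { return nil }
    return data as Data
}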
I have applied some filters (like applyingGaussianBlur) to a CIImage that was converted from a UIImage. The resulting image data gets corrupted only on lower-end devices. What could be the reason?
I want to read metadata of image files such as copyright, author etc.
I did a web search and the closest thing is CGImageSourceCopyPropertiesAtIndex:
- (void)tableViewSelectionDidChange:(NSNotification *)notif {
    NSDictionary *metadata = [[NSDictionary alloc] init];
    // get selected item
    NSString *rowData = [fileList objectAtIndex:[tblFileList selectedRow]];
    // set path to file selected
    NSString *filePath = [NSString stringWithFormat:@"%@/%@", objPath, rowData];
    // declare a file manager
    NSFileManager *fileManager = [[NSFileManager alloc] init];
    // check to see if the file exists
    if ([fileManager fileExistsAtPath:filePath] == YES) {
        // escape all the garbage in the string
        NSString *percentEscapedString = (NSString *)CFURLCreateStringByAddingPercentEscapes(NULL, (CFStringRef)filePath, NULL, NULL, kCFStringEncodingUTF8);
        // convert path to NSURL
        NSURL *filePathURL = [[NSURL alloc] initFileURLWithPath:percentEscapedString];
        NSError *error;
        NSLog(@"%d", [filePathURL checkResourceIsReachableAndReturnError:&error]);
        // declare a CG source reference
        CGImageSourceRef sourceRef;
        // set the CG source reference to the image by passing its url path
        sourceRef = CGImageSourceCreateWithURL((CFURLRef)filePathURL, NULL);
        // set a dictionary with the image metadata from the source reference
        metadata = (NSDictionary *)CGImageSourceCopyPropertiesAtIndex(sourceRef, 0, NULL);
        NSLog(@"%@", metadata);
        [filePathURL release];
    } else {
        [self showAlert:@"I cannot find this file."];
    }
    [fileManager release];
}
Is there any better or easy approach than this?
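For comparison, the same lookup is fairly compact in modern Swift; a sketch, with the file URL as a placeholder:

import Foundation
import ImageIO

// Sketch: read image properties (EXIF, TIFF, IPTC, etc.) for a file URL.
func imageProperties(at url: URL) -> [CFString: Any]? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any] else {
        return nil
    }
    // Author/copyright typically live in the TIFF or IPTC sub-dictionaries.
    let tiff = properties[kCGImagePropertyTIFFDictionary] as? [CFString: Any]
    print(tiff?[kCGImagePropertyTIFFArtist] ?? "no artist", tiff?[kCGImagePropertyTIFFCopyright] ?? "no copyright")
    return properties
}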
WebP images work on iOS and iPadOS, but they don't work on tvOS.
In fact, Apple says that:
But why doesn't it work? Is there a way to make WebP work on tvOS?
guard let rawfilter = CoreImage.CIRAWFilter(imageData: data, identifierHint: nil) else { return }
guard let ciImage = rawfilter.outputImage else { return }
let width = Int(ciImage.extent.width)
let height = Int(ciImage.extent.height)
let rect = CGRect(x: 0, y: 0, width: width, height: height)
let context = CIContext()
guard let cgImage = context.createCGImage(ciImage, from: rect, format: .RGBA16, colorSpace: CGColorSpaceCreateDeviceRGB()) else { return }
print("cgImage prepared")
guard let dataProvider = cgImage.dataProvider else { return }
let rgbaData = CFDataCreateMutableCopy(kCFAllocatorDefault, 0, dataProvider.data)
In iOS 16 this process is much faster than the same process in iOS 17.
Is there a way to speed up the decoding?
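One thing that may be worth ruling out, independent of the OS version: CIContext is expensive to create, so if this code runs once per image, hoisting the context out and reusing it could help. A sketch:

import CoreImage

// Sketch: create the CIContext once and reuse it for every RAW decode,
// instead of constructing a new one per image.
let sharedCIContext = CIContext()

func decodeRAW(_ data: Data) -> CGImage? {
    guard let rawFilter = CIRAWFilter(imageData: data, identifierHint: nil),
          let ciImage = rawFilter.outputImage else { return nil }
    return sharedCIContext.createCGImage(ciImage,
                                         from: ciImage.extent,
                                         format: .RGBA16,
                                         colorSpace: CGColorSpaceCreateDeviceRGB())
}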
I am able to create a UIImage from WebP data using UIImage(data: data) on iOS and iPadOS. When I try to do the same thing on watchOS 10, it fails. Is there a workaround for displaying WebP images on watchOS if this isn't expected to work?
It appears I can't add a WebP image as an Image Set in an Asset Catalog. Is that correct?
As a workaround, I added the WebP image as a Data Set. I'm then loading it as a CGImage with the following code:
guard let asset = NSDataAsset(name: imageName),
      let imageSource = CGImageSourceCreateWithData(asset.data as CFData, nil),
      let image = CGImageSourceCreateImageAtIndex(imageSource, 0, nil)
else {
    return nil
}
// Use image
Is it fine to store and load WebP images in this way? If not, then what's best practice?