Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore what's possible for your app here.

Posts under the Machine Learning & AI topic

Image Playground Error: Cannot find protocol declaration for 'ImageGenerationViewControllerDelegate'
```swift
@available(macCatalyst 18.1, *)
@available(iOS 18.1, *)
extension CKImageSelectionManager: ImagePlaygroundViewController.Delegate {
    public func imagePlaygroundViewController(_ imagePlaygroundViewController: ImagePlaygroundViewController, didCreateImageAt imageURL: URL) {
    }

    func presentImagePlayground() {
        let imagePlaygroundVC = ImagePlaygroundViewController()
        // Set delegate to self to receive the callback
        imagePlaygroundVC.delegate = self
        imagePlaygroundVC.isModalInPresentation = true // Prevents dismissal with swipe if needed
        self.delegate?.presentImageSelectionViewController(imagePlaygroundVC)
    }
}
```

This generates an error in the Xcode-generated Swift header.
3 replies · 0 boosts · 1.1k views · Nov ’24
Guidance Implementing IndexedEntity and CSSearchableItemAttributeSet
I am working to add Spotlight indexing for my app entities as discussed in WWDC24's video "What's New in App Intents". That video covers the IndexedEntity protocol and the integration with Spotlight via CSSearchableItemAttributeSet. What I'm seeing, though, does not match the video.

In the video, the presenter takes a progressive approach to getting this data into Spotlight, starting with the basics and then expanding to include more support depending on how much the developer wants to do. What I'm seeing is that if you conform to IndexedEntity, your entities appear in Spotlight using the name derived from:

```swift
public var displayRepresentation: DisplayRepresentation
```

So that works; the name appears. BUT the next part of the video goes into how to expand your implementation with more metadata for Spotlight via CSSearchableItemAttributeSet. The issue I'm seeing is that once that's implemented, the items disappear from Spotlight, almost as if that implementation overrides the base implementation in a way that no longer functions. My expectation is that an item with custom attributes would use them in Spotlight as appropriate, not disappear from search; i.e., what's shown in the video should work.

I've got a sample project here: https://hanchor.s3.amazonaws.com/misc/IndexingTest.zip

To reproduce with the sample:
1. Build and run. Indexing is set up in the init() method, so it will just run.
2. Go to Spotlight and search for 'Huntersblau', a string included in the content set. At this point you should see a result - good!
3. Stop the app and go back and uncomment the var attributeSet: CSSearchableItemAttributeSet implementation in IndexingTestApp.swift. This will provide custom attributes to Spotlight.
4. Repeat steps 1 and 2 - you'll see it no longer appears in the search results: when CSSearchableItemAttributeSet is implemented, the item drops out of Spotlight.
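For readers following along, here is a minimal sketch of the setup being described, assuming a hypothetical TrailEntity type; the type, its fields, and TrailQuery are illustrative and are not taken from the linked sample project:

```swift
import AppIntents
import CoreSpotlight
import UniformTypeIdentifiers

// Minimal sketch of the pattern described above, with a hypothetical
// TrailEntity standing in for the poster's entity type.
struct TrailEntity: AppEntity, IndexedEntity {
    static var typeDisplayRepresentation = TypeDisplayRepresentation(name: "Trail")
    static var defaultQuery = TrailQuery()

    var id: UUID
    var name: String
    var summary: String

    // With only this, the entity shows up in Spotlight under its display name.
    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(name)")
    }

    // The step the session pairs with richer Spotlight metadata, and the one
    // the poster reports makes items disappear once implemented.
    var attributeSet: CSSearchableItemAttributeSet {
        let attributes = CSSearchableItemAttributeSet(contentType: .content)
        attributes.displayName = name
        attributes.contentDescription = summary
        return attributes
    }
}

struct TrailQuery: EntityQuery {
    func entities(for identifiers: [TrailEntity.ID]) async throws -> [TrailEntity] { [] }
    func suggestedEntities() async throws -> [TrailEntity] { [] }
}
```

The session pairs this with an indexing call along the lines of try await CSSearchableIndex.default().indexAppEntities(_:), typically run at launch or whenever the data changes.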
3 replies · 0 boosts · 1.1k views · Nov ’24
Depth Anything V2 Core ML Model not working with Xcode 16.1
https://developer.apple.com/machine-learning/models/

Adding the DepthAnythingV2SmallF16.mlpackage to a new project in Xcode 16.1 and invoking the class crashes the app. Anyone else having the same issue? I tried the Xcode 16.2 beta and it gives the same response.

Code:

```swift
import UIKit
import CoreML

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        do {
            // Use a default model configuration.
            let defaultConfig = MLModelConfiguration()

            // app crashes here
            let model = try? DepthAnythingV2SmallF16(
                configuration: defaultConfig
            )
        } catch {
            //
        }
    }
}
```

Response:

```
/AppleInternal/Library/BuildRoots/4b66fb3c-7dd0-11ef-b4fb-4a83e32a47e1/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:129: failed assertion `Error: unhandled platform for MPSGraph serialization'
```
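Not part of the original report, but one diagnostic worth trying is to load the same generated class with an explicit compute-units setting, on the assumption that the MPSGraph assertion comes from a specific compute path; this is a sketch, not a confirmed fix:

```swift
import CoreML

// Diagnostic sketch: restrict Core ML to the CPU (or CPU+GPU) when loading the
// generated DepthAnythingV2SmallF16 class, to check whether the MPSGraph
// serialization assertion is tied to a particular compute path.
func loadDepthModelForDiagnosis() -> DepthAnythingV2SmallF16? {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuOnly   // also worth trying .cpuAndGPU
    return try? DepthAnythingV2SmallF16(configuration: config)
}
```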
1 reply · 0 boosts · 776 views · Nov ’24
Help with TensorFlow to CoreML Conversion: AttributeError: 'float' object has no attribute 'astype'
Hello, I’m attempting to convert a TensorFlow model to Core ML using the coremltools package, but I’m encountering an error during the conversion process. The error traceback points to an issue within the Cast operation in MIL (Model Intermediate Language) when it tries to perform type inference:

AttributeError: 'float' object has no attribute 'astype'

Here is the relevant part of the error traceback:

```
File "~/.pyenv/versions/3.10.12/lib/python3.10/site-packages/coremltools/converters/mil/mil/ops/defs/iOS15/elementwise_unary.py", line 896, in get_cast_value
    return input_var.val.astype(dtype=type_map[dtype_val])
```

I’ve tried converting a model from the yamnet-tensorflow2 repository, and this error occurs when Core ML tries to cast a float type during the conversion of certain operations. I’m currently using Python 3.10 and coremltools version 6.0.1, with TensorFlow 2.x. Has anyone encountered a similar issue, or can anyone offer suggestions on how to resolve it? I’ve also considered that this might be related to mismatches in the model’s data types, but I’m not sure how to proceed.

Platform and package versions:

coremltools 6.1
tensorflow 2.10.0
tensorflow-estimator 2.10.0
tensorflow-hub 0.16.1
tensorflow-io-gcs-filesystem 0.37.1
Python 3.10.12
pip 24.3.1 from ~/.pyenv/versions/3.10.12/lib/python3.10/site-packages/pip (python 3.10)
Darwin MacBook-Pro.local 24.1.0 Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:27 PDT 2024; root:xnu-11215.41.3~2/RELEASE_X86_64 x86_64

Any help or pointers would be greatly appreciated!
2 replies · 0 boosts · 1.1k views · Nov ’24
CoreML - doUnloadModel:options:qos:error
I have a model that uses a Core ML delegate, and I’m getting the following warning whenever I set the model to nil. My understanding is that Core ML is creating a cache in the app’s storage but is having issues clearing it. As a result, the app’s storage usage increases every time the model is loaded.

This StackOverflow post explains the problem in detail: App Storage Size Increases with CoreML usage

This is a critical issue because the cache will eventually fill up the phone’s storage:

```
doUnloadModel:options:qos:error:: model=_ANEModel: {
  modelURL=file:///var/mobile/Containers/Data/Application/22DDB13E-DABA-4195-846F-F884135F37FE/tmp/F38A9824-3944-420C-BD32-78CE598BE22D-10125-00000586EFDFD7D6.mlmodelc/ :
  sourceURL= (null) :
  key={"isegment":0,"inputs":{"0_0":{"shape":[256,256,1,3,1]}},"outputs":{"142_0":{"shape":[16,16,1,222,1]},"138_0":{"shape":[16,16,1,111,1]}}} :
  identifierSource=0 :
  cacheURLIdentifier=E0CD0F44FB0417936057FC6375770CFDCCC8C698592ED412DDC9C81E96256872_C9D6E5E73302943871DC2C610588FEBFCB1B1D730C63CA5CED15D2CD5A0AC0DA :
  string_id=0x00000000 :
  program=_ANEProgramForEvaluation: { programHandle=6077141501305 : intermediateBufferHandle=6077142786285 : queueDepth=127 } :
  state=3 : programHandle=6077141501305 : intermediateBufferHandle=6077142786285 : queueDepth=127 :
  attr={
    ANEFModelDescription = {
      ANEFModelInput16KAlignmentArray = ( );
      ANEFModelOutput16KAlignmentArray = ( );
      ANEFModelProcedures = ( { ANEFModelInputSymbolIndexArray = ( 0 ); ANEFModelOutputSymbolIndexArray = ( 0, 1 ); ANEFModelProcedureID = 0; } );
      kANEFModelInputSymbolsArrayKey = ( "0_0" );
      kANEFModelOutputSymbolsArrayKey = ( "138_0@output", "142_0@output" );
      kANEFModelProcedureNameToIDMapKey = { net = 0; };
    };
    NetworkStatusList = ( {
      LiveInputList = ( { BatchStride = 393216; Batches = 1; Channels = 3; Depth = 1; DepthStride = 393216; Height = 256; Interleave = 1; Name = "0_0"; PlaneCount = 3; PlaneStride = 131072; RowStride = 512; Symbol = "0_0"; Type = Float16; Width = 256; } );
      LiveOutputList = ( { BatchStride = 113664; Batches = 1; Channels = 111; Depth = 1; DepthStride = 113664; Height = 16; Interleave = 1; Name = "138_0@output"; PlaneCount = 111; PlaneStride = 1024; RowStride = 64; Symbol = "138_0@output"; Type = Float16; Width = 16; }, { BatchStride = 227328; Batches = 1; Channels = 222; Depth = 1; DepthStride = 227328; Height = 16; Interleave = 1; Name = "142_0@output"; PlaneCount = 222; PlaneStride = 1024; RowStride = 64; Symbol = "142_0@output"; Type = Float16; Width = 16; } );
      Name = net;
    } );
  } : perfStatsMask=0} was not loaded by the client.
```
4 replies · 0 boosts · 772 views · Nov ’24
Playground
Woke up to a notification saying Playground, Genmoji, etc. were ready, but every time I try to use them it says early access was requested. Has anyone else had this issue? If so, how did you fix it?
0 replies · 0 boosts · 319 views · Nov ’24
Example Usage of sliceUpdateDataTensor
Where can I find an example of using this MPSGraph function? I'm trying to use it to paste an image into a larger canvas at certain coordinates.

```swift
func sliceUpdateDataTensor(
    _ dataTensor: MPSGraphTensor,
    update updateTensor: MPSGraphTensor,
    starts: [NSNumber],
    ends: [NSNumber],
    strides: [NSNumber],
    startMask: UInt32,
    endMask: UInt32,
    squeezeMask: UInt32,
    name: String?
) -> MPSGraphTensor
```
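No sample is cited in the thread, so the following is a minimal sketch of one way the call could be assembled for the paste-at-coordinates case. The NHWC layout, float32 data type, the makePasteGraph name, and all shapes are illustrative assumptions, not a verified recipe:

```swift
import MetalPerformanceShadersGraph

// Sketch: paste a [1, patchH, patchW, 3] image tensor into a
// [1, canvasH, canvasW, 3] canvas tensor with its top-left corner at
// (originY, originX). Layout, shapes, and names are assumptions.
func makePasteGraph(canvasH: Int, canvasW: Int,
                    patchH: Int, patchW: Int,
                    originY: Int, originX: Int)
    -> (graph: MPSGraph, canvas: MPSGraphTensor, patch: MPSGraphTensor, pasted: MPSGraphTensor) {

    let graph = MPSGraph()

    let canvasShape = [1, canvasH, canvasW, 3].map { NSNumber(value: $0) }
    let patchShape = [1, patchH, patchW, 3].map { NSNumber(value: $0) }

    let canvas = graph.placeholder(shape: canvasShape, dataType: .float32, name: "canvas")
    let patch = graph.placeholder(shape: patchShape, dataType: .float32, name: "patch")

    // With all masks set to 0, the update tensor must exactly fill the
    // per-dimension [starts, ends) window.
    let pasted = graph.sliceUpdateDataTensor(
        canvas,
        update: patch,
        starts: [0, originY, originX, 0].map { NSNumber(value: $0) },
        ends: [1, originY + patchH, originX + patchW, 3].map { NSNumber(value: $0) },
        strides: [1, 1, 1, 1].map { NSNumber(value: $0) },
        startMask: 0,
        endMask: 0,
        squeezeMask: 0,
        name: "paste")

    return (graph, canvas, patch, pasted)
}
```

Feeding real pixel data and reading the result back would then go through MPSGraphTensorData and graph.run(feeds:targetTensors:targetOperations:), which is omitted here.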
0 replies · 0 boosts · 515 views · Nov ’24
Image Playground
How long does it usually take to get access to Image Playground? It's been about a week since I got the iOS 18.2 public beta, and I'm still waiting for access to Image Playground. When I got Apple Intelligence, it only took a few hours.
3 replies · 0 boosts · 546 views · Nov ’24
Unsupported type in JAX metal PJRT plugin with rng_bit_generator
Hi all,

When executing an HLO program using the JAX Metal PJRT plugin, the program fails due to an unsupported data type returned by the rng_bit_generator operation. The generated HLO includes:

```
%output_state, %output = "mhlo.rng_bit_generator"(%1) <{rng_algorithm = #mhlo.rng_algorithm<PHILOX>}> : (tensor<3xi64>) -> (tensor<3xi64>, tensor<3xui32>)
```

The error message indicates that Metal only supports MPSDataTypeFloat16, MPSDataTypeBFloat16, MPSDataTypeFloat32, MPSDataTypeInt32, and MPSDataTypeInt64. The use of ui32 seems to be incompatible with Metal's allowed types.

I'm trying to understand whether the ui32 output is the problem or whether my use of rng_bit_generator is wrong. Could you clarify if there is a workaround or planned support for ui32 output in this context? Alternatively, guidance on configuring rng_bit_generator for compatibility with Metal's supported types would be greatly appreciated.
0 replies · 0 boosts · 509 views · Nov ’24
Feasibility of Real-Time Object Detection in Live Video with Core ML on M1 Pro and A-Series Chips
Hello,

I am exploring real-time object detection, and its replacement/overlay with another shape, on live video streams for an iOS app using the Core ML and Vision frameworks. My target is to achieve high-speed, real-time detection without noticeable latency, similar to what's possible with page-fault handling and associative caching in an OS, but applied to video processing.

Given that this requires consistent, real-time model inference, I'm curious about how well the Neural Engine or GPU can handle such tasks on A-series chips in iPhones versus M-series chips (specifically the M1 Pro and possibly the M4) in MacBooks. Here are a few specific points I'd like insight on:

Hardware suitability: How feasible is it to perform real-time object detection with Core ML on the Neural Engine (i.e., can it maintain low latency)? Would the M-series chips (e.g., M1 Pro or newer) offer a tangible benefit for this type of task compared to the A-series in mobile devices? Which A- and M-series chips would be the minimum feasible recommendation for such a task?

Performance expectations: For continuous, live video object detection, what would be the expected frame rate or latency using an optimized Core ML model? Has anyone benchmarked such applications, and is the M-series required to achieve smooth, real-time processing?

Differences across Apple hardware: How does performance scale between the A-series Neural Engine and the M-series GPU and Neural Engine? Is the M-series vastly superior for real-time Core ML tasks like object detection on live video feeds?

If anyone has attempted live object detection on these chips, any insights on real-time performance, limitations, or optimizations would be highly appreciated. Please refer to: Apple APIs

Thank you in advance for your help!
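For concreteness, here is a minimal sketch of the pipeline being asked about: a Core ML detector wrapped in Vision and run once per camera frame. FrameDetector, the model handling, and the print call are placeholders for illustration; this is not a benchmark of any particular chip:

```swift
import Vision
import CoreML
import CoreVideo

// Sketch of a per-frame detection pass with Vision + Core ML. The detection
// model itself (and the overlay/replacement drawing) is not shown.
final class FrameDetector {
    private let request: VNCoreMLRequest

    init(model: MLModel) throws {
        let vnModel = try VNCoreMLModel(for: model)
        request = VNCoreMLRequest(model: vnModel) { request, _ in
            // Each observation carries a normalized bounding box and labels;
            // this is where an overlay or replacement shape would be placed.
            let detections = request.results as? [VNRecognizedObjectObservation] ?? []
            print(detections.map { $0.labels.first?.identifier ?? "?" })
        }
        request.imageCropAndScaleOption = .scaleFill
    }

    // Called from the AVCaptureVideoDataOutput sample-buffer delegate.
    func process(_ pixelBuffer: CVPixelBuffer) {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
    }
}
```

The achievable frame rate then comes down to the model itself and which compute units Core ML schedules it on, which is exactly what the post is asking about.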
3 replies · 0 boosts · 761 views · Nov ’24
Is it possible to set writingToolsBehavior globally?
Hello, we're investigating an option to disable Writing Tools for some customers in our app. I'm aware of the writingToolsBehavior property for UITextView etc., but we would like a way to set this globally without having to update all UITextView instances (or future instances). Is there any API to do this?

We tried using UITextView.appearance.writingToolsBehavior = .none and it seemed promising on the 18.2 beta; however, it introduced crashes on devices running 18.1. The crashes look like:

```
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Have you sent -setWritingToolsBehavior: to <UITextView: 0x14462c000; frame = (0 0; 0 0); text = ''; userInteractionEnabled = NO; gestureRecognizers = <NSArray: 0x30067cb40>; backgroundColor = UIExtendedGrayColorSpace 0 0; layer = <CALayer: 0x3009b1ba0>; contentOffset: {0, 0}; contentSize: {0, 0}; adjustedContentInset: {0, 0, 0, 0}> off the main thread? To verify, look for a complaint in the logs: "Unsupported use of UIKit…", and fix the problem if you find it. If your use is main-thread only please file a radar on UIKit, and attach this log. exercisedImplementations = { "setWritingToolsBehavior:" = ( ); }'
```

Similarly, even on the 18.2 beta, if we used UITextField.appearance.writingToolsBehavior = .none we would see crashes for any search fields like:

```
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Have you sent -setWritingToolsBehavior: to <UISearchBarTextField: 0x141c04a00; frame = (0 0; 0 0); text = ''; opaque = NO; gestureRecognizers = <NSArray: 0x301fe15c0>; placeholder = Search Leads; borderStyle = RoundedRect; background = <_UITextFieldSystemBackgroundProvider: 0x3015de960: backgroundView=<_UISearchBarSearchFieldBackgroundView: 0x141c60200; frame = (0 0; 0 0); opaque = NO; autoresize = W+H; userInteractionEnabled = NO; layer = <CALayer: 0x3015de8e0>>, fillColor=(null), textfield=<UISearchBarTextField: 0x141c04a00>>; layer = <CALayer: 0x3015de240>> off the main thread? To verify, look for a complaint in the logs: "Unsupported use of UIKit…", and fix the problem if you find it. If your use is main-thread only please file a radar on UIKit, and attach this log. exercisedImplementations = { "setWritingToolsBehavior:" = ( ); }'
```

Is it possible to set this globally?
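As a point of reference, here is a sketch of the per-instance route mentioned in the post, guarded by availability; it is not a global solution and not a fix for the appearance-proxy crash, and the iOS 18.0 availability is an assumption:

```swift
import UIKit

// Per-instance sketch only: disable Writing Tools on one UITextView, as a
// no-op on systems without the API. This does not address the global case.
func disableWritingTools(for textView: UITextView) {
    if #available(iOS 18.0, *) {
        textView.writingToolsBehavior = .none
    }
}
```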
0 replies · 3 boosts · 419 views · Nov ’24