Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

All subtopics
Posts under Machine Learning & AI topic

Post | Replies | Boosts | Views | Activity

Playground
Woke up to a notification saying Playground, Genmoji, etc. were ready, but every time I try to use them it says early access was requested. Anyone else had this issue? If so, how did you fix it?
0
0
333
Nov ’24
Example Usage of sliceUpdateDataTensor
Where can I find an example of using this MPSGraph function? I'm trying to use it to paste an image into a larger canvas at certain coordinates.
func sliceUpdateDataTensor(
    _ dataTensor: MPSGraphTensor,
    update updateTensor: MPSGraphTensor,
    starts: [NSNumber],
    ends: [NSNumber],
    strides: [NSNumber],
    startMask: UInt32,
    endMask: UInt32,
    squeezeMask: UInt32,
    name: String?
) -> MPSGraphTensor
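Until an official sample turns up, here is a minimal, untested sketch of how the signature above could be used to paste a 256x256 patch into a 1024x1024 canvas at row 100, column 200. The shapes, coordinates, and NHWC layout are assumptions for illustration, not taken from the original post.

import MetalPerformanceShadersGraph

let graph = MPSGraph()

// Hypothetical NHWC placeholders: a large canvas and a smaller patch to paste into it.
let canvas = graph.placeholder(shape: [1, 1024, 1024, 3], dataType: .float32, name: "canvas")
let patch  = graph.placeholder(shape: [1, 256, 256, 3],  dataType: .float32, name: "patch")

// The region selected by starts..<ends (with unit strides) must match the
// update tensor's shape: here 1 x 256 x 256 x 3, starting at (row 100, col 200).
let pasted = graph.sliceUpdateDataTensor(
    canvas,
    update: patch,
    starts:  [0, 100, 200, 0],
    ends:    [1, 356, 456, 3],
    strides: [1, 1, 1, 1],
    startMask: 0,
    endMask: 0,
    squeezeMask: 0,
    name: "pasteIntoCanvas"
)

`pasted` is a new tensor with the patch written over that region; running the graph with real image data fed into both placeholders would produce the composited canvas.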
0
0
520
Nov ’24
Image Playground
How long does it usually take to get access to Image Playground? It's been about a week since I got the iOS 18.2 public beta and I'm still waiting for access to Image Playground. When I got Apple Intelligence, it only took a few hours.
3
0
554
Nov ’24
Unsupported type in JAX metal PJRT plugin with rng_bit_generator
Hi all, when executing an HLO program using the JAX Metal PJRT plugin, the program fails due to an unsupported data type returned by the rng_bit_generator operation. The generated HLO includes:
%output_state, %output = "mhlo.rng_bit_generator"(%1) <{rng_algorithm = #mhlo.rng_algorithm<PHILOX>}> : (tensor<3xi64>) -> (tensor<3xi64>, tensor<3xui32>)
The error message indicates that Metal only supports MPSDataTypeFloat16, MPSDataTypeBFloat16, MPSDataTypeFloat32, MPSDataTypeInt32, and MPSDataTypeInt64, so the ui32 output seems to be incompatible with Metal's allowed types. I'm trying to understand whether the ui32 output is the problem, or whether my use of rng_bit_generator is wrong. Could you clarify if there is a workaround or planned support for ui32 output in this context? Alternatively, guidance on configuring rng_bit_generator for compatibility with Metal's supported types would be greatly appreciated.
0
0
525
Nov ’24
Is it possible to set writingToolsBehavior globally?
Hello, we're investigating an option to disable writing tools for some customers in our app. I'm aware of the writingToolsBehavior property for UITextView etc, but we would like to have a way to set this globally without having to update all UITextView instances (or future instances). Is there any API to do this? We tried using UITextView.appearance.writingToolsBehavior = .none and it seemed promising on 18.2 beta, however it introduced crashes on devices running 18.1. The crashes look like: *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Have you sent -setWritingToolsBehavior: to <UITextView: 0x14462c000; frame = (0 0; 0 0); text = ''; userInteractionEnabled = NO; gestureRecognizers = <NSArray: 0x30067cb40>; backgroundColor = UIExtendedGrayColorSpace 0 0; layer = <CALayer: 0x3009b1ba0>; contentOffset: {0, 0}; contentSize: {0, 0}; adjustedContentInset: {0, 0, 0, 0}> off the main thread? To verify, look for a complaint in the logs: "Unsupported use of UIKit…", and fix the problem if you find it. If your use is main-thread only please file a radar on UIKit, and attach this log. exercisedImplementations = { "setWritingToolsBehavior:" = ( ); }' Similarly, even on 18.2 beta if we used UITextField.appearance.writingToolsBehavior = .none we would see crashes for any search fields like: *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Have you sent -setWritingToolsBehavior: to <UISearchBarTextField: 0x141c04a00; frame = (0 0; 0 0); text = ''; opaque = NO; gestureRecognizers = <NSArray: 0x301fe15c0>; placeholder = Search Leads; borderStyle = RoundedRect; background = <_UITextFieldSystemBackgroundProvider: 0x3015de960: backgroundView=<_UISearchBarSearchFieldBackgroundView: 0x141c60200; frame = (0 0; 0 0); opaque = NO; autoresize = W+H; userInteractionEnabled = NO; layer = <CALayer: 0x3015de8e0>>, fillColor=(null), textfield=<UISearchBarTextField: 0x141c04a00>>; layer = <CALayer: 0x3015de240>> off the main thread? To verify, look for a complaint in the logs: "Unsupported use of UIKit…", and fix the problem if you find it. If your use is main-thread only please file a radar on UIKit, and attach this log. exercisedImplementations = { "setWritingToolsBehavior:" = ( ); }' Is it possible to set this globally?
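There is no documented global switch mentioned in the post above; as a hedged workaround sketch (not an Apple-documented solution), one approach is a UITextView subclass that opts itself out at init time, so new instances pick up the behavior without going through the appearance proxy that crashed on iOS 18.1. The class name is hypothetical, and the availability check is an assumption to keep the setter away from older OS versions.

import UIKit

// Hypothetical subclass: every instance disables Writing Tools for itself on the
// main thread instead of relying on UITextView.appearance.
final class NoWritingToolsTextView: UITextView {
    override init(frame: CGRect, textContainer: NSTextContainer?) {
        super.init(frame: frame, textContainer: textContainer)
        disableWritingTools()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        disableWritingTools()
    }

    private func disableWritingTools() {
        if #available(iOS 18.0, *) {
            writingToolsBehavior = .none
        }
    }
}

Existing UITextView instances (and system-owned fields such as search bars) would still need to be handled individually, so this is mitigation rather than a true global setting.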
0
3
442
Nov ’24
CoreML - doUnloadModel:options:qos:error
I have a model that uses a CoreML delegate, and I’m getting the following warning whenever I set the model to nil. My understanding is that CoreML is creating a cache in the app’s storage but is having issues clearing it. As a result, the app’s storage usage increases every time the model is loaded. This StackOverflow post explains the problem in detail: App Storage Size Increases with CoreML usage This is a critical issue because the cache will eventually fill up the phone’s storage: doUnloadModel:options:qos:error:: model=_ANEModel: { modelURL=file:///var/mobile/Containers/Data/Application/22DDB13E-DABA-4195-846F-F884135F37FE/tmp/F38A9824-3944-420C-BD32-78CE598BE22D-10125-00000586EFDFD7D6.mlmodelc/ : sourceURL= (null) : key={"isegment":0,"inputs":{"0_0":{"shape":[256,256,1,3,1]}},"outputs":{"142_0":{"shape":[16,16,1,222,1]},"138_0":{"shape":[16,16,1,111,1]}}} : identifierSource=0 : cacheURLIdentifier=E0CD0F44FB0417936057FC6375770CFDCCC8C698592ED412DDC9C81E96256872_C9D6E5E73302943871DC2C610588FEBFCB1B1D730C63CA5CED15D2CD5A0AC0DA : string_id=0x00000000 : program=_ANEProgramForEvaluation: { programHandle=6077141501305 : intermediateBufferHandle=6077142786285 : queueDepth=127 } : state=3 : programHandle=6077141501305 : intermediateBufferHandle=6077142786285 : queueDepth=127 : attr={ ANEFModelDescription = { ANEFModelInput16KAlignmentArray = ( ); ANEFModelOutput16KAlignmentArray = ( ); ANEFModelProcedures = ( { ANEFModelInputSymbolIndexArray = ( 0 ); ANEFModelOutputSymbolIndexArray = ( 0, 1 ); ANEFModelProcedureID = 0; } ); kANEFModelInputSymbolsArrayKey = ( "0_0" ); kANEFModelOutputSymbolsArrayKey = ( "138_0@output", "142_0@output" ); kANEFModelProcedureNameToIDMapKey = { net = 0; }; }; NetworkStatusList = ( { LiveInputList = ( { BatchStride = 393216; Batches = 1; Channels = 3; Depth = 1; DepthStride = 393216; Height = 256; Interleave = 1; Name = "0_0"; PlaneCount = 3; PlaneStride = 131072; RowStride = 512; Symbol = "0_0"; Type = Float16; Width = 256; } ); LiveOutputList = ( { BatchStride = 113664; Batches = 1; Channels = 111; Depth = 1; DepthStride = 113664; Height = 16; Interleave = 1; Name = "138_0@output"; PlaneCount = 111; PlaneStride = 1024; RowStride = 64; Symbol = "138_0@output"; Type = Float16; Width = 16; }, { BatchStride = 227328; Batches = 1; Channels = 222; Depth = 1; DepthStride = 227328; Height = 16; Interleave = 1; Name = "142_0@output"; PlaneCount = 222; PlaneStride = 1024; RowStride = 64; Symbol = "142_0@output"; Type = Float16; Width = 16; } ); Name = net; } ); } : perfStatsMask=0} was not loaded by the client.
4
0
824
Nov ’24
Block Apple Intelligence
Hi everyone, Could someone confirm if it's currently possible, or if there are any plans, to restrict users from enabling Apple Intelligence altogether? I understand that we can block individual features using MDM, but I'm interested in knowing if we can prevent users from toggling Apple Intelligence on and off in System Settings entirely. Thanks! Kind Regards, Filipe Nogueira
0
3
554
Nov ’24
VNCoreMLRequest Callback Not Triggered in Modified Video Classification App
Hi everyone, I'm working on integrating object recognition from live video feeds into my existing app by following Apple's sample code. My original project captures video and records it successfully. However, after integrating the Vision-based object detection components (VNCoreMLRequest), no detections occur, and the callback for the request is never triggered. To debug this issue, I’ve added the following functionality: Set up AVCaptureVideoDataOutput for processing video frames. Created a VNCoreMLRequest using my Core ML model. The video recording functionality works as expected, but no object detection happens. I’d like to know: How to debug this further? Which key debug points or logs could help identify where the issue lies? Have I missed any key configurations? Below is a diff of the modifications I’ve made to my project for the new feature. Diff of Changes: (Attach the diff provided above) Specific Observations: The captureOutput method is invoked correctly, but there is no output or error from the Vision request callback. Print statements in my setup function setForVideoClassify() show that the setup executes without errors. Questions: Could this be due to issues with my Core ML model compatibility or configuration? Is the VNCoreMLRequest setup incorrect, or do I need to ensure specific image formats for processing? Platform: Xcode 16.1, iOS 18.1, Swift 5, SwiftUI, iPhone 11, Darwin MacBook-Pro.local 24.1.0 Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:27 PDT 2024; root:xnu-11215.41.3~2/RELEASE_X86_64 x86_64 Any guidance or advice is appreciated! Thanks in advance.
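For reference, a minimal sketch of the Vision wiring described above, with print statements at the two places worth checking first: whether perform(_:) throws, and whether the completion handler runs at all. MyDetector is a hypothetical stand-in for the poster's Core ML model class, and the .right orientation is an assumption for portrait back-camera frames.

import AVFoundation
import CoreML
import Vision

final class FrameClassifier: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    // Build the request once; its completion handler is debug point 1.
    private lazy var request: VNCoreMLRequest = {
        let mlModel = try! MyDetector(configuration: MLModelConfiguration()).model
        let visionModel = try! VNCoreMLModel(for: mlModel)
        let request = VNCoreMLRequest(model: visionModel) { request, error in
            if let error { print("Vision error: \(error)"); return }
            print("Observations: \(request.results?.count ?? 0)")
        }
        request.imageCropAndScaleOption = .scaleFill
        return request
    }()

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Debug point 2: perform(_:) throws immediately if the request cannot start.
        do {
            try VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
                .perform([request])
        } catch {
            print("Vision perform failed: \(error)")
        }
    }
}

If neither print fires, frames are not reaching perform(_:); if only the error path prints, the model's expected input (image format, size, color layout) is the next thing to check.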
1
0
605
Nov ’24
Genmoji/Playground “Persons” list
Hey, has anyone figured out how the “Persons” list in Genmoji/Playground actually works? I’ve had a strange experience so far. When I first got access during Beta 2, the list randomly included about 10–15 people, even though my photo library contains many more recognizable faces. To try fixing this, I started naming faces in the Photos app, hoping they’d be added to the Genmoji/Playground list, but nothing changed. Then, after updating to Beta 3, it added just 2–3 of the people I had named. Encouraged, I spent about an hour naming all the faces in my library. But a few hours later, the list unexpectedly removed around 10 people, leaving me with fewer than I had initially. I’ve also read that leaving the phone locked and plugged into power should help sort people in the library, but that hasn’t worked for me yet. Anyone else experienced this or found a way to make it work? Thanks!
1
1
1.4k
Nov ’24
app intents not working with placeholders and without app name
I tried this:
struct CarShortcutsProvider: AppShortcutsProvider {
    @AppShortcutsBuilder
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: LockCarIntent(),
            phrases: ["Lock my car with \(.applicationName)", "Lock my \(\.$car) with \(.applicationName)"],
            shortTitle: LocalizedStringResource("Lock Car"),
            systemImageName: "lock.fill"
        )
        AppShortcut(
            intent: UnlockCarIntent(),
            phrases: ["Unlock my car with \(.applicationName)", "Unlock my \(\.$car) with \(.applicationName)"],
            shortTitle: LocalizedStringResource("Unlock Car"),
            systemImageName: "lock.open.fill"
        )
    }
}
but Siri only understands "unlock my car", not the phrase with the placeholder. Siri then asks me for the car and understands it, but not in one sentence. Is there something wrong with my code? I also tried it without applicationName first, and then it didn't work at all with Siri. Is this a general limitation of App Intents? I thought the goal was to reduce friction; if the user has to mention the app name every time, it adds friction.
0
2
439
Nov ’24
Core ML Model Prediction in 120 FPS faster than 60 FPS
Hi, I found that continuously predicting with the same Core ML model at 120 FPS is faster than at 60 FPS. On a MacBook Pro M2 with ProMotion turned on, running Core ML model prediction on a 120 FPS video gives an average prediction time of 7.46 ms. But when I turn off ProMotion, set a 60 Hz refresh rate, and run the same Core ML model prediction on a 60 FPS video, the average prediction time is 10.91 ms. What could be the technical explanation for these results? Is there any documentation or technical literature that addresses this behavior?
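One way to separate model latency from display pacing is to time the prediction call directly in a tight loop, with no display link involved. A minimal sketch, where the MLModel instance and the prepared inputs are hypothetical placeholders for the poster's setup:

import CoreML
import Foundation

// Measure average prediction latency over pre-prepared inputs,
// independent of the screen refresh rate.
func averagePredictionMilliseconds(model: MLModel, inputs: [MLFeatureProvider]) throws -> Double {
    guard !inputs.isEmpty else { return 0 }
    var total: CFAbsoluteTime = 0
    for input in inputs {
        let start = CFAbsoluteTimeGetCurrent()
        _ = try model.prediction(from: input)
        total += CFAbsoluteTimeGetCurrent() - start
    }
    return total / Double(inputs.count) * 1000
}

Comparing the number this loop reports with the per-frame figures above would show whether the difference comes from the model itself or from how predictions are scheduled around the display cadence.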
2
0
575
Nov ’24
Unable to Use M1 Mac Pro Max GPU for TensorFlow Model Training
Hi everyone, I'm currently facing an issue where TensorFlow is unable to detect the GPU on my M1 Mac for model training. When I run the following code to check for available GPUs:
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
it prints:
Num GPUs Available: 0
I have already applied the steps mentioned in the Apple developer document: https://developer.apple.com/metal/tensorflow-plugin/
System Information:
Device: M1 Mac Pro Max
Python Version: 3.12.2
TensorFlow Version: 2.17.0
OS: macOS Sequoia (15.1)
Questions:
Is there any additional configuration required to enable GPU support on M1 Macs?
Are there specific TensorFlow versions that I should be using for better compatibility?
Has anyone else faced this issue, and how did you resolve it?
2
1
910
Nov ’24
Issue with CreateML annotations.json file
Hi, I am trying to create a multi-label image classifier model using CreateML (the one included in Xcode 16.1). However, my annotations.json file won't get accepted by the app. I get the following error:
annotations.json file contains field "Index 0" that is not of type String
Here is a JSON example which results in said error:
[
  {
    "image": "image1.jpg",
    "annotations": [
      {
        "label": "car-license-plate",
        "coordinates": { "x": 160, "y": 108, "width": 190, "height": 200 }
      }
    ]
  },
  {
    "image": "image2.jpg",
    "annotations": [
      {
        "label": "car-license-plate",
        "coordinates": { "x": 250, "y": 150, "width": 100, "height": 98 }
      }
    ]
  }
]
1
0
666
Nov ’24
Depth Anything V2 Core ML Model not working with Xcode 16.1
https://developer.apple.com/machine-learning/models/
Adding the DepthAnythingV2SmallF16.mlpackage to a new project in Xcode 16.1 and invoking the class crashes the app. Anyone else having the same issue? I tried Xcode 16.2 beta and it has the same response.
Code:
import UIKit
import CoreML

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        do {
            // Use a default model configuration.
            let defaultConfig = MLModelConfiguration()
            // app crashes here
            let model = try? DepthAnythingV2SmallF16(
                configuration: defaultConfig
            )
        } catch {
            //
        }
    }
}
Response:
/AppleInternal/Library/BuildRoots/4b66fb3c-7dd0-11ef-b4fb-4a83e32a47e1/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:129: failed assertion `Error: unhandled platform for MPSGraph serialization'
1
0
811
Nov ’24
Help with TensorFlow to CoreML Conversion: AttributeError: 'float' object has no attribute 'astype'
Hello, I'm attempting to convert a TensorFlow model to CoreML using the coremltools package, but I'm encountering an error during the conversion process. The error traceback points to an issue within the Cast operation in MIL (the Model Intermediate Language) when it tries to perform type inference:
AttributeError: 'float' object has no attribute 'astype'
Here is the relevant part of the error traceback:
File "~/.pyenv/versions/3.10.12/lib/python3.10/site-packages/coremltools/converters/mil/mil/ops/defs/iOS15/elementwise_unary.py", line 896, in get_cast_value
    return input_var.val.astype(dtype=type_map[dtype_val])
I've tried converting a model from the yamnet-tensorflow2 repository, and this error occurs when CoreML tries to cast a float type during the conversion of certain operations. I'm currently using Python 3.10 and coremltools version 6.0.1, with TensorFlow 2.x. Has anyone encountered a similar issue, or can you offer suggestions on how to resolve this? I've also considered that this might be related to mismatches in the model's data types, but I'm not sure how to proceed.
Platform and package versions:
coremltools 6.1
tensorflow 2.10.0
tensorflow-estimator 2.10.0
tensorflow-hub 0.16.1
tensorflow-io-gcs-filesystem 0.37.1
Python 3.10.12
pip 24.3.1 from ~/.pyenv/versions/3.10.12/lib/python3.10/site-packages/pip (python 3.10)
Darwin MacBook-Pro.local 24.1.0 Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:27 PDT 2024; root:xnu-11215.41.3~2/RELEASE_X86_64 x86_64
Any help or pointers would be greatly appreciated!
2
0
1.1k
Nov ’24