Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and discover the possibilities for your app here.

Posts under the Machine Learning & AI topic. Each entry shows the post followed by its replies, boosts, views, and latest activity.

Does the Random Number Generation (RNG) process change over different OS versions?
Hi everyone! I appreciate your help. I am a researcher and I use UMAP to cluster my data. Reproducibility is a key requirement in my field, so I set a random seed for reproducibility. After coming back to my project after some time, I do not get the same results as previously, even though I am working in a virtual environment, which I did not change. When pondering the reasons, I remembered that I upgraded my OS from Sonoma 14.1.1 to 14.5, so I was wondering whether the change in OS might cause those issues. I'm sorry if this question is obvious to developer folks, but before I downgrade my OS or create a virtual machine, any tip is much appreciated. Thank you!
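A minimal sketch of how the seed is typically pinned in this kind of workflow, assuming numpy and umap-learn and a placeholder data array (none of which are shown in the post). Note that a fixed random_state only guarantees reproducibility within one unchanged software stack; results can still drift across library, BLAS, or OS updates because of floating-point differences.

import numpy as np
import umap  # umap-learn

rng_seed = 42
np.random.seed(rng_seed)          # seed the global NumPy RNG for any legacy consumers

data = np.random.rand(500, 32)    # placeholder for the real dataset (assumption)

# random_state pins UMAP's internal randomness; this also disables some
# parallelism, trading speed for run-to-run reproducibility.
reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=rng_seed)
embedding = reducer.fit_transform(data)
print(embedding.shape)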
2 replies · 0 boosts · 443 views · Oct ’24
Converted Model Preview Issues in Xcode
Hello! I have a TrackNet model that I have converted to CoreML (.mlpackage) using coremltools, and the conversion process appears to go smoothly, as I get the .mlpackage file I am looking for with the weights and model.mlmodel file in the folder. However, when I drag it into Xcode, it just shows up as 4 script tags (as pictured) instead of the model "interface" that is typically expected. I initially was concerned that my model was not compatible with CoreML, but upon logging the conversions, everything seems to be converted properly.

I have some code that may be relevant in debugging this issue.

How I use the model:

model = BallTrackerNet()  # this is the model architecture which will be referenced later
device = self.device  # cpu
model.load_state_dict(torch.load("models/balltrackerbest.pt", map_location=device))  # balltrackerbest is the weights
model = model.to(device)
model.eval()

Here is the BallTrackerNet() model itself:

import torch.nn as nn
import torch

class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, pad=1, stride=1, bias=True):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size, stride=stride, padding=pad, bias=bias),
            nn.ReLU(),
            nn.BatchNorm2d(out_channels)
        )

    def forward(self, x):
        return self.block(x)

class BallTrackerNet(nn.Module):
    def __init__(self, out_channels=256):
        super().__init__()
        self.out_channels = out_channels
        self.conv1 = ConvBlock(in_channels=9, out_channels=64)
        self.conv2 = ConvBlock(in_channels=64, out_channels=64)
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv3 = ConvBlock(in_channels=64, out_channels=128)
        self.conv4 = ConvBlock(in_channels=128, out_channels=128)
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv5 = ConvBlock(in_channels=128, out_channels=256)
        self.conv6 = ConvBlock(in_channels=256, out_channels=256)
        self.conv7 = ConvBlock(in_channels=256, out_channels=256)
        self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv8 = ConvBlock(in_channels=256, out_channels=512)
        self.conv9 = ConvBlock(in_channels=512, out_channels=512)
        self.conv10 = ConvBlock(in_channels=512, out_channels=512)
        self.ups1 = nn.Upsample(scale_factor=2)
        self.conv11 = ConvBlock(in_channels=512, out_channels=256)
        self.conv12 = ConvBlock(in_channels=256, out_channels=256)
        self.conv13 = ConvBlock(in_channels=256, out_channels=256)
        self.ups2 = nn.Upsample(scale_factor=2)
        self.conv14 = ConvBlock(in_channels=256, out_channels=128)
        self.conv15 = ConvBlock(in_channels=128, out_channels=128)
        self.ups3 = nn.Upsample(scale_factor=2)
        self.conv16 = ConvBlock(in_channels=128, out_channels=64)
        self.conv17 = ConvBlock(in_channels=64, out_channels=64)
        self.conv18 = ConvBlock(in_channels=64, out_channels=self.out_channels)
        self.softmax = nn.Softmax(dim=1)
        self._init_weights()

    def forward(self, x, testing=False):
        batch_size = x.size(0)
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.pool1(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.pool2(x)
        x = self.conv5(x)
        x = self.conv6(x)
        x = self.conv7(x)
        x = self.pool3(x)
        x = self.conv8(x)
        x = self.conv9(x)
        x = self.conv10(x)
        x = self.ups1(x)
        x = self.conv11(x)
        x = self.conv12(x)
        x = self.conv13(x)
        x = self.ups2(x)
        x = self.conv14(x)
        x = self.conv15(x)
        x = self.ups3(x)
        x = self.conv16(x)
        x = self.conv17(x)
        x = self.conv18(x)
        # x = self.softmax(x)
        out = x.reshape(batch_size, self.out_channels, -1)
        if testing:
            out = self.softmax(out)
        return out

    def _init_weights(self):
        for module in self.modules():
            if isinstance(module, nn.Conv2d):
                nn.init.uniform_(module.weight, -0.05, 0.05)
                if module.bias is not None:
                    nn.init.constant_(module.bias, 0)
            elif isinstance(module, nn.BatchNorm2d):
                nn.init.constant_(module.weight, 1)
                nn.init.constant_(module.bias, 0)

Here is also the metadata of my model:

[ {
  "metadataOutputVersion" : "3.0",
  "storagePrecision" : "Float16",
  "outputSchema" : [ { "hasShapeFlexibility" : "0", "isOptional" : "0", "dataType" : "Float32", "formattedType" : "MultiArray (Float32 1 × 256 × 230400)", "shortDescription" : "", "shape" : "[1, 256, 230400]", "name" : "var_462", "type" : "MultiArray" } ],
  "modelParameters" : [ ],
  "specificationVersion" : 6,
  "mlProgramOperationTypeHistogram" : { "Cast" : 2, "Conv" : 18, "Relu" : 18, "BatchNorm" : 18, "Reshape" : 1, "UpsampleNearestNeighbor" : 3, "MaxPool" : 3 },
  "computePrecision" : "Mixed (Float16, Float32, Int32)",
  "isUpdatable" : "0",
  "availability" : { "macOS" : "12.0", "tvOS" : "15.0", "visionOS" : "1.0", "watchOS" : "8.0", "iOS" : "15.0", "macCatalyst" : "15.0" },
  "modelType" : { "name" : "MLModelType_mlProgram" },
  "userDefinedMetadata" : { "com.github.apple.coremltools.source_dialect" : "TorchScript", "com.github.apple.coremltools.source" : "torch==2.5.1", "com.github.apple.coremltools.version" : "8.1" },
  "inputSchema" : [ { "hasShapeFlexibility" : "0", "isOptional" : "0", "dataType" : "Float32", "formattedType" : "MultiArray (Float32 1 × 9 × 360 × 640)", "shortDescription" : "", "shape" : "[1, 9, 360, 640]", "name" : "input_frames", "type" : "MultiArray" } ],
  "generatedClassName" : "BallTracker",
  "method" : "predict"
} ]

I have been struggling with this conversion for almost 2 weeks now, so any help, ideas or pointers would be greatly appreciated! Let me know if any other information would be helpful to see as well. Thanks!

Michael
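For context, a hedged sketch of what such a PyTorch-to-Core ML conversion typically looks like (the poster's actual conversion script is not shown); the input shape, input name, and iOS 15 deployment target are taken from the metadata above, while the file names are assumptions:

import torch
import coremltools as ct

model = BallTrackerNet()  # architecture as defined above
model.load_state_dict(torch.load("models/balltrackerbest.pt", map_location="cpu"))
model.eval()

# Trace with an example input matching the published input schema (1, 9, 360, 640).
example_input = torch.rand(1, 9, 360, 640)
traced = torch.jit.trace(model, example_input)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input_frames", shape=(1, 9, 360, 640))],
    minimum_deployment_target=ct.target.iOS15,  # matches specificationVersion 6
)
mlmodel.save("BallTracker.mlpackage")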
1 reply · 0 boosts · 615 views · Jan ’25
Issue with iOS 18.2 Beta - "Ask" Button Not Working in Visual Intelligence
Hey everyone, I recently installed the iOS 18.2 developer beta and connected my ChatGPT Premium account to use it within the Visual Intelligence feature. After taking a photo, I tried pressing the "Ask" button, but it’s completely unresponsive. I've tried troubleshooting by doing a hard reset and a regular restart, but no luck so far. Has anyone else encountered this issue? If so, do you know of any workaround or fix that might help? Thanks in advance!
2 replies · 0 boosts · 1.9k views · Oct ’24
Issue with CreateML annotations.json file
Hi, I am trying to create a multi-label image classifier model using Create ML (the one included in Xcode 16.1). However, my annotations.json file won't get accepted by the app. I get the following error:

annotations.json file contains field "Index 0" that is not of type String

Here is a JSON example which results in said error:

[
  {
    "image": "image1.jpg",
    "annotations": [
      { "label": "car-license-plate", "coordinates": { "x": 160, "y": 108, "width": 190, "height": 200 } }
    ]
  },
  {
    "image": "image2.jpg",
    "annotations": [
      { "label": "car-license-plate", "coordinates": { "x": 250, "y": 150, "width": 100, "height": 98 } }
    ]
  }
]
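The error wording ("Index 0" is not of type String) suggests the multi-label classifier template expects each entry's annotations array to contain plain label strings, rather than objects with coordinates (which is the object-detection style used above). A hedged sketch of generating a file in that shape, with key names carried over from the post rather than confirmed against Create ML documentation:

import json

entries = [
    {"image": "image1.jpg", "annotations": ["car-license-plate"]},  # labels as plain strings
    {"image": "image2.jpg", "annotations": ["car-license-plate"]},
]

with open("annotations.json", "w") as f:
    json.dump(entries, f, indent=2)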
1 reply · 0 boosts · 660 views · Nov ’24
CoreML Conversion Display Issues
Hello! I have a TrackNet model that I have converted to CoreML (.mlpackage) using coremltools, and the conversion process appears to go smoothly, as I get the .mlpackage file I am looking for with the weights and model.mlmodel file in the folder. However, when I drag it into Xcode, it just shows up as 4 script tags instead of the model "interface" that is typically expected. I initially was concerned that my model was not compatible with CoreML, but upon logging the conversions, everything seems to be converted properly.

I have some code that may be relevant in debugging this issue.

How I use the model:

model = BallTrackerNet()  # this is the model architecture which will be referenced later
device = self.device  # cpu
model.load_state_dict(torch.load("models/balltrackerbest.pt", map_location=device))  # balltrackerbest is the weights
model = model.to(device)
model.eval()

Here is the BallTrackerNet() model itself:

import torch.nn as nn
import torch

class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, pad=1, stride=1, bias=True):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size, stride=stride, padding=pad, bias=bias),
            nn.ReLU(),
            nn.BatchNorm2d(out_channels)
        )

    def forward(self, x):
        return self.block(x)

class BallTrackerNet(nn.Module):
    def __init__(self, out_channels=256):
        super().__init__()
        self.out_channels = out_channels
        self.conv1 = ConvBlock(in_channels=9, out_channels=64)
        self.conv2 = ConvBlock(in_channels=64, out_channels=64)
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv3 = ConvBlock(in_channels=64, out_channels=128)
        self.conv4 = ConvBlock(in_channels=128, out_channels=128)
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv5 = ConvBlock(in_channels=128, out_channels=256)
        self.conv6 = ConvBlock(in_channels=256, out_channels=256)
        self.conv7 = ConvBlock(in_channels=256, out_channels=256)
        self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv8 = ConvBlock(in_channels=256, out_channels=512)
        self.conv9 = ConvBlock(in_channels=512, out_channels=512)
        self.conv10 = ConvBlock(in_channels=512, out_channels=512)
        self.ups1 = nn.Upsample(scale_factor=2)
        self.conv11 = ConvBlock(in_channels=512, out_channels=256)
        self.conv12 = ConvBlock(in_channels=256, out_channels=256)
        self.conv13 = ConvBlock(in_channels=256, out_channels=256)
        self.ups2 = nn.Upsample(scale_factor=2)
        self.conv14 = ConvBlock(in_channels=256, out_channels=128)
        self.conv15 = ConvBlock(in_channels=128, out_channels=128)
        self.ups3 = nn.Upsample(scale_factor=2)
        self.conv16 = ConvBlock(in_channels=128, out_channels=64)
        self.conv17 = ConvBlock(in_channels=64, out_channels=64)
        self.conv18 = ConvBlock(in_channels=64, out_channels=self.out_channels)
        self.softmax = nn.Softmax(dim=1)
        self._init_weights()

    def forward(self, x, testing=False):
        batch_size = x.size(0)
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.pool1(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.pool2(x)
        x = self.conv5(x)
        x = self.conv6(x)
        x = self.conv7(x)
        x = self.pool3(x)
        x = self.conv8(x)
        x = self.conv9(x)
        x = self.conv10(x)
        x = self.ups1(x)
        x = self.conv11(x)
        x = self.conv12(x)
        x = self.conv13(x)
        x = self.ups2(x)
        x = self.conv14(x)
        x = self.conv15(x)
        x = self.ups3(x)
        x = self.conv16(x)
        x = self.conv17(x)
        x = self.conv18(x)
        # x = self.softmax(x)
        out = x.reshape(batch_size, self.out_channels, -1)
        if testing:
            out = self.softmax(out)
        return out

    def _init_weights(self):
        for module in self.modules():
            if isinstance(module, nn.Conv2d):
                nn.init.uniform_(module.weight, -0.05, 0.05)
                if module.bias is not None:
                    nn.init.constant_(module.bias, 0)
            elif isinstance(module, nn.BatchNorm2d):
                nn.init.constant_(module.weight, 1)
                nn.init.constant_(module.bias, 0)

I have been struggling with this conversion for almost 2 weeks now, so any help, ideas or pointers would be greatly appreciated! Thanks!

Michael
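One way to narrow down whether the problem is the package itself or the Xcode preview is to load the converted package back with coremltools on macOS and run a prediction. A minimal sketch, assuming the package path and the input name from the related post's metadata above:

import numpy as np
import coremltools as ct

mlmodel = ct.models.MLModel("BallTracker.mlpackage")   # path is an assumption
print(mlmodel.get_spec().description)                  # check input/output names and shapes

x = np.random.rand(1, 9, 360, 640).astype(np.float32)
out = mlmodel.predict({"input_frames": x})             # prediction requires macOS
print({name: arr.shape for name, arr in out.items()})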
13 replies · 0 boosts · 1k views · Jan ’25
Help with TensorFlow to CoreML Conversion: AttributeError: 'float' object has no attribute 'astype'
Hello, I’m attempting to convert a TensorFlow model to CoreML using the coremltools package, but I’m encountering an error during the conversion process. The error traceback points to an issue within the Cast operation in MIL (Model Intermediate Language) when it tries to perform type inference:

AttributeError: 'float' object has no attribute 'astype'

Here is the relevant part of the error traceback:

File "~/.pyenv/versions/3.10.12/lib/python3.10/site-packages/coremltools/converters/mil/mil/ops/defs/iOS15/elementwise_unary.py", line 896, in get_cast_value
    return input_var.val.astype(dtype=type_map[dtype_val])

I’ve tried converting a model from the yamnet-tensorflow2 repository, and this error occurs when CoreML tries to cast a float type during the conversion of certain operations. I’m currently using Python 3.10 and coremltools version 6.0.1, with TensorFlow 2.x. Has anyone encountered a similar issue or can offer suggestions on how to resolve this? I’ve also considered that this might be related to mismatches in the model’s data types, but I’m not sure how to proceed.

Platform and package versions:

coremltools 6.1
tensorflow 2.10.0
tensorflow-estimator 2.10.0
tensorflow-hub 0.16.1
tensorflow-io-gcs-filesystem 0.37.1
Python 3.10.12
pip 24.3.1 from ~/.pyenv/versions/3.10.12/lib/python3.10/site-packages/pip (python 3.10)
Darwin MacBook-Pro.local 24.1.0 Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:27 PDT 2024; root:xnu-11215.41.3~2/RELEASE_X86_64 x86_64

Any help or pointers would be greatly appreciated!
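Not a confirmed fix, but since the traceback points inside coremltools' own Cast type-inference code rather than the model, a minimal retest harness can help isolate whether the failure is version-related (for example, after upgrading coremltools). A hedged sketch that converts a local TF2 SavedModel directory; the path and deployment target are assumptions:

import coremltools as ct

# Assumed local SavedModel directory exported from the yamnet-tensorflow2 repo.
saved_model_dir = "yamnet_saved_model"

mlmodel = ct.convert(
    saved_model_dir,
    source="tensorflow",
    minimum_deployment_target=ct.target.iOS16,  # assumption; adjust as needed
)
mlmodel.save("YAMNet.mlpackage")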
2 replies · 0 boosts · 1.1k views · Nov ’24
Create ML app seems to stop testing without error
I have a smallish image classifier I've been working on using the Create ML app. For a while everything was going fine, but lately, as the dataset has gotten larger, Create ML seems to stop during the testing phase with no error or test results. You can see here that there is no score in the result box, even though there are testing started and completed messages.

No error message is shown in the Create ML app, but I do see these messages in the log:

default 14:25:36.529887-0500 MLRecipeExecutionService [0x6000012bc000] activating connection: mach=false listener=false peer=false name=com.apple.coremedia.videodecoder
default 14:25:36.529978-0500 MLRecipeExecutionService [0x41c5d34c0] activating connection: mach=false listener=true peer=false name=(anonymous)
default 14:25:36.530004-0500 MLRecipeExecutionService [0x41c5d34c0] Channel could not return listener port.
default 14:25:36.530364-0500 MLRecipeExecutionService [0x429a88740] activating connection: mach=false listener=false peer=true name=com.apple.xpc.anonymous.0x41c5d34c0.peer[1167].0x429a88740
default 14:25:36.534523-0500 MLRecipeExecutionService [0x6000012bc000] invalidated because the current process cancelled the connection by calling xpc_connection_cancel()
default 14:25:36.534537-0500 MLRecipeExecutionService [0x41c5d34c0] invalidated because the current process cancelled the connection by calling xpc_connection_cancel()
default 14:25:36.534544-0500 MLRecipeExecutionService [0x429a88740] invalidated because the current process cancelled the connection by calling xpc_connection_cancel()
error 14:25:36.558788-0500 MLRecipeExecutionService CreateWithURL:342: *** ERROR: err=24 (Too many open files) - could not open '<CFURL 0x60000079b540 [0x1fdd32240]>{string = file:///Users/kevin/Library/Mobile%20Documents/com~apple~CloudDocs/Binary%20Formations/Under%20My%20Roof/Core%20ML%20Training%20Data/Household%20Items/Output/2025.01.23_12.55.16/Test/Stove/Test480.webp, encoding = 134217984, base = (null)}'
default 14:25:36.559030-0500 MLRecipeExecutionService Error: <private>
default 14:25:36.559077-0500 MLRecipeExecutionService Error: <private>

Of particular interest is the "Too many open files" message from MLRecipeExecutionService referencing one of the test images. There are a total of 2,555 test images, which I wouldn't think would be a very large set. The system doesn't seem to be running out of memory or anything like that. Near the end of the test run, the MLRecipeExecutionService process had 2,934 file descriptors open according to lsof.

Has anyone else run into this or know of a workaround? So far I've tried rebooting and recreating the Create ML project. Currently using Create ML Version 6.1 (150.3) on macOS 15.2 (24C101) running on a Mac Studio.
1 reply · 0 boosts · 453 views · Jan ’25
Running out of memory analyzing images with ImageRequestHandler
Hi, I'm trying to analyze images in my Photos library with the following code:

func analyzeImages(_ inputIDs: [String]) {
    let manager = PHImageManager.default()
    let option = PHImageRequestOptions()
    option.isSynchronous = true
    option.isNetworkAccessAllowed = true
    option.resizeMode = .none
    option.deliveryMode = .highQualityFormat
    let concurrentTasks = 1
    let clock = ContinuousClock()
    let duration = clock.measure {
        let group = DispatchGroup()
        let sema = DispatchSemaphore(value: concurrentTasks)
        for entry in inputIDs {
            if let asset = PHAsset.fetchAssets(withLocalIdentifiers: [entry], options: nil).firstObject {
                print("analyzing asset: \(entry)")
                group.enter()
                sema.wait()
                manager.requestImage(for: asset, targetSize: PHImageManagerMaximumSize, contentMode: .aspectFit, options: option) { (result, info) in
                    if let result = result {
                        Task {
                            print("retrieved asset: \(entry)")
                            let aestheticsRequest = CalculateImageAestheticsScoresRequest()
                            let fingerprintRequest = GenerateImageFeaturePrintRequest()
                            let inputImage = result.cgImage!
                            let handler = ImageRequestHandler(inputImage)
                            let (aesthetics, fingerprint) = try await handler.perform(aestheticsRequest, fingerprintRequest)
                            // save results
                            print("finished asset: \(entry)")
                            sema.signal()
                            group.leave()
                        }
                    } else {
                        group.leave()
                    }
                }
            }
        }
        group.wait()
    }
    print("analyzeImages: Duration \(duration)")
}

When running this code, only two requests are being processed simultaneously (due to the semaphore). However, if I call the function with a large list of images (>100), memory usage balloons over 1.6GB and the app crashes. If I call it with a smaller number of images, the loop completes and the memory is freed.

When I use Instruments to look for memory leaks, it indicates no memory leaks are found, but there are 150+ VM: IOSurface allocations by CMPhoto, CoreVideo and CoreGraphics at 35MB each. Shouldn't each surface be released when the task is complete?
2 replies · 0 boosts · 583 views · Dec ’24
The "right" way to add parameters to Siri voice operations
In this thread, I asked about adding parameters to App Shortcuts. The conclusion that I've drawn so far is that for App Shortcuts, there cannot be any parameters in the prompt, otherwise the system cannot find the AppShortcutsProvider. While this is fine for Shortcuts and non-voice interaction, I'd like to find a way to add parameters to the prompt. Here is the scenario: My app controls a device that displays some content on "pages." The pages are defined in an AppEnum, which I use for Shortcuts integration via App Intents. The App Intent functions as expected, and is able to change the page based on the user selection within Shortcuts (or prompted if using the App Shortcut). What I'd like to do is allow the user to be able to say "Siri, open with ." So far, the closest I've come to understanding how this works is through the .intentsdefinition file you can create (and SiriKit in general); however, the part that really confused me there is a button in the File Editor that says "Convert to App Intent." To me, this means that I should be able to use the app intent I've already authored and hook that into Siri, rather than making an entirely new function/code-block that does exactly the same thing. Ideally, that's what I want to do. What's the right way to define this behavior? p.s. If I had to pick an intent schema in the context of AssistantSchemas, I'd say it's closest to the "Open File" one, if that helps. I'd ultimately like to make the "pages" user-customizable, so in the long run, that would be what I'd do.
2 replies · 0 boosts · 1.3k views · Jan ’25
Source Files from the Session number 424 WWDC2019
In the 2019 WWDC session Training Object Detection Models in Create ML, a JSON file named annotations_832_newdice_copy.json was shown alongside an images folder named Dice Training Images Two Sets. Are these resources made available for devs? I am looking to understand whether the 6,000 annotations needed to be done manually. Meaning, did they manually annotate around 1,000 images, making 6 labels on each, to achieve this source? The video shows around 1,000 images. Can someone please clarify?
2 replies · 0 boosts · 633 views · Dec ’24
Inform iOS about AppShortcutsProvider
I've been following along with "App Shortcuts" development but cannot get Siri to run my Intent. The intent on its own works in Shortcuts, along with a couple others that aren't in the AppShortcutsProvider structure. I keep getting the following two errors, but cannot figure out why this is occurring from the documentation or other forum posts:

No ConnectionContext found for 12909953344
Attempted to fetch App Shortcuts, but couldn't find the AppShortcutsProvider.

Here are the relevant snippets of code.

(1) The AppIntent definition:

struct SetBrightnessIntent: AppIntent {
    static var title = LocalizedStringResource("Set Brightness")
    static var description = IntentDescription("Set Glass Display Brightness")

    @Parameter(title: "Level")
    var level: Int?

    static var parameterSummary: some ParameterSummary {
        Summary("Set Brightness to \(\.$level)%")
    }

    func perform() async throws -> some IntentResult {
        guard let level = level else {
            throw $level.needsValueError("Please provide a brightness value")
        }
        if level > 100 || level <= 0 {
            throw $level.needsValueError("Brightness must be between 1 and 100")
        }
        // do stuff with level
        return .result()
    }
}

(2) The AppShortcutsProvider (defined in my iOS app target; there are no other targets):

struct MyAppShortcuts: AppShortcutsProvider {
    static var shortcutTileColor: ShortcutTileColor = .grayBlue

    @AppShortcutsBuilder
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: SetBrightnessIntent(),
            phrases: [
                "set \(.applicationName) brightness to \(\.$level)",
                "set \(.applicationName) brightness to \(\.$level) percent"
            ],
            shortTitle: LocalizedStringResource("Set Glass Brightness"),
            systemImageName: "sun.max"
        )
    }
}

Does anything here look wrong? Is there some magical key that I need to specify in Info.plist to get Siri to recognize the AppShortcutsProvider? On Xcode 16.2 and iOS 18.2 (non-beta).
5 replies · 0 boosts · 1.2k views · Jan ’25
unable to run tensorflow on my machine
Hello! I've been trying to run TensorFlow on my MBA M3. I previously had an Intel Mac and was able to run TensorFlow without any problem. I've been working on a personal project in a directory I made on my previous Mac, which I was running through Jupyter Notebook. Now every time I try to run the code, the kernel dies and I'm unsure what to do. I tried following tutorials, but every tutorial I've seen has had me create a new environment to access Jupyter Notebook, without letting me access the notebooks and files that had already been created. I tried to run the following command in Terminal and received the subsequent errors back:

python -m pip install tensorflow-metal

ERROR: Could not find a version that satisfies the requirement tensorflow-metal (from versions: none)
ERROR: No matching distribution found for tensorflow-metal

I've already installed miniforge, Xcode, and Anaconda onto my computer and wanted some assistance.
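One quick check worth noting: a "no matching distribution" error from pip usually means the active Python interpreter has no matching tensorflow-metal wheel, for example an x86_64 (Intel or Rosetta) Python carried over from the old machine or an unsupported Python version. A small sketch to confirm which interpreter pip would install into (standard library only, no assumptions about the environment):

import platform
import sys

print(sys.executable)       # which Python interpreter is active
print(platform.machine())   # "arm64" on Apple silicon; "x86_64" indicates Intel/Rosetta
print(sys.version)          # wheel availability also depends on the Python version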
2 replies · 0 boosts · 831 views · Dec ’24