Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under the Machine Learning & AI topic

Keras 3 and TensorFlow do not have GPU support on Apple silicon
Hi, I am currently running an LSTM on TensorFlow. However, when I switched from Keras 2 to Keras 3, the running time increased about 10x; it seems there is no GPU acceleration. Here is my setup: batch size = 256, optimizer = Adam, activation = tanh.

Layer (type)                      Output Shape       Param #
=============================================================
input_1 (InputLayer)              [(None, 7, 16)]    0
bidirectional (Bidirectional)     (None, 7, 320)     226560
bidirectional_1 (Bidirectional)   (None, 7, 512)     1181696
bidirectional_2 (Bidirectional)   (None, 256)        656384
dense (Dense)                     (None, 1)          257
=============================================================
Total params: 2064897 (7.88 MB)
Trainable params: 2064897 (7.88 MB)
Non-trainable params: 0 (0.00 Byte)

Training with Keras 3.6.0 + TensorFlow 2.17.0 + tensorflow-metal 1.1.0:

Epoch 1/200
28/681 ━━━━━━━━━━━━━━━━━━━━ 8:13 756ms/step - loss: 0.5901 - mape: 338.6876 - mse: 0.8591

Training with Keras 2.14.0 + TensorFlow 2.14.0 + tensorflow-metal 1.1.0:

Epoch 1/200
681/681 [==============================] - 37s 49ms/step - loss: 3.6345 - mape: 499038.7500 - mse: 34.4148 - val_loss: 3.5452 - val_mape: 41.7964 - val_mse: 32.0133 - lr: 0.0010

Is that because Keras 3 has no GPU support on macOS? Apart from that, if I change the LSTM activation from tanh to sigmoid in Keras 2, it does not get GPU acceleration either. My system is macOS 15.0.1 and the code was running on Python 3.11. I am not sure why these happen. Thanks
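A quick way to check whether the Metal GPU is visible and actually used (an editor's sketch, assuming Keras 3 on the TensorFlow backend with tensorflow-metal installed; not from the original post):

import tensorflow as tf
import keras

print("TF:", tf.__version__, "Keras:", keras.__version__)
# tensorflow-metal registers the GPU as a pluggable device; an empty list here
# means training is falling back to the CPU.
print("GPUs:", tf.config.list_physical_devices("GPU"))

# Log device placement for a tiny op to confirm work is dispatched to the GPU.
tf.debugging.set_log_device_placement(True)
x = tf.random.normal((256, 7, 16))
print(tf.reduce_sum(x))

Note also that Keras documents that the fused (fast) LSTM kernel is only used when the layer keeps its default activations (tanh activation, sigmoid recurrent activation); changing the activation falls back to a much slower generic implementation, which would match the sigmoid observation above.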
Replies: 2 · Boosts: 0 · Views: 1.5k · Activity: Oct ’24
New Vision API - Core ML - "The VNDetectorProcessOption_ScenePrints required option was not found"
I'm trying to run a Core ML model. It is an image classifier generated using:

let parameters = MLImageClassifier.ModelParameters(
    validation: .dataSource(validationDataSource),
    maxIterations: 25,
    augmentation: [],
    algorithm: .transferLearning(
        featureExtractor: .scenePrint(revision: 2),
        classifier: .logisticRegressor
    ))
let model = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir.url),
    parameters: parameters)

I'm trying to run it with the new async Vision API:

let model = try MLModel(contentsOf: modelUrl)
guard let modelContainer = try? CoreMLModelContainer(model: model) else {
    fatalError("The model is missing")
}
let request = CoreMLRequest(model: modelContainer)
let image = NSImage(named: "testImage")!
let cgImage = image.toCGImage()!
let handler = ImageRequestHandler(cgImage)
do {
    let results = try await handler.perform(request)
    print(results)
} catch {
    print("Failed: \(error)")
}

This gives me:

Failed: internalError("Error Domain=com.apple.Vision Code=7 "The VNDetectorProcessOption_ScenePrints required option was not found" UserInfo={NSLocalizedDescription=The VNDetectorProcessOption_ScenePrints required option was not found}")

Please help! Am I missing something?
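For comparison, a sketch of the same classification through the older VNCoreMLRequest path (an editor's sketch reusing the modelUrl and cgImage above); if this path works, the issue is more likely in the new request type than in the model itself:

import Vision
import CoreML

let mlModel = try MLModel(contentsOf: modelUrl)
let vnModel = try VNCoreMLModel(for: mlModel)
let request = VNCoreMLRequest(model: vnModel) { request, error in
    // A classifier model returns VNClassificationObservation values.
    guard let observations = request.results as? [VNClassificationObservation] else { return }
    for observation in observations.prefix(3) {
        print(observation.identifier, observation.confidence)
    }
}
let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
try handler.perform([request])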
Replies: 2 · Boosts: 0 · Views: 501 · Activity: Oct ’24
VNDetectTextRectanglesRequest not detecting text rectangles (includes image)
Hi everyone, I'm trying to use VNDetectTextRectanglesRequest to detect text rectangles in an image. Here's my current code:

guard let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil) else { return }

let textDetectionRequest = VNDetectTextRectanglesRequest { request, error in
    if let error = error {
        print("Text detection error: \(error)")
        return
    }
    guard let observations = request.results as? [VNTextObservation] else {
        print("No text rectangles detected.")
        return
    }
    print("Detected \(observations.count) text rectangles.")
    for observation in observations {
        print(observation.boundingBox)
    }
}
textDetectionRequest.revision = VNDetectTextRectanglesRequestRevision1
textDetectionRequest.reportCharacterBoxes = true

let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
do {
    try handler.perform([textDetectionRequest])
} catch {
    print("Vision request error: \(error)")
}

The request completes without error, but no text rectangles are detected — the observations array is empty (count = 0). Here's a sample image I'm testing with (attached to the original post). I expected VNTextObservation results, but I'm not getting any. Is there something I'm missing in how this API works? Or could it be a limitation of this request or revision? Thanks for any help!
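If rectangle detection keeps coming back empty, a sketch using the newer VNRecognizeTextRequest on the same cgImage may be worth comparing, since it also reports bounding boxes (an editor's sketch, not part of the original post):

import Vision

let textRequest = VNRecognizeTextRequest { request, error in
    guard let observations = request.results as? [VNRecognizedTextObservation] else {
        print("No text recognized.")
        return
    }
    print("Detected \(observations.count) text regions.")
    for observation in observations {
        // Each observation has a normalized bounding box plus string candidates.
        print(observation.boundingBox, observation.topCandidates(1).first?.string ?? "")
    }
}
textRequest.recognitionLevel = .accurate
let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
try handler.perform([textRequest])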
Replies: 2 · Boosts: 0 · Views: 114 · Activity: May ’25
Problems creating a PipelineRegressor from a PyTorch converted model
I am trying to create a Pipeline with three sub-models: a Feature Vectorizer -> an NN regressor converted from PyTorch -> a Feature Extractor (to convert the output tensor to a Double value). The pipeline works fine when I use just a Vectorizer and an Extractor; this is the code:

vectorizer = models.feature_vectorizer.create_feature_vectorizer(
    input_features=["windSpeed", "theoreticalPowerCurve", "windDirection"],  # Multiple input features
    output_feature_name="input"
)
preProc_spec = vectorizer[0]
ct.utils.convert_double_to_float_multiarray_type(preProc_spec)

extractor = models.array_feature_extractor.create_array_feature_extractor(
    input_features=[("input", datatypes.Array(3,))],  # Multiple input features
    output_name="output",
    extract_indices=1
)
ct.utils.convert_double_to_float_multiarray_type(extractor)

pipeline_network = pipeline.PipelineRegressor(
    input_features=["windSpeed", "theoreticalPowerCurve", "windDirection"],
    output_features=["output"]
)
pipeline_network.add_model(preProc_spec)
pipeline_network.add_model(extractor)
ct.utils.convert_double_to_float_multiarray_type(pipeline_network.spec)
ct.utils.save_spec(pipeline_network.spec, "Final.mlpackage")

This model works OK. I created a regression NN using PyTorch and converted it to Core ML like this:

import torch
import torch.nn as nn

class TurbinePowerModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(3, 4)
        self.activation1 = nn.ReLU()
        #self.linear2 = nn.Linear(5, 4)
        #self.activation2 = nn.ReLU()
        self.output = nn.Linear(4, 1)

    def forward(self, x):
        #x = F.normalize(x, dim = 0)
        x = self.linear1(x)
        x = self.activation1(x)
        # x = self.linear2(x)
        # x = self.activation2(x)
        x = self.output(x)
        return x

    def forward_inference(self, windSpeed, theoreticalPowerCurve, windDirection):
        input_tensor = torch.tensor([windSpeed, theoreticalPowerCurve, windDirection], dtype=torch.float32)
        return self.forward(input_tensor)

model = torch.load('TurbinePowerRegression-1layer.pt', weights_only=False)

import coremltools as ct
print(ct.__version__)

import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv('T1_clean.csv', delimiter=';')
X = df[['WindSpeed', 'TheoreticalPowerCurve', 'WindDirection']]
y = df[['ActivePower']]

scaler = StandardScaler()
X = scaler.fit_transform(X)
y = scaler.fit_transform(y)

X_tensor = torch.tensor(X, dtype=torch.float32)
y_tensor = torch.tensor(y, dtype=torch.float32)

traced_model = torch.jit.trace(model, X_tensor[0])
mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(name="input", shape=X_tensor[0].shape)],
    classifier_config=None  # Optional, for classification tasks
)
mlmodel.save("TurbineBase.mlpackage")

This model has a MultiArray (Float32, 3) as input and a MultiArray (Float32, 1) as output. When I try to include it in the middle of the pipeline (adjusting the output and input types of the other models accordingly), the process runs OK, but I get an error when opening the generated model in Xcode. What is missing from the models? How can I set or adjust this metadata properly? Thanks!
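On the metadata question, a minimal sketch of attaching descriptions to the converted model before saving (an editor's sketch; the description strings and the output feature name are placeholders, not taken from the original post):

import coremltools as ct

mlmodel = ct.models.MLModel("TurbineBase.mlpackage")

# Metadata that Xcode shows in the model's preview pane.
mlmodel.short_description = "NN regressor predicting turbine active power"
mlmodel.input_description["input"] = "windSpeed, theoreticalPowerCurve, windDirection (Float32, shape 3)"
# Replace "output" with the model's actual output feature name (hypothetical here).
mlmodel.output_description["output"] = "Predicted active power (Float32, shape 1)"

mlmodel.save("TurbineBase.mlpackage")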
Replies: 1 · Boosts: 0 · Views: 586 · Activity: Dec ’24
Threading issues when using debugger
Hi, I am modifying the sample camera app that is here: https://developer.apple.com/tutorials/sample-apps/capturingphotos-camerapreview ...

In processPreviewImages, I am using the Vision APIs to generate a segmentation mask for a person/object, then compositing that person onto a different background (with some other filtering). The filtering and compositing is done via Core Image. At the end, I convert the CIImage to a CGImage and then to a SwiftUI Image.

When I run it on my iPhone, it works fine and has not crashed. When I run it on the iPhone with the debugger, it crashes within a few seconds with EXC_BAD_ACCESS in:

libRPAC.dylib`std::__1::__hash_table<std::__1::__hash_value_type<long, qos_info_t>, std::__1::__unordered_map_hasher<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::hash, std::__1::equal_to, true>, std::__1::__unordered_map_equal<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::equal_to, std::__1::hash, true>, std::__1::allocator<std::__1::__hash_value_type<long, qos_info_t>>>::__emplace_unique_key_args<long, std::__1::piecewise_construct_t const&, std::__1::tuple<long const&>, std::__1::tuple<>>

It had previously been working fine with the debugger, so I'm not sure what has changed. Is there a difference in how the Vision APIs are executed if the debugger is attached vs. not?
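For reference, a condensed sketch of the segmentation-and-composite step the post describes (an editor's sketch, not the sample app's actual code; names are illustrative):

import Vision
import CoreImage
import CoreImage.CIFilterBuiltins
import CoreVideo

func composite(person pixelBuffer: CVPixelBuffer, over background: CIImage, context: CIContext) throws -> CGImage? {
    // 1. Person segmentation mask from Vision.
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
    guard let maskBuffer = request.results?.first?.pixelBuffer else { return nil }

    // 2. Composite the person over the new background with Core Image.
    let original = CIImage(cvPixelBuffer: pixelBuffer)
    var mask = CIImage(cvPixelBuffer: maskBuffer)
    // Scale the (smaller) mask up to the camera frame's size.
    mask = mask.transformed(by: CGAffineTransform(
        scaleX: original.extent.width / mask.extent.width,
        y: original.extent.height / mask.extent.height))
    let blend = CIFilter.blendWithMask()
    blend.inputImage = original
    blend.backgroundImage = background
    blend.maskImage = mask
    guard let output = blend.outputImage else { return nil }

    // 3. Render to a CGImage; the CIContext should be created once and reused per frame.
    return context.createCGImage(output, from: output.extent)
}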
Replies: 0 · Boosts: 0 · Views: 58 · Activity: Apr ’25
iOS 18: Siri not passing string parameters to AppIntents if the string is a question
Xcode Version 16.0 (16A242d), iOS 18, Swift.

There seems to be a behavior change on iOS 18 when using AppShortcuts and AppIntents to pass string parameters. After Siri prompts for a string property's requestValueDialog, if the user makes a statement, the string is passed. If the user's statement is a question, however, the string is not sent to the AppIntent and instead Siri attempts to answer that question. Example code:

struct MyAppNameShortcuts: AppShortcutsProvider {
    @AppShortcutsBuilder
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: AskQuestionIntent(),
            phrases: [
                "Ask \(.applicationName) a question",
            ]
        )
    }
}

struct AskQuestionIntent: AppIntent {
    static var title: LocalizedStringResource = .init(stringLiteral: "Ask a question")
    static var openAppWhenRun: Bool = false
    static var parameterSummary: some ParameterSummary {
        Summary("Search for \(\.$query)")
    }

    @Dependency
    private var apiClient: MockApiClient

    @Parameter(title: "Query", requestValueDialog: .init(stringLiteral: "What would you like to ask?"))
    var query: String

    // perform is not called if the user asks a question such as "What color is the moon?"
    // in response to requestValueDialog. On iOS 17, the same string is passed through.
    @MainActor
    func perform() async throws -> some IntentResult & ProvidesDialog & ShowsSnippetView {
        print("Query is: \(query)")
        let queryResult = try await apiClient.askQuery(queryString: query)
        let dialog = IntentDialog(
            full: .init(stringLiteral: queryResult.answer),
            supporting: .init(stringLiteral: "The answer to \(queryResult.question) is...")
        )
        let view = SiriAnswerView(queryResult: queryResult)
        return .result(dialog: dialog, view: view)
    }
}

Given the above mock code, on iOS 17:

1. "Hey Siri, Ask (AppName) a question"
2. Siri responds "What would you like to ask?"
3. Say "What color is the moon?"
4. The string "What color is the moon?" is passed to the AppIntent.

On iOS 18:

1. "Hey Siri, Ask (AppName) a question"
2. Siri responds "What would you like to ask?"
3. Say "What color is the moon?"
4. Siri answers the question "What color is the moon?" itself.
5. Follow the steps above again and instead reply "Moon"; "Moon" is passed to the AppIntent.

Basically, any interrogative string parameter seems to be intercepted and handled by Siri proper rather than passed to the provided AppIntent on iOS 18.
Replies: 1 · Boosts: 0 · Views: 871 · Activity: Oct ’24
CoreML Model Conversion Help
I’m trying to follow Apple’s “WWDC24: Bring your machine learning and AI models to Apple Silicon” session to convert the Mistral-7B-Instruct-v0.2 model into a Core ML package, but I’ve run into a roadblock that I can’t seem to overcome. I’ve uploaded my full conversion script here for reference: https://pastebin.com/T7Zchzfc

When I run the script, it progresses through tracing and MIL conversion but then fails at the backend_mlprogram stage with this error: https://pastebin.com/fUdEzzKM

The core of the error is:

ValueError: Op "keyCache_tmp" (op_type: identity) Input x="keyCache" expects list, tensor, or scalar but got state[tensor[1,32,8,2048,128,fp16]]

I’ve registered my KV-cache buffers in a StatefulMistralWrapper subclass of nn.Module, matching the keyCache and valueCache state names in my ct.StateType definitions, but Core ML’s backend pass reports the state tensor as an invalid input.

I’m using Core ML Tools 8.3.0 on Python 3.9.6, targeting iOS 18, and forcing CPU conversion (MPS wasn’t available). Any pointers on how to satisfy the handle_unused_inputs pass or properly declare/cache state for GQA models in Core ML would be greatly appreciated!

Thanks in advance for your help,
Usman Khan
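For reference, a minimal sketch of how KV-cache states are typically declared for a stateful conversion (an editor's sketch; the shapes, names, input signature, and the traced_model variable are assumptions based on the error message, not the poster's actual script):

import numpy as np
import coremltools as ct

# Cache shape taken from the error message: [1, 32, 8, 2048, 128] fp16.
cache_shape = (1, 32, 8, 2048, 128)

states = [
    ct.StateType(wrapped_type=ct.TensorType(shape=cache_shape, dtype=np.float16), name="keyCache"),
    ct.StateType(wrapped_type=ct.TensorType(shape=cache_shape, dtype=np.float16), name="valueCache"),
]

mlmodel = ct.convert(
    traced_model,  # torch.jit.trace of the StatefulMistralWrapper (hypothetical variable)
    inputs=[ct.TensorType(name="inputIds", shape=(1, 64), dtype=np.int32)],  # hypothetical input
    states=states,
    minimum_deployment_target=ct.target.iOS18,
    compute_units=ct.ComputeUnit.CPU_ONLY,
)

The state names must match buffers created with register_buffer on the wrapped module, and the caches generally need to be read and written only through in-place ops (for example index_copy_ or copy_) so the converter keeps them as states rather than treating them as ordinary tensor inputs.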
Replies: 0 · Boosts: 0 · Views: 144 · Activity: May ’25
Switched region from China to US, but still unable to use Apple Intelligence
When Apple Intelligence launched, there were terms and conditions for activating it. To activate Apple Intelligence in China, the iPhone must be a non-Chinese model, meaning it was not purchased in mainland China (Hong Kong and Macau not included), and a Chinese Apple Account also cannot activate Apple Intelligence. To meet these requirements, I traveled to Hong Kong and purchased an iPhone 16 Pro Max, and I decided to switch my account region from China to the United States. I started the switch on October 19 at 2:00 am CST (3:00 pm Shanghai time). As of October 24, 8:30 am CST (9:30 pm Shanghai time), I still can't join the Apple Intelligence waitlist. I have also upgraded my phone to iOS 18.2. I contacted Apple Support using my Chinese phone number and was transferred to the Philippines Apple Support team, which hasn't helped me get anywhere: they keep saying that the iOS beta on my phone has a problem. But when I log out of this Apple ID and sign in with another Apple ID from the UK, I can successfully enable Apple Intelligence. What does this say? It shows that my Apple Account has a problem - it didn't switch successfully to the United States store. And the Philippine Apple Support team keeps asking me to restore my iPhone. I've told them that I have used several Apple Accounts from the United States and the United Kingdom that can successfully enable Apple Intelligence, but they insist my account doesn't have any problem. Apple, please solve this problem! Anyone who is facing this kind of problem, please share it here. Cheers!
Replies: 1 · Boosts: 0 · Views: 3.3k · Activity: Oct ’24
My app crashes in the Portrait private framework
Incident Identifier: 4C22F586-71FB-4644-B823-A4B52D158057
CrashReporter Key:   adc89b7506c09c2a6b3a9099cc85531bdaba9156
Hardware Model:      Mac16,10
Process:             PRISMLensCore [16561]
Path:                /Applications/PRISMLens.app/Contents/Resources/app.asar.unpacked/node_modules/core-node/PRISMLensCore.app/PRISMLensCore
Identifier:          com.prismlive.camstudio
Version:             (null) ((null))
Code Type:           ARM-64
Parent Process:      ? [16560]
Date/Time:           (null)
OS Version:          macOS 15.4 (24E5228e)
Report Version:      104

Exception Type:  EXC_CRASH (SIGABRT)
Exception Codes: 0x00000000 at 0x0000000000000000
Crashed Thread:  34

Application Specific Information:
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[__NSArrayM insertObject:atIndex:]: object cannot be nil'

Thread 34 Crashed:
0   CoreFoundation           0x000000018ba4dde4 0x18b960000 + 974308 (__exceptionPreprocess + 164)
1   libobjc.A.dylib          0x000000018b512b60 0x18b4f8000 + 109408 (objc_exception_throw + 88)
2   CoreFoundation           0x000000018b97e69c 0x18b960000 + 124572 (-[__NSArrayM insertObject:atIndex:] + 1276)
3   Portrait                 0x0000000257e16a94 0x257da3000 + 473748 (-[PTMSRResize addAdditionalOutput:] + 604)
4   Portrait                 0x0000000257de91c0 0x257da3000 + 287168 (-[PTEffectRenderer initWithDescriptor:metalContext:useHighResNetwork:faceAttributesNetwork:humanDetections:prevTemporalState:asyncInitQueue:sharedResources:] + 6204)
5   Portrait                 0x0000000257dab21c 0x257da3000 + 33308 (__33-[PTEffect updateEffectDelegate:]_block_invoke.241 + 164)
6   libdispatch.dylib        0x000000018b739b2c 0x18b738000 + 6956 (_dispatch_call_block_and_release + 32)
7   libdispatch.dylib        0x000000018b75385c 0x18b738000 + 112732 (_dispatch_client_callout + 16)
8   libdispatch.dylib        0x000000018b742350 0x18b738000 + 41808 (_dispatch_lane_serial_drain + 740)
9   libdispatch.dylib        0x000000018b742e2c 0x18b738000 + 44588 (_dispatch_lane_invoke + 388)
10  libdispatch.dylib        0x000000018b74d264 0x18b738000 + 86628 (_dispatch_root_queue_drain_deferred_wlh + 292)
11  libdispatch.dylib        0x000000018b74cae8 0x18b738000 + 84712 (_dispatch_workloop_worker_thread + 540)
12  libsystem_pthread.dylib  0x000000018b8ede64 0x18b8eb000 + 11876 (_pthread_wqthread + 292)
13  libsystem_pthread.dylib  0x000000018b8ecb74 0x18b8eb000 + 7028 (start_wqthread + 8)
Replies: 1 · Boosts: 0 · Views: 59 · Activity: Mar ’25
Apple Intelligence
So recently I updated from 18.1 to the 18.2 beta on my 15 Pro Max and lost access to Apple Intelligence. I was excited to use Image Playground in the 18.2 update, but it needs Apple Intelligence, and my phone went back to the older Siri version. What are the solutions now? I do not have any access to Apple Intelligence even though I have iOS 18.2.
Replies: 0 · Boosts: 0 · Views: 331 · Activity: Nov ’24
linear_quantize_activations taking 90 minutes + on MacBook Air M1 2020
In my quantization code, the line:

compressed_model_a8 = cto.coreml.experimental.linear_quantize_activations(
    model,
    activation_config,
    [{'img': np.random.randn(1, 13, 1024, 1024)}]
)

has taken 90 minutes to run so far and is still not completed. From debugging, I can see that it is stuck on line 261 in _model_debugger.py:

model = ct.models.MLModel(
    cloned_spec,
    weights_dir=self.weights_dir,
    compute_units=compute_units,
    skip_model_load=False,  # Don't skip model load as we need model prediction to get activations range.
)

Is this expected behaviour? Would it be quicker to run on another computer with more RAM?
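One way to separate model-load time from calibration time (an editor's sketch; the package path is a placeholder):

import time
import numpy as np
import coremltools as ct

# Time a plain load and a single prediction of the unquantized model. Activation
# calibration loads the model and runs predictions repeatedly, so if this step is
# already slow, linear_quantize_activations will be slower still.
start = time.perf_counter()
mlmodel = ct.models.MLModel("model.mlpackage")  # hypothetical path
print("load:", time.perf_counter() - start, "s")

sample = {"img": np.random.randn(1, 13, 1024, 1024).astype(np.float32)}
start = time.perf_counter()
mlmodel.predict(sample)
print("predict:", time.perf_counter() - start, "s")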
Replies: 1 · Boosts: 0 · Views: 58 · Activity: Mar ’25
Resize Image Playground sheet
When using the imagePlaygroundSheet modifier in SwiftUI, the system presents an image playground at a fixed size. Especially on macOS, this modal is rather small and doesn't use the available space efficiently. Is there a way to make the modal bigger, or to allow the user to resize the dialog? I tried presentationDetents, but this would need to be applied to the content of the sheet, which is provided by the system... I guess this question applies to other system-provided sheets, like the photo picker, as well.
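For context, a minimal usage sketch of the modifier in question (an editor's sketch assuming the concept-string variant of imagePlaygroundSheet; the view and concept are illustrative):

import SwiftUI
import ImagePlayground

struct PlaygroundButton: View {
    @State private var showPlayground = false
    @State private var createdImageURL: URL?

    var body: some View {
        Button("Create image") { showPlayground = true }
            // The sheet is presented at a system-chosen size; there is no
            // documented parameter here for resizing it.
            .imagePlaygroundSheet(isPresented: $showPlayground,
                                  concept: "a lighthouse at sunset") { url in
                createdImageURL = url
            }
    }
}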
Replies: 2 · Boosts: 0 · Views: 705 · Activity: Jan ’25
Max 16k images for Image Classifier training????
I'm hitting a limit when trying to train an Image Classifier. It fails at about 16k images (in line with the error info) with the error:

IOSurface creation failed: e00002be parentID: 00000000 properties: {
    IOSurfaceAllocSize = 529984;
    IOSurfaceBytesPerElement = 4;
    IOSurfaceBytesPerRow = 1472;
    IOSurfaceElementHeight = 1;
    IOSurfaceElementWidth = 1;
    IOSurfaceHeight = 360;
    IOSurfaceName = CoreVideo;
    IOSurfaceOffset = 0;
    IOSurfacePixelFormat = 1111970369;
    IOSurfacePlaneComponentBitDepths = ( 8, 8, 8, 8 );
    IOSurfacePlaneComponentNames = ( 4, 3, 2, 1 );
    IOSurfacePlaneComponentRanges = ( 1, 1, 1, 1 );
    IOSurfacePurgeWhenNotInUse = 1;
    IOSurfaceSubsampling = 1;
    IOSurfaceWidth = 360;
} (likely per client IOSurface limit of 16384 reached)

I feel like I was able to use more images than this before upgrading to Sonoma, but I don't have the receipts... Is there a way around this? I have oodles of spare memory on my machine - it's using about 16 GB of 64 when it crashes. The code to create the model is:

let parameters = MLImageClassifier.ModelParameters(
    validation: .dataSource(validationDataSource),
    maxIterations: 25,
    augmentation: [],
    algorithm: .transferLearning(
        featureExtractor: .scenePrint(revision: 2),
        classifier: .logisticRegressor
    ))
let model = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir.url),
    parameters: parameters)

I have also tried the same training source in Create ML; it runs through 'extracting features' and crashes at about 16k images processed. Thank you
Replies: 1 · Boosts: 0 · Views: 482 · Activity: Oct ’24
Apple Intelligence
I recently updated my iPhone to iOS 18.2 and the Playground app appeared, but I couldn't open it because it hadn't been released yet. Today it was released and I responded to the notification, but when I went to look for the app to test it, I couldn't find it anymore. How do I get the Playground app back from Apple Intelligence? How do I download the app again?
Replies: 0 · Boosts: 0 · Views: 340 · Activity: Nov ’24