Explore the power of machine learning and Apple Intelligence within apps. Discuss feature integration, share best practices, and explore what's possible for your app here.

Posts under the Machine Learning & AI topic

Keras 3 and TensorFlow: no GPU support on Apple silicon?
Hi, I am currently running an LSTM model on TensorFlow. However, when I switched from Keras 2 to Keras 3, the running time increased about 10x; it seems there is no GPU acceleration. Here is my setup: batch size = 256, optimizer = adam, activation = tanh.

    _________________________________________________________________
    Layer (type)                       Output Shape        Param #
    =================================================================
    input_1 (InputLayer)               [(None, 7, 16)]     0
    bidirectional (Bidirectional)      (None, 7, 320)      226560
    bidirectional_1 (Bidirectional)    (None, 7, 512)      1181696
    bidirectional_2 (Bidirectional)    (None, 256)         656384
    dense (Dense)                      (None, 1)           257
    =================================================================
    Total params: 2064897 (7.88 MB)
    Trainable params: 2064897 (7.88 MB)
    Non-trainable params: 0 (0.00 Byte)
    _________________________________________________________________

Training status with Keras 3.6.0 + TensorFlow 2.17.0 + tensorflow-metal 1.1.0:

    Epoch 1/200
    28/681 ━━━━━━━━━━━━━━━━━━━━ 8:13 756ms/step - loss: 0.5901 - mape: 338.6876 - mse: 0.8591

Training status with Keras 2.14.0 + TensorFlow 2.14.0 + tensorflow-metal 1.1.0:

    Epoch 1/200
    681/681 [==============================] - 37s 49ms/step - loss: 3.6345 - mape: 499038.7500 - mse: 34.4148 - val_loss: 3.5452 - val_mape: 41.7964 - val_mse: 32.0133 - lr: 0.0010

Is that because Keras 3 has no GPU support on macOS? Apart from that, if I change the LSTM activation from tanh to sigmoid in Keras 2, GPU support is lost as well. My system is macOS 15.0.1 and the code runs on Python 3.11. I am not sure why these happen. Thanks.
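A quick way to confirm whether tensorflow-metal is exposing the GPU at all is to query TensorFlow's device list and enable placement logging; a minimal diagnostic sketch (standard TensorFlow APIs, not from the original post):

    import tensorflow as tf

    # With tensorflow-metal installed, this should list a GPU entry such as
    # PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU').
    print(tf.config.list_physical_devices("GPU"))

    # Log device placement so each op reports whether it ran on the GPU
    # or silently fell back to the CPU.
    tf.debugging.set_log_device_placement(True)

If the GPU is listed under Keras 2 but the same script shows CPU placement under Keras 3, that points at the Keras 3 backend rather than the Metal plugin.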
Replies: 2 · Boosts: 0 · Views: 1.5k · Oct ’24
AI and ML
Hello. I am looking to hire a game developer for a card game called Baloot. My question is: can the developer implement an AI for when the computer is playing, so that the computer improves its skill level on its own, without any interaction? 🌹
Replies: 0 · Boosts: 0 · Views: 60 · Jun ’25
Starting/restarting SFSpeechRecognizer?
Hello all, I'm working on a project that involves listening to a person speak from a script, and I want to stop and then restart the recognitionTask between sections so I don't run afoul of keeping the recognitionTask running longer than it needs to. I'd also like to flush the current input between sections so the input from the previous section doesn't roll over into the next one. This is based on the sample code for SFSpeechRecognizer, so there's a chance I might be misunderstanding something. Here's my code:

    private func restartRecording() {
        let inputNode = audioEngine.inputNode
        audioEngine.stop()
        inputNode.removeTap(onBus: 0)
        recognitionRequest?.endAudio()
        recordingStarted = false
        recognitionTask?.cancel()
        do {
            try startRecording()
        } catch {
            print("Oopsie.")
        }
    }

When I run it, the recognition task doesn't restart. Any ideas?
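For reference, a minimal sketch of what the startRecording() counterpart usually looks like, modeled on Apple's SpokenWord sample (the audioEngine, speechRecognizer, recognitionRequest, recognitionTask, and recordingStarted properties are assumed to match that sample); creating a fresh request and task on every start is what keeps one section's audio from rolling into the next:

    import AVFoundation
    import Speech

    // Assumed context, matching the SpokenWord sample:
    //   let audioEngine = AVAudioEngine()
    //   let speechRecognizer = SFSpeechRecognizer()!
    //   var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
    //   var recognitionTask: SFSpeechRecognitionTask?
    //   var recordingStarted = false

    private func startRecording() throws {
        // Drop any stale task before starting a new section.
        recognitionTask?.cancel()
        recognitionTask = nil

        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.record, mode: .measurement, options: .duckOthers)
        try session.setActive(true, options: .notifyOthersOnDeactivation)

        // A fresh request per section, so earlier audio cannot roll over.
        let request = SFSpeechAudioBufferRecognitionRequest()
        request.shouldReportPartialResults = true
        recognitionRequest = request

        let inputNode = audioEngine.inputNode
        let format = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            request.append(buffer)
        }

        recognitionTask = speechRecognizer.recognitionTask(with: request) { result, error in
            // Handle partial and final results for the current section here.
        }

        audioEngine.prepare()
        try audioEngine.start()
        recordingStarted = true
    }

A usage note: cancel() discards any in-flight final result for the section; if the last words of a section matter, it may behave better to let recognitionRequest?.endAudio() finish the task and start the next section from the task's completion callback.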
Replies: 0 · Boosts: 0 · Views: 559 · Dec ’24
Train adapter with tool calling
The documentation on adapter training lacks any details about training on a dataset with tool calling, and the page about tool calling itself only explains how to use it from Swift, without any internal details useful for training. The question is: what should the schema look like to include tool calling in a dataset?
Replies: 1 · Boosts: 0 · Views: 211 · Jun ’25
Keras on Mac (M4) is giving inconsistent results compared to running on NVIDIA GPUs
I have seen inconsistent results for my Colab machine learning notebooks running locally on a Mac M4, compared to running the same notebook code on either a T4 (in Colab) or an RTX 3090 (locally). To illustrate the problem I have set up a notebook that implements two simple CNN models that solve the Fashion-MNIST problem: https://colab.research.google.com/drive/11BhtHhN079-BWqv9QvvcSD9U4mlVSocB?usp=sharing

For the good model with 2M parameters I get the following results:

    T4 (Colab, JAX):                          Test accuracy: 0.925
    RTX 3090 (local PC via ssh tunnel, JAX):  Test accuracy: 0.925
    Mac M4 (local, JAX):                      Test accuracy: 0.893
    Mac M4 (local, TensorFlow):               Test accuracy: 0.893

That is, I see a significant drop in accuracy when I run on the Mac M4 compared to the NVIDIA machines, and it seems to be independent of the backend. I however do not know how to pinpoint this to either Keras or Apple's Metal implementation. I have reported this to Keras: https://colab.research.google.com/drive/11BhtHhN079-BWqv9QvvcSD9U4mlVSocB?usp=sharing — but as this can be (likely is?) an Apple Metal issue, I wanted to report it here as well. On the Mac I am running the following Python libraries:

    keras            3.9.1
    tensorflow       2.19.0
    tensorflow-metal 1.2.0
    jax              0.5.3
    jax-metal        0.1.1
    jaxlib           0.5.3
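Before pinning the gap on Metal, it can help to rule out run-to-run randomness in initialization and shuffling; a small sketch using standard Keras 3 utilities (not from the original notebook), with the determinism call applying only to the TensorFlow backend:

    import keras
    import tensorflow as tf

    # Seed Python, NumPy, and the backend RNGs in one call so repeated runs
    # start from identical weights and data shuffles.
    keras.utils.set_random_seed(42)

    # On the TensorFlow backend, additionally request deterministic kernels;
    # ops without a deterministic implementation will raise instead of
    # silently varying between runs.
    tf.config.experimental.enable_op_determinism()

If the M4 result stays near 0.893 across several seeds while the NVIDIA runs stay near 0.925, that points to a numerical difference in the Metal backends rather than ordinary training variance.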
Replies: 0 · Boosts: 0 · Views: 79 · Mar ’25
Writing tools options
Hi team, we have implemented Writing Tools inside a WebView that allows users to type content in a textarea. When the "Show Writing Tools" button is clicked, the AI-powered editor opens, and after clicking the "Rewrite" button the AI modifies the text. However, when clicking the "Replace" button, the rewritten text does not update the original textarea. Kindly check and help me.

    showButton.addTarget(self, action: #selector(showWritingTools(_:)), for: .touchUpInside)

    @available(iOS 18.2, *)
    optional func showWritingTools(_ sender: Any)

Note: the same case works in a TextView.
Replies: 0 · Boosts: 0 · Views: 134 · Mar ’25
Max 16k images for Image Classifier training?
I'm hitting a limit when trying to train an image classifier. It fails at about 16k images (in line with the error info) with this error:

    IOSurface creation failed: e00002be parentID: 00000000 properties: {
        IOSurfaceAllocSize = 529984;
        IOSurfaceBytesPerElement = 4;
        IOSurfaceBytesPerRow = 1472;
        IOSurfaceElementHeight = 1;
        IOSurfaceElementWidth = 1;
        IOSurfaceHeight = 360;
        IOSurfaceName = CoreVideo;
        IOSurfaceOffset = 0;
        IOSurfacePixelFormat = 1111970369;
        IOSurfacePlaneComponentBitDepths = ( 8, 8, 8, 8 );
        IOSurfacePlaneComponentNames = ( 4, 3, 2, 1 );
        IOSurfacePlaneComponentRanges = ( 1, 1, 1, 1 );
        IOSurfacePurgeWhenNotInUse = 1;
        IOSurfaceSubsampling = 1;
        IOSurfaceWidth = 360;
    } (likely per client IOSurface limit of 16384 reached)

I feel like I was able to use more images than this before upgrading to Sonoma, but I don't have the receipts... Is there a way around this? I have oodles of spare memory on my machine; it's using about 16 GB of 64 GB when it crashes. The code that creates the model is:

    let parameters = MLImageClassifier.ModelParameters(
        validation: .dataSource(validationDataSource),
        maxIterations: 25,
        augmentation: [],
        algorithm: .transferLearning(
            featureExtractor: .scenePrint(revision: 2),
            classifier: .logisticRegressor
        )
    )
    let model = try MLImageClassifier(
        trainingData: .labeledDirectories(at: trainingDir.url),
        parameters: parameters
    )

I have also tried the same training source in Create ML; it runs through "extracting features" and crashes at about 16k images processed. Thank you.
Replies: 1 · Boosts: 0 · Views: 482 · Oct ’24
Getting ValueError: Categorical Cross Entropy loss layer input (Identity) must be a softmax layer output.
I am working on the updatable neural network classifier example from the coremltools.readme.io updatable -> neural network section (https://coremltools.readme.io/docs/updatable-neural-network-classifier-on-mnist-dataset). I am using the same code, but I get an error saying that coremltools.converters.keras.convert does not exist; I know this can be a coremltools version issue. Right now I am using coremltools 6.2. I converted the model to .mlmodel with .convert only, and it converted successfully. But I hit an error in the make_updatable function saying the loss layer input must be a softmax output. From the coremltools API reference I found this is because the layer type is softmaxND when it should be softmax. The problem is that when I convert from a Keras Sequential model to a Core ML model, the layer name and type change, and softmax becomes softmaxND. Has anyone faced this issue?

If I execute builder.inspect_layers(last=4) I get this output:

    [Id: 32], Name: sequential/dense_1/Softmax (Type: softmaxND)
              Updatable: False
              Input blobs: ['sequential/dense_1/MatMul']
              Output blobs: ['Identity']
    [Id: 31], Name: sequential/dense_1/MatMul (Type: batchedMatmul)
              Updatable: False
              Input blobs: ['sequential/dense/Relu']
              Output blobs: ['sequential/dense_1/MatMul']
    [Id: 30], Name: sequential/dense/Relu (Type: activation)
              Updatable: False
              Input blobs: ['sequential/dense/MatMul']
              Output blobs: ['sequential/dense/Relu']

In make_updatable, when I execute

    builder.set_categorical_cross_entropy_loss(name='lossLayer', input='Identity')

I get this error:

    ValueError: Categorical Cross Entropy loss layer input (Identity) must be a softmax layer output.
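A commonly suggested workaround for this mismatch is to delete the trailing softmaxND layer and append a classic softmax layer wired to the same blobs, so that set_categorical_cross_entropy_loss sees what it expects. A hedged sketch against the coremltools 6.x builder API; the file name is hypothetical and the blob names are taken from the inspect_layers output above, so verify both against your own spec:

    import coremltools as ct
    from coremltools.models.neural_network import NeuralNetworkBuilder

    spec = ct.utils.load_spec("classifier.mlmodel")  # hypothetical file name
    builder = NeuralNetworkBuilder(spec=spec)

    # Drop the softmaxND layer the converter emitted as the last layer.
    del builder.nn_spec.layers[-1]

    # Re-add a classic 'softmax' layer producing the same output blob.
    builder.add_softmax(name="softmax",
                        input_name="sequential/dense_1/MatMul",
                        output_name="Identity")

    # Now the loss layer input is a softmax output, as the error demands.
    builder.set_categorical_cross_entropy_loss(name="lossLayer", input="Identity")

    ct.utils.save_spec(builder.spec, "classifier_updatable.mlmodel")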
Replies: 2 · Boosts: 0 · Views: 1.4k · Nov ’24
AFM support for RAG Applications
I'm excited about the development possibilities presented by Apple Intelligence and have begun imagining retrieval-augmented generation (RAG) use cases. Writing Tools suggests this is possible, but I have not seen any direct statements from Apple regarding the use of AFMs for RAG applications. Have any references to APIs or sample code for RAG applications been published?
Replies: 4 · Boosts: 0 · Views: 510 · Oct ’24
missing CreateML frameworks
I have reinstalled everything, including the command line tools, but the CreateML frameworks fail to install. I need the framework so that I can train my auto-categorization model, which predicts a category based on descriptions. I need the framework because I want to use revision 4. Please advise on how to proceed.
Replies: 4 · Boosts: 0 · Views: 658 · Mar ’25
Creating .mlmodel with Create ML Components
I have rewatched the WWDC22 session a few times, but I still don't fully understand how to get a .mlmodel file from Create ML Components. The banana-ripeness example is cool, but what needs to be added to actually get a .mlmodel as output? Is there a full sample somewhere for this type of modular project? The code is from https://developer.apple.com/videos/play/wwdc2022/10019:

    import CoreImage
    import CreateMLComponents

    struct ImageRegressor {
        static let trainingDataURL = URL(fileURLWithPath: "~/Desktop/bananas")
        static let parametersURL = URL(fileURLWithPath: "~/Desktop/parameters")

        static func train() async throws -> some Transformer<CIImage, Float> {
            let estimator = ImageFeaturePrint()
                .appending(LinearRegressor())

            // File name example: banana-5.jpg
            let data = try AnnotatedFiles(labeledByNamesAt: trainingDataURL,
                                          separator: "-", index: 1, type: .image)
                .mapFeatures(ImageReader.read)
                .mapAnnotations({ Float($0)! })

            let (training, validation) = data.randomSplit(by: 0.8)
            let transformer = try await estimator.fitted(to: training, validateOn: validation)
            try estimator.write(transformer, to: parametersURL)
            return transformer
        }
    }

I have tried to run it in a macOS command-line app and in SwiftUI, but the most I got as output was a .pkg containing "pipeline.json, parameters, optimizer.json, optimizer".
Replies: 3 · Boosts: 0 · Views: 475 · Mar ’25
Can't disable Writing Tools for SwiftUI TextField
I'm trying to disable Writing Tools for a specific TextField using .writingToolsBehavior(.disabled), but when running the app on my iPhone 16 Pro with Apple Intelligence enabled, I can still use Writing Tools on the text box. I also see no difference with .writingToolsBehavior(.limited). Is there something I'm doing wrong, or is this a bug? Sample code below:

    import SwiftUI

    struct ContentView: View {
        @State var text = ""

        var body: some View {
            VStack {
                TextField("Enter Text", text: $text)
                    .writingToolsBehavior(.disabled)
            }
            .padding()
        }
    }

    #Preview {
        ContentView()
    }
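If the SwiftUI modifier has no effect on a device, one way to narrow it down is to wrap a UIKit text field, where the equivalent control is a plain property; a hedged sketch via UIViewRepresentable (assumes iOS 18's UITextField.writingToolsBehavior works as documented; PlainTextField is a hypothetical name, not an API):

    import SwiftUI
    import UIKit

    // Hypothetical wrapper: a UITextField with Writing Tools turned off
    // at the UIKit level, for comparison with the SwiftUI modifier.
    struct PlainTextField: UIViewRepresentable {
        @Binding var text: String

        func makeUIView(context: Context) -> UITextField {
            let field = UITextField()
            field.borderStyle = .roundedRect
            field.writingToolsBehavior = .none   // iOS 18+, UIKit analogue of .disabled
            field.addTarget(context.coordinator,
                            action: #selector(Coordinator.changed(_:)),
                            for: .editingChanged)
            return field
        }

        func updateUIView(_ uiView: UITextField, context: Context) {
            uiView.text = text
        }

        func makeCoordinator() -> Coordinator { Coordinator(self) }

        final class Coordinator: NSObject {
            var parent: PlainTextField
            init(_ parent: PlainTextField) { self.parent = parent }
            @objc func changed(_ sender: UITextField) { parent.text = sender.text ?? "" }
        }
    }

If Writing Tools still appears with the UIKit property set, that would point to a system bug rather than a SwiftUI modifier issue.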
Replies: 4 · Boosts: 0 · Views: 1.2k · Nov ’24
Vision framework OCR missing Swedish support?
WWDC 2024 mentioned that the OCR feature in the Vision framework has support for "Korean, Swedish, and Chinese", but the Swedish support does not seem to be available... Running either

    print(try? VNRecognizeTextRequest().supportedRecognitionLanguages())

or

    var ocrRequest = RecognizeTextRequest(.revision3)
    print(ocrRequest.supportedRecognitionLanguages)

did not print Swedish as one of the supported languages, but Korean and Chinese are there. Tested on early iOS 18 developer betas and the latest version of the iOS 18.1 beta (22B5054e).
Replies: 1 · Boosts: 0 · Views: 652 · Oct ’24