Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

All subtopics
Posts under the Machine Learning & AI topic

NLTagger not filtering words such as "and", "to", "a", "in"
What am I not understanding here? In short, the view loads text from the JSON descriptions and should then filter out the words, returning and displaying a list of the most-used words. Debugging shows the words being identified by the code, but they are not filtered out.

private func loadWordCounts() {
    DispatchQueue.global(qos: .background).async {
        let fileManager = FileManager.default
        guard let documentsDirectory = try? fileManager.url(for: .documentDirectory,
                                                            in: .userDomainMask,
                                                            appropriateFor: nil,
                                                            create: false) else { return }
        let descriptions = loadDescriptions(fileManager: fileManager,
                                            documentsDirectory: documentsDirectory)
        var counts = countWords(in: descriptions)
        let tagsToRemove: Set<NLTag> = [
            .verb, .pronoun, .determiner, .particle,
            .preposition, .conjunction, .interjection, .classifier
        ]
        for (word, _) in counts {
            let tagger = NLTagger(tagSchemes: [.lexicalClass])
            tagger.string = word
            let (tag, _) = tagger.tag(at: word.startIndex, unit: .word, scheme: .lexicalClass)
            if let unwrappedTag = tag, tagsToRemove.contains(unwrappedTag) {
                counts[word] = 0
            }
        }
        DispatchQueue.main.async {
            self.wordCounts = counts
        }
    }
}
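A hedged guess at the cause, not a confirmed answer: setting counts[word] = 0 leaves the word in the dictionary, so any view that displays the keys will still show it, and tagging an isolated capitalized token such as "And" may not classify the same way the lowercase form does. A minimal sketch of one possible fix, assuming the same counts: [String: Int] dictionary as above:

import NaturalLanguage

// Sketch of a fix: remove filtered words from the dictionary instead of
// zeroing their counts, and tag a lowercased copy so capitalized forms
// like "And" classify the same as "and".
// Assumes a `counts: [String: Int]` dictionary as in the original code.
func filterFunctionWords(from counts: [String: Int]) -> [String: Int] {
    let tagsToRemove: Set<NLTag> = [
        .verb, .pronoun, .determiner, .particle,
        .preposition, .conjunction, .interjection, .classifier
    ]
    var filtered = counts
    let tagger = NLTagger(tagSchemes: [.lexicalClass])
    for word in counts.keys {
        let lowered = word.lowercased()
        tagger.string = lowered
        let (tag, _) = tagger.tag(at: lowered.startIndex,
                                  unit: .word,
                                  scheme: .lexicalClass)
        if let tag = tag, tagsToRemove.contains(tag) {
            filtered.removeValue(forKey: word)   // drop the entry entirely
        }
    }
    return filtered
}

Removing the entry outright (rather than setting it to 0) means the displayed list can simply sort the remaining entries by count.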
0 replies · 0 boosts · 548 views · Oct ’24
Issue with iOS 18.2 Beta - "Ask" Button Not Working in Visual Intelligence
Hey everyone, I recently installed the iOS 18.2 developer beta and connected my ChatGPT Premium account to use it within the Visual Intelligence feature. After taking a photo, I tried pressing the "Ask" button, but it’s completely unresponsive. I've tried troubleshooting by doing a hard reset and a regular restart, but no luck so far. Has anyone else encountered this issue? If so, do you know of any workaround or fix that might help? Thanks in advance!
2 replies · 0 boosts · 1.9k views · Oct ’24
Issues with Statsmodels
When I import statsmodels in a Jupyter notebook, I get the following error:

ImportError: dlopen(/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/_fblas.cpython-312-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib
Referenced from: <5ACBAA79-2387-3BEF-9F8E-6B7584B0F5AD> /opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/_fblas.cpython-312-darwin.so
Reason: tried: '/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/../../../../liblapack.3.dylib' (no such file), '/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/../../../../liblapack.3.dylib' (no such file), '/opt/anaconda3/bin/../lib/liblapack.3.dylib' (no such file), '/opt/anaconda3/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file, not in dyld cache)

What should I do?
1 reply · 0 boosts · 683 views · Oct ’24
Core ML model performance report errors when including GPU/Neural Engine in compute unit selection
Hi, while trying to diagnose why some of my Core ML models run slower when configured with compute units .CPU_AND_GPU than with .CPU_ONLY, I've been attempting to create Core ML model performance reports in Xcode to identify the operations that are not compatible with the GPU. However, when selecting an iPhone as the connected device with a compute unit of 'All', 'CPU and GPU', or 'CPU and Neural Engine', Xcode displays one of the following two error messages:

"There was an error creating the performance report. The performance report has crashed on device."

"There was an error creating the performance report. Unable to compute the prediction using ML Program. It can be an invalid input data or broken/unsupported model."

The performance reports are generated successfully when the connected device is an iPhone with compute unit 'CPU only', or a Mac with any combination of compute units. Some of the models I have seen the issue with are stateful, some are not. I have tried to replicate the issue with example models from the Core ML Tools stateful-model guide and the video "Bring your machine learning and AI models to Apple silicon". With a model generated from the Simple Accumulator example code, the performance report is created successfully for every compute-unit option, but with models from the toy attention and toy attention with kv-cache examples it only succeeds with compute unit 'CPU only' when choosing iPhone as the device.

Versions I'm currently working with:
Xcode 16.0
macOS Sequoia 15.0.1
Core ML Tools 8.0
iPhone 16 Pro on iOS 18.0.1

Is there a way to avoid these errors? Or is there another way to identify which operations within a Core ML model are supported on the iPhone GPU/Neural Engine?
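Not a fix for the report errors themselves, but a possible workaround while they persist: per-compute-unit latency can be measured directly in code by loading the same compiled model under each MLComputeUnits option. A minimal sketch, where modelURL (a compiled .mlmodelc) and inputs (a feature provider matching the model's inputs) are placeholders you would supply:

import CoreML
import Foundation

// Rough on-device timing per compute-unit option, as a stand-in for
// the Xcode performance report. One warm-up prediction is made first
// so compilation/load costs don't skew the measured run.
func timeModel(at modelURL: URL, inputs: MLFeatureProvider) throws {
    let options: [(String, MLComputeUnits)] = [
        ("CPU only", .cpuOnly),
        ("CPU + GPU", .cpuAndGPU),
        ("CPU + ANE", .cpuAndNeuralEngine),
        ("All", .all)
    ]
    for (label, units) in options {
        let config = MLModelConfiguration()
        config.computeUnits = units
        let model = try MLModel(contentsOf: modelURL, configuration: config)
        _ = try model.prediction(from: inputs)          // warm-up
        let start = CFAbsoluteTimeGetCurrent()
        _ = try model.prediction(from: inputs)
        print("\(label): \((CFAbsoluteTimeGetCurrent() - start) * 1000) ms")
    }
}

This won't name the unsupported operations the way a performance report would, but it can at least confirm which configuration regresses and by how much.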
0 replies · 0 boosts · 713 views · Oct ’24
Apple Intelligence not downloading on iOS 18.2
I waited a long time for confirmation, and it finally seems to have arrived, but now the software for working with Playground and Genmoji will not download. For the past 8 hours the phone has been connected to the network; the download has shown "on hold" and "more than 20 minutes remaining", and far more time than that has passed. The strangest part is that there is no indication of how much is left to download.
2 replies · 0 boosts · 542 views · Oct ’24
How to Fine-Tune the SNSoundClassifier for Custom Sound Classification in iOS?
Hi Apple Developer Community,

I'm exploring ways to fine-tune SNSoundClassifier so that users of my iOS app can personalize the model by adding custom sounds or adjusting predictions. While Apple's WWDC session on sound classification explains how to train from scratch, I'm specifically interested in using SNSoundClassifier as the base model and building/fine-tuning on top of it. Here are a few questions I have:

1. Fine-tuning SNSoundClassifier: Is there a way to fine-tune this model programmatically through APIs? The manual approach using macOS, as shown in the documentation, is clear, but how can it be done dynamically, within the app for users or in a cloud backend (AWS/iCloud)? Are there APIs or classes that support such on-device or cloud-based fine-tuning or incremental learning? If not directly, can the classifier's embeddings be used to train a lightweight custom layer? Training is likely computationally intensive and drains too much battery, so doing it in the cloud may be the right way, but I need the right APIs to get this done. Sample code would help.

2. Recommended approach for in-app model customization: If SNSoundClassifier doesn't support fine-tuning, would transfer learning on models like MobileNetV2, YAMNet, OpenL3, or FastViT be more suitable? Of these models (SNSoundClassifier, MobileNetV2, YAMNet, OpenL3, FastViT), which would be best for accuracy and performance/efficiency on iOS? I aim to maintain real-time performance without sacrificing battery life. It is also important to see architecture retention and accuracy after conversion to a Core ML model.

3. Cost-effective backend setup for training: Mac EC2 instances on AWS have a 24-hour minimum billing period, which can become expensive for limited user requests. Are there better alternatives for deploying and training models on demand when a user uploads files (training data)?

4. TensorFlow vs PyTorch: Between TensorFlow and PyTorch, which framework would you recommend for iOS Core ML integration? TensorFlow Lite offers mobile-optimized models, but I'm also curious about PyTorch's performance when converted to Core ML.

5. Metrics: The criteria I have in mind while picking a model are: publisher, accuracy, fine-tuning capability, real-time/live use, suitability for iPhone 16, architectural retention after Core ML conversion, reasons for unsuitability, and recommended use case.

Any insights or recommended approaches would be greatly appreciated. Thanks in advance!
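For context on question 1: as far as I can tell, no public API exposes fine-tuning of SNSoundClassifier itself; SoundAnalysis consumes it as a fixed built-in classifier, so personalization usually means training a custom Create ML sound classifier or a small head on top of separately computed embeddings. A minimal sketch of running the built-in classifier on an audio file, where audioURL is a placeholder:

import SoundAnalysis

// Runs Apple's built-in sound classifier (version 1) over an audio file
// and prints the top label per analysis window. This is the fixed model
// the post asks about; its weights are not exposed for fine-tuning.
final class ResultsObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
}

func classify(audioURL: URL) throws {
    let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
    let analyzer = try SNAudioFileAnalyzer(url: audioURL)
    let observer = ResultsObserver()
    try analyzer.add(request, withObserver: observer)
    analyzer.analyze()   // synchronous; observer receives results as it runs
}

If per-user customization is required, Create ML's MLSoundClassifier is the documented route for training a custom model, and the resulting .mlmodel can be passed to SNClassifySoundRequest in place of the built-in classifier.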
6 replies · 1 boost · 1.3k views · Oct ’24