Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic
Almost 48 Hours, Still No Access to Playground.
Just wanted to reach out to see if this is the norm. I see several posts saying people are still waiting for the early-access Playground app, so what's going on? It's been almost 48 hours and I've received nothing. If this is the norm, then so be it… but even when I had to wait for Apple Intelligence early access, that took only a few hours. Hopefully this will be resolved quickly. I mean, what's the point of being developer beta testers if we can't test the beta?
Replies: 3 · Boosts: 2 · Views: 502 · Created: Oct ’24
From Apple Dev.
Early access to Image Playground, Genmoji, and Image Wand
Apple · Oct 25, 2024 at 5:58 PM

With the iOS & iPadOS 18.2 and macOS Sequoia 15.2 betas, you can join the waitlist for early access to Image Playground, Genmoji, and Image Wand in order to test and help improve these features. You can request access within any one of these experiences:
- the Image Playground app
- Image Playground integration in Messages or Freeform
- Genmoji integration in the emoji keyboard
- Image Wand within the Apple Pencil tool palette in Notes

We will roll out access to Image Playground, Genmoji, and Image Wand over the coming weeks. When the features are ready for you to test, you will be notified. After you receive access, you can tap the thumbs up or thumbs down that appear with each result in Image Playground, Genmoji, and Image Wand in order to provide feedback.
Replies: 0 · Boosts: 0 · Views: 386 · Created: Oct ’24
NLTagger not filtering words such as "and, to, a, in"
What am I not understanding here? In short, the view loads text from the JSON descriptions and should then filter out common words, returning and displaying a list of the most-used words. Debugging shows the words being identified by the code, but they are not filtered out.

private func loadWordCounts() {
    DispatchQueue.global(qos: .background).async {
        let fileManager = FileManager.default
        guard let documentsDirectory = try? fileManager.url(for: .documentDirectory,
                                                            in: .userDomainMask,
                                                            appropriateFor: nil,
                                                            create: false) else { return }
        let descriptions = loadDescriptions(fileManager: fileManager, documentsDirectory: documentsDirectory)
        var counts = countWords(in: descriptions)

        // Lexical classes that should be removed from the counts.
        let tagsToRemove: Set<NLTag> = [
            .verb, .pronoun, .determiner, .particle,
            .preposition, .conjunction, .interjection, .classifier
        ]

        // Tag each counted word on its own and zero out the filtered classes.
        for (word, _) in counts {
            let tagger = NLTagger(tagSchemes: [.lexicalClass])
            tagger.string = word
            let (tag, _) = tagger.tag(at: word.startIndex, unit: .word, scheme: .lexicalClass)
            if let unwrappedTag = tag, tagsToRemove.contains(unwrappedTag) {
                counts[word] = 0
            }
        }

        DispatchQueue.main.async {
            self.wordCounts = counts
        }
    }
}
Replies: 0 · Boosts: 0 · Views: 549 · Created: Oct ’24
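A note on the post above: counts[word] = 0 leaves the key in the dictionary (counts[word] = nil would remove it), so zeroed words can still surface in the displayed list, and tagging each word in isolation gives NLTagger no sentence context to classify it with. Below is a minimal sketch of an alternative that enumerates tags over the full text instead; the function name and the lowercasing step are illustrative choices, not part of the original post.

import NaturalLanguage

/// Counts words in `text`, skipping function words (conjunctions,
/// prepositions, determiners, and so on) by tagging them in context.
func contentWordCounts(in text: String) -> [String: Int] {
    let tagsToSkip: Set<NLTag> = [
        .verb, .pronoun, .determiner, .particle,
        .preposition, .conjunction, .interjection, .classifier
    ]

    let tagger = NLTagger(tagSchemes: [.lexicalClass])
    tagger.string = text

    var counts: [String: Int] = [:]
    tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                         unit: .word,
                         scheme: .lexicalClass,
                         options: [.omitPunctuation, .omitWhitespace]) { tag, range in
        // Each token is classified with its surrounding context, so short
        // words like "and", "to", "a", "in" are more likely to be tagged
        // as conjunctions/prepositions than when tagged in isolation.
        if let tag = tag, !tagsToSkip.contains(tag) {
            counts[text[range].lowercased(), default: 0] += 1
        }
        return true // continue enumerating
    }
    return counts
}

With this approach the filtered words never enter the dictionary at all, so there is nothing to zero out or delete afterwards.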
Issue with iOS 18.2 Beta - "Ask" Button Not Working in Visual Intelligence
Hey everyone, I recently installed the iOS 18.2 developer beta and connected my ChatGPT Premium account to use it within the Visual Intelligence feature. After taking a photo, I tried pressing the "Ask" button, but it’s completely unresponsive. I've tried troubleshooting by doing a hard reset and a regular restart, but no luck so far. Has anyone else encountered this issue? If so, do you know of any workaround or fix that might help? Thanks in advance!
Replies: 2 · Boosts: 0 · Views: 1.9k · Created: Oct ’24
Issues with Statsmodels
When I import statsmodels in a Jupyter notebook, I get the following error:

ImportError: dlopen(/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/_fblas.cpython-312-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib
  Referenced from: <5ACBAA79-2387-3BEF-9F8E-6B7584B0F5AD> /opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/_fblas.cpython-312-darwin.so
  Reason: tried: '/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/../../../../liblapack.3.dylib' (no such file), '/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/../../../../liblapack.3.dylib' (no such file), '/opt/anaconda3/bin/../lib/liblapack.3.dylib' (no such file), '/opt/anaconda3/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file, not in dyld cache)

What should I do?
Replies: 1 · Boosts: 0 · Views: 684 · Created: Oct ’24
Core ML model performance report errors when including GPU/Neural Engine in compute unit selection
Hi, while trying to diagnose why some of my Core ML models run slower when their configuration is set with compute units .CPU_AND_GPU compared to running with .CPU_ONLY, I've been attempting to create Core ML model performance reports in Xcode to identify the operations that are not compatible with the GPU. However, when selecting an iPhone as the connected device and a compute unit of 'All', 'CPU and GPU', or 'CPU and Neural Engine', Xcode displays one of the following two error messages:

"There was an error creating the performance report. The performance report has crashed on device."

"There was an error creating the performance report. Unable to compute the prediction using ML Program. It can be an invalid input data or broken/unsupported model."

The performance reports are generated successfully when selecting the connected device as iPhone with compute unit 'CPU only', or as Mac with any combination of compute units. Some of the models I have found the issue to occur with are stateful, some are not.

I have tried to replicate the issue with some example models from the Core ML Tools stateful-model guide and the video "Bring your machine learning and AI models to Apple silicon". With a model generated from the Simple Accumulator example code, the performance report is created successfully for all compute unit options, but with models from the toy attention and toy attention with kv-cache examples it only succeeds with compute units set to 'CPU only' when choosing iPhone as the device.

Versions I'm currently working with:
- Xcode 16.0
- macOS Sequoia 15.0.1
- Core ML Tools 8.0
- iPhone 16 Pro, iOS 18.0.1

Is there a way to avoid these errors? Or is there another way to identify which operations within a Core ML model are supported to run on the iPhone GPU/Neural Engine?
Replies: 0 · Boosts: 0 · Views: 719 · Created: Oct ’24
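A note on the closing question above: besides Xcode's performance report UI, Core ML exposes MLComputePlan (iOS 17.4+ / macOS 14.4+), which can be queried programmatically for the compute devices that support each operation of an ML Program. A rough sketch of that approach follows; the model URL and the "main" function name are illustrative assumptions, not details from the original post.

import CoreML

// Sketch: for each operation of a compiled .mlmodelc, list which compute
// devices can run it. Assumes an ML Program model with a "main" function.
func reportDeviceSupport(modelURL: URL) async throws {
    let config = MLModelConfiguration()
    config.computeUnits = .all

    let plan = try await MLComputePlan.load(contentsOf: modelURL, configuration: config)

    guard case .program(let program) = plan.modelStructure,
          let main = program.functions["main"] else {
        print("Not an ML Program model")
        return
    }

    for operation in main.block.operations {
        guard let usage = plan.deviceUsage(for: operation) else { continue }
        let supported = usage.supported.map(label).joined(separator: ", ")
        print("\(operation.operatorName): preferred \(label(usage.preferred)), supported [\(supported)]")
    }
}

// Human-readable label for a compute device.
private func label(_ device: MLComputeDevice) -> String {
    switch device {
    case .cpu: return "CPU"
    case .gpu: return "GPU"
    case .neuralEngine: return "Neural Engine"
    @unknown default: return "unknown"
    }
}

Operations whose supported list lacks the GPU or Neural Engine are the ones forcing a CPU fallback, which is essentially the information the Xcode performance report would otherwise show.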