I installed beta 18.2 on the day of release and requested access to Image Playground two minutes after I got in. It is now a week later and I still do not have access. Am I missing something?
iPhone 16 Pro Max
Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.
Has anyone else been waiting 7+ days to get off the waitlist? The original waitlist for regular Apple Intelligence accepted everyone within a few hours; this one is taking days. Is there any way to speed up the process? I am kind of disappointed with Apple in this sense. A huge company can't even deliver the features it promised. Shouldn't they have already been on our phones when we updated?
I'm returning the following result in one of my AppIntents:
return .result(value: "Done!", dialog: IntentDialog("Speed limit \(speedLimit)"))
With iOS 18.0.1 it nicely confirmed the result of the user's command, saying e.g. "Speed limit 60" and showing it at the top of the screen.
With iOS 18.1, it only shows/says "That's done" or "Done" at the bottom of the screen.
Am I missing something that changed in the AppIntents API in iOS 18.1?
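For context, here is a minimal sketch of an intent using that return pattern (the intent name and parameter are assumptions, not from the original post). One thing worth checking: the custom dialog is only honored when the opaque return type of perform() explicitly declares ProvidesDialog; if it is missing, the system falls back to a generic confirmation.

```swift
import AppIntents

// Hypothetical intent illustrating the value-plus-dialog pattern.
struct SetSpeedLimitIntent: AppIntent {
    static var title: LocalizedStringResource = "Set Speed Limit"

    @Parameter(title: "Speed Limit")
    var speedLimit: Int

    // Declaring ProvidesDialog (and ReturnsValue) in the return type
    // is what allows Siri to speak/show the custom dialog.
    func perform() async throws -> some IntentResult & ReturnsValue<String> & ProvidesDialog {
        .result(value: "Done!", dialog: IntentDialog("Speed limit \(speedLimit)"))
    }
}
```

If the return type already matches this shape, the behavior change between 18.0.1 and 18.1 may be worth filing as feedback.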
I just watched the October 30 MacBook Pro Announcement where they talked about on-device local LLMs for the M4s.
What developer training resources are available where we can learn how to use custom LLMs and build our Swift apps to use both Apple Intelligence and other LLMs on device?
Is the guidance to follow the MLX GitHub repos, or were those experimental and there is now an approved workflow and tooling?
https://www.youtube.com/watch?v=G0cmfY7qdmY
I am working on adding indexing to my App Entities via IndexedEntity. I already index my content separately via Spotlight.
Watching 'What's New in App Intents', this is covered well, but I have a question.
Do I need to implement both CSSearchableItem's associateAppEntity AND a custom implementation of attributeSet in my IndexedEntity conformance? It seems duplicative, but I can't tell from the video whether you're supposed to do both or just one or the other.
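For reference, a sketch of the IndexedEntity side of this (the entity, its properties, and the query are made up for illustration): the protocol's attributeSet property supplies the custom Spotlight attributes for the entity itself, playing the role that an attribute set attached via associateAppEntity would otherwise fill for an existing CSSearchableItem.

```swift
import AppIntents
import CoreSpotlight

// Hypothetical entity with custom Spotlight attributes.
struct BookEntity: IndexedEntity {
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Book"
    static var defaultQuery = BookQuery()

    var id: UUID
    var title: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(title)")
    }

    // Custom attributes surfaced when the entity is indexed.
    var attributeSet: CSSearchableItemAttributeSet {
        let attributes = CSSearchableItemAttributeSet(contentType: .content)
        attributes.title = title
        return attributes
    }
}

// Minimal query so the example is self-contained.
struct BookQuery: EntityQuery {
    func entities(for identifiers: [UUID]) async throws -> [BookEntity] { [] }
}
```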
Does anyone know how to resolve this, or how long it takes to get access to Image Playground?
Hi guys, does anyone know how long it takes to be granted access to the Image Playground app? I already have beta 18.2 and have been waiting for 7 days, and I would like to use the app.
Hi, I have not been able to use the new memory creation function since the 18.1 beta update.
Below is the error/response shown.
After updating to iPadOS 18.2, I requested early access to Genmoji, but I have waited a long time and my request has not been accepted. Please tell me how I can do this, Apple. Don't make me call another Apple advisor. Thank you so much!
Hi Apple Developer Community,
I’m exploring ways to fine-tune the SNSoundClassifier to allow users of my iOS app to personalize the model by adding custom sounds or adjusting predictions. While Apple’s WWDC session on sound classification explains how to train from scratch, I’m specifically interested in using SNSoundClassifier as the base model and building/fine-tuning on top of it.
Here are a few questions I have:
1. Fine-Tuning on SNSoundClassifier:
Is there a way to fine-tune this model programmatically through APIs? The manual approach using macOS, as shown in the documentation, is clear, but how can it be done dynamically, within the app for users or in a cloud backend (AWS/iCloud)?
Are there APIs or classes that support such on-device/cloud-based fine-tuning or incremental learning? If not directly, can the classifier’s embeddings be used to train a lightweight custom layer?
Training is likely computationally intensive and would drain too much battery, so doing it in the cloud may be the right approach, but I need the right APIs to get this done. Sample code would help.
2. Recommended Approach for In-App Model Customization:
If SNSoundClassifier doesn’t support fine-tuning, would transfer learning on models like MobileNetV2, YAMNet, OpenL3, or FastViT be more suitable?
Given these models (SNSoundClassifier, MobileNetV2, YAMNet, OpenL3, FastViT), which one would be best for accuracy and performance/efficiency on iOS? I aim to maintain real-time performance without sacrificing battery life. It is also important to see architecture retention and accuracy after conversion to a Core ML model.
3. Cost-Effective Backend Setup for Training:
Mac EC2 instances on AWS have a 24-hour minimum billing period, which can become expensive for limited user requests. Are there better alternatives for deploying and training models on demand when a user uploads files (training data)?
4. TensorFlow vs PyTorch:
Between TensorFlow and PyTorch, which framework would you recommend for iOS Core ML integration? TensorFlow Lite offers mobile-optimized models, but I’m also curious about PyTorch’s performance when converted to Core ML.
5. Metrics:
The metrics I have in mind while picking a model are: publisher, accuracy, fine-tuning capability, real-time/live use, suitability for iPhone 16, architectural retention after Core ML conversion, reasons for unsuitability, and recommended use case.
Any insights or recommended approaches would be greatly appreciated.
Thanks in advance!
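One point of context for question 1 above: the built-in classifier is consumed through the SoundAnalysis framework for inference only, and a custom Core ML sound model can be substituted via SNClassifySoundRequest(mlModel:). A minimal sketch of the built-in path (the file URL is a placeholder):

```swift
import SoundAnalysis

// Receives classification results as the analyzer produces them.
final class ResultsObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
}

// Classify an audio file with the built-in sound classifier.
// The built-in model is inference-only; to customize, pass your own
// Core ML model via SNClassifySoundRequest(mlModel:) instead.
func classify(fileAt url: URL) throws {
    let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
    let analyzer = try SNAudioFileAnalyzer(url: url)
    let observer = ResultsObserver()
    try analyzer.add(request, withObserver: observer)
    analyzer.analyze()
}
```

This does not answer the fine-tuning question itself, but it shows the seam where a transfer-learned custom model would plug in.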
I waited a long time for confirmation, and it finally seems to have arrived, but now the software for Image Playground and Genmoji will not download. For the past 8 hours the phone has been connected to the network, the download has been on hold for more than 20 minutes at a time, and nothing happens. Most frustrating of all, there is no indication of how much is left to download.
Hi everyone, I work with a company called Dataloop AI, testing AI features. This is the only feature I still need to test. Could you please let me know the estimated waiting time for this feature to be enabled?
Stuck on "Downloading support for Image Playground... Once downloaded, this iPhone will be able to use Image Playground." Does anyone know a solution so it will continue?
I have recently been having trouble with my iOS 18.2 beta update. It has been 2 weeks since I updated to the iOS 18.2 beta and joined the Genmoji and Image Playground waitlist. I am wondering how much longer I have to wait until my request is approved.
I've been stuck for 40 days. Any ideas, guys?
Hi, I'm trying to diagnose why some of my Core ML models run slower when their configuration is set to compute units .CPU_AND_GPU compared to .CPU_ONLY. I've been attempting to create Core ML model performance reports in Xcode to identify the operations that are not compatible with the GPU. However, when selecting an iPhone as the connected device and a compute unit of 'All', 'CPU and GPU', or 'CPU and Neural Engine', Xcode displays one of the following two error messages:
“There was an error creating the performance report. The performance report has crashed on device”
"There was an error creating the performance report. Unable to compute the prediction using ML Program. It can be an invalid input data or broken/unsupported model."
The performance reports are successfully generated when selecting the connected device as iPhone with compute unit 'CPU only' or Mac with any combination of compute units.
Some of the models I have found the issue with are stateful, some are not. I have tried to replicate the issue with example models from the Core ML Tools stateful-model guide and the video "Bring your machine learning and AI models to Apple silicon". Running the performance report on a model generated from the Simple Accumulator example code succeeds with all compute-unit options, but with models from the toy attention and toy attention with kv-cache examples it only succeeds with compute units set to 'CPU only' when choosing iPhone as the device.
Versions I'm currently working with:
Xcode Version 16.0
macOS Sequoia 15.0.1
Core ML Tools 8.0
iPhone 16 Pro iOS 18.0.1
Is there a way to avoid these errors? Or is there another way to identify which operations within a Core ML model are supported on the iPhone GPU/Neural Engine?
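One workaround while the Xcode report is failing is to load the model programmatically under each compute-unit setting and compare prediction latency directly on device. A minimal sketch (the model URL is a placeholder for your compiled .mlmodelc):

```swift
import CoreML

// Load a compiled Core ML model restricted to a given set of
// compute units, so latency can be compared across settings.
func loadModel(at url: URL, units: MLComputeUnits) throws -> MLModel {
    let config = MLModelConfiguration()
    // Options: .cpuOnly, .cpuAndGPU, .cpuAndNeuralEngine, .all
    config.computeUnits = units
    return try MLModel(contentsOf: url, configuration: config)
}
```

On recent OS versions the MLComputePlan API can also report, per operation, which device each op is dispatched to, which may answer the "which operations are supported" question without the performance report.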
When I import statsmodels in a Jupyter notebook, I get the following error:
ImportError: dlopen(/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/_fblas.cpython-312-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib
Referenced from: <5ACBAA79-2387-3BEF-9F8E-6B7584B0F5AD> /opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/_fblas.cpython-312-darwin.so
Reason: tried: '/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/../../../../liblapack.3.dylib' (no such file), '/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/../../../../liblapack.3.dylib' (no such file), '/opt/anaconda3/bin/../lib/liblapack.3.dylib' (no such file), '/opt/anaconda3/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file, not in dyld cache).
What should I do?
I updated a few days ago and requested Genmoji, Image Wand, and Image Playground, yet they are still pending. I really want them to work.
Stuck on "early access requested". I feel like, since we're developers testing these features, we shouldn't have to go through this process...