Apple, I speak for the majority when I say that we are frustrated, not so much because we are unable to access the features, test them, and submit feedback, but because you are not communicating.
If you can, please let us know right here whether this is a server-side bug or an intentional strategy to roll out the generative features to a small, limited number of people on iOS 18.2 DB1.
Thank you!
Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.
I have been waiting almost 40 hours or more and still nothing. Apple, please do something. How long have you guys been waiting?
I requested early access for Image Playground 24 hours ago and haven't gotten it approved. Are they rolling it out slowly, or is it a problem with my phone?
I'm hitting a limit when trying to train an Image Classifier.
It happens at about 16k images (in line with the error info), and it gives this error:
IOSurface creation failed: e00002be parentID: 00000000 properties: {
IOSurfaceAllocSize = 529984;
IOSurfaceBytesPerElement = 4;
IOSurfaceBytesPerRow = 1472;
IOSurfaceElementHeight = 1;
IOSurfaceElementWidth = 1;
IOSurfaceHeight = 360;
IOSurfaceName = CoreVideo;
IOSurfaceOffset = 0;
IOSurfacePixelFormat = 1111970369;
IOSurfacePlaneComponentBitDepths = (
8,
8,
8,
8
);
IOSurfacePlaneComponentNames = (
4,
3,
2,
1
);
IOSurfacePlaneComponentRanges = (
1,
1,
1,
1
);
IOSurfacePurgeWhenNotInUse = 1;
IOSurfaceSubsampling = 1;
IOSurfaceWidth = 360;
} (likely per client IOSurface limit of 16384 reached)
I feel like I was able to use more images than this before upgrading to Sonoma, but I don't have the receipts...
Is there a way around this?
I have oodles of spare memory on my machine; it's using about 16 GB of 64 GB when it crashes...
The code to create the model is:
let parameters = MLImageClassifier.ModelParameters(
    validation: .dataSource(validationDataSource),
    maxIterations: 25,
    augmentation: [],
    algorithm: .transferLearning(
        featureExtractor: .scenePrint(revision: 2),
        classifier: .logisticRegressor
    )
)
let model = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir.url),
                                  parameters: parameters)
I have also tried the same training source in Create ML; it runs through 'Extracting features' and crashes at about 16k images processed.
Thank you
Topic:
Machine Learning & AI
SubTopic:
Core ML
I'm excited at the development possibilities presented by Apple Intelligence and have begun imagining retrieval-augmented generation use cases. Writing tools suggest that this is possible, but I have not seen any direct statements by Apple regarding use of AFMs for RAG applications. Have any references to APIs or sample code for RAG applications been published?
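For illustration, here is a rough sketch of what the retrieval and augmentation steps could look like today using the existing NLEmbedding API; the generate(prompt:) call is a hypothetical placeholder, since Apple has not published a public text-generation API for its foundation models:

import Foundation
import NaturalLanguage

// Hypothetical placeholder for whatever on-device generation API eventually ships.
func generate(prompt: String) -> String {
    return prompt
}

// Retrieve the passages closest to the question, build an augmented prompt, then generate.
func answer(question: String, documents: [String], topK: Int = 3) -> String? {
    guard let embedding = NLEmbedding.sentenceEmbedding(for: .english) else { return nil }
    let ranked = documents.sorted {
        embedding.distance(between: question, and: $0, distanceType: .cosine) <
        embedding.distance(between: question, and: $1, distanceType: .cosine)
    }
    let context = ranked.prefix(topK).joined(separator: "\n\n")
    let prompt = "Answer using only this context:\n\(context)\n\nQuestion: \(question)"
    return generate(prompt: prompt)
}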
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
What am I not understanding here?
In short, the view loads text from the JSON descriptions and should then filter out certain words, returning and displaying a list of the most-used words. Debugging shows the words being identified by the code, but they are not filtered out.
private func loadWordCounts() {
    DispatchQueue.global(qos: .background).async {
        let fileManager = FileManager.default
        guard let documentsDirectory = try? fileManager.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: false) else { return }
        let descriptions = loadDescriptions(fileManager: fileManager, documentsDirectory: documentsDirectory)
        var counts = countWords(in: descriptions)
        let tagsToRemove: Set<NLTag> = [
            .verb,
            .pronoun,
            .determiner,
            .particle,
            .preposition,
            .conjunction,
            .interjection,
            .classifier
        ]
        for (word, _) in counts {
            let tagger = NLTagger(tagSchemes: [.lexicalClass])
            tagger.string = word
            let (tag, _) = tagger.tag(at: word.startIndex, unit: .word, scheme: .lexicalClass)
            if let unwrappedTag = tag, tagsToRemove.contains(unwrappedTag) {
                counts[word] = 0
            }
        }
        DispatchQueue.main.async {
            self.wordCounts = counts
        }
    }
}
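One guess at what is happening: tagging each word in isolation gives NLTagger no sentence context, so many words come back as nil or .otherWord and never match tagsToRemove, and setting a count to 0 still leaves the key in the dictionary. A sketch of tagging words in the context of the full text and removing the entries outright (assuming descriptions is an array of strings and countWords keeps the original casing):

import NaturalLanguage

func filteredWordCounts(descriptions: [String], counts: [String: Int]) -> [String: Int] {
    let tagsToRemove: Set<NLTag> = [.verb, .pronoun, .determiner, .particle,
                                    .preposition, .conjunction, .interjection, .classifier]
    var filtered = counts
    // Tag words with the surrounding text available, not one word at a time.
    let text = descriptions.joined(separator: " ")
    let tagger = NLTagger(tagSchemes: [.lexicalClass])
    tagger.string = text
    tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                         unit: .word,
                         scheme: .lexicalClass,
                         options: [.omitPunctuation, .omitWhitespace]) { tag, range in
        if let tag = tag, tagsToRemove.contains(tag) {
            // Drop the entry entirely instead of zeroing its count.
            let word = String(text[range])
            filtered.removeValue(forKey: word)
            filtered.removeValue(forKey: word.lowercased())
        }
        return true
    }
    return filtered
}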
Hey,
I'm based in the EU (Poland). I'm an Apple Developer, and I'm not able to test Apple Intelligence features and APIs.
Is it possible to use them somehow? I understand it's not available in the EU, but what about for members of the developer program?
It has been more than three days that I have been waiting for the Playgrounds app to test these Apple Intelligence features. Not to mention that I updated the system and the Playgrounds app did not appear on my iPhone 15 Pro. Does anyone know where I can download the Playgrounds app?
Very simple question: is there a way to detect whether a user has Apple Intelligence enabled on their Mac?
I'd like to make some interface tweaks when it's available and enabled.
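As far as I know there is no single documented "Apple Intelligence is enabled" flag; individual features expose their own availability checks. A sketch using the SwiftUI supportsImagePlayground environment value (available on OS versions that include Image Playground) as a proxy for one such feature; whether that proxy is acceptable depends on which tweaks you have in mind:

import SwiftUI
import ImagePlayground

struct AdaptiveHeader: View {
    // True when the Image Playground feature is available on this device;
    // used here as a proxy for Apple Intelligence image features being enabled.
    @Environment(\.supportsImagePlayground) private var supportsImagePlayground

    var body: some View {
        if supportsImagePlayground {
            Text("Image Playground features enabled")
        } else {
            Text("Standard interface")
        }
    }
}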
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Hi All,
I am trying to build a new iOS app by following https://developer.apple.com/videos/play/wwdc2024/10163/?time=67
When I try to remove all the legacy VN-prefixed code I get errors. I would appreciate it if someone could help me get up to speed with the new Vision API.
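For comparison, a minimal sketch of what text recognition looks like with the new Swift-only Vision API from that session, assuming an image file URL; the request types drop the VN prefix and are performed with async/await instead of a VNImageRequestHandler:

import Vision

// New-style Vision: RecognizeTextRequest replaces VNRecognizeTextRequest + VNImageRequestHandler.
func recognizeText(in imageURL: URL) async throws -> [String] {
    let request = RecognizeTextRequest()
    let observations = try await request.perform(on: imageURL)
    return observations.compactMap { $0.topCandidates(1).first?.string }
}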
When I import statsmodels in a Jupyter notebook, I get the following error:
ImportError: dlopen(/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/_fblas.cpython-312-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib
Referenced from: <5ACBAA79-2387-3BEF-9F8E-6B7584B0F5AD> /opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/_fblas.cpython-312-darwin.so
Reason: tried: '/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/../../../../liblapack.3.dylib' (no such file), '/opt/anaconda3/lib/python3.12/site-packages/scipy/linalg/../../../../liblapack.3.dylib' (no such file), '/opt/anaconda3/bin/../lib/liblapack.3.dylib' (no such file), '/opt/anaconda3/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file, not in dyld cache). What should I do?
Hi, while trying to diagnose why some of my Core ML models are running slower when their configuration is set with compute units .CPU_AND_GPU compared to running with .CPU_ONLY I've been attempting to create Core ML model performance reports in Xcode to identify the operations that are not compatible with the GPU. However, when selecting an iPhone as the connected device and compute unit of 'All', 'CPU and GPU' or 'CPU and Neural Engine' Xcode displays one of the following two error messages:
“There was an error creating the performance report. The performance report has crashed on device”
"There was an error creating the performance report. Unable to compute the prediction using ML Program. It can be an invalid input data or broken/unsupported model."
The performance reports are successfully generated when selecting the connected device as iPhone with compute unit 'CPU only' or Mac with any combination of compute units.
Some of the models I have found the issue to occur with are stateful, some are not. I have tried to replicate the issue with example models from the Core ML Tools stateful model guide/video "Bring your machine learning and AI models to Apple silicon". With a model generated from the Simple Accumulator example code, the performance report is created successfully for all compute unit options, but with models from the toy attention and toy attention with kvcache examples it only succeeds with compute units set to 'CPU only' when choosing iPhone as the device.
Versions I'm currently working with:
Xcode Version 16.0
macOS Sequoia 15.0.1
Core ML Tools 8.0
iPhone 16 Pro iOS 18.0.1
Is there a way to avoid these errors? Or is there another way to identify which operations within a Core ML model are supported to run on the iPhone GPU/Neural Engine?
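One possible alternative to the Xcode report is querying the model's compute plan programmatically with the MLComputePlan API (macOS 14.4 / iOS 17.4 and later); a rough sketch, assuming I am remembering the property and method names correctly:

import CoreML

// Inspect which compute devices each operation of an ML Program supports and prefers.
// modelURL is assumed to point at a compiled model (.mlmodelc).
func reportDeviceSupport(for modelURL: URL) async throws {
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .all
    let plan = try await MLComputePlan.load(contentsOf: modelURL, configuration: configuration)

    guard case .program(let program) = plan.modelStructure,
          let mainFunction = program.functions["main"] else { return }

    for operation in mainFunction.block.operations {
        let usage = plan.deviceUsage(for: operation)
        print(operation.operatorName,
              "preferred:", usage?.preferred as Any,
              "supported:", usage?.supported as Any)
    }
}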
Issue
When triggering an App Intent using assistant schemas from Apple Intelligence (voice or text), the app opens without prompting for search criteria.
How to repeat
This can be repeated in the example provided by Apple here: https://developer.apple.com/documentation/appintents/making-your-app-s-functionality-available-to-siri
Download the sample code
Build and run on Xcode 16.1 beta 3
Target iPhone 15 Pro Max on iOS 18.1 beta 7
Trigger Apple Intelligence
Enter prompt: "Search AssistantSchemasExample"
Expected behaviour
Apple Intelligence should prompt the user for the criteria and provide this to the App so that the experience is seamless for the end-user. Otherwise Assistant Intents are nothing more than deep links to search screens.
Notes
The example uses @AssistantIntent(schema: .photos.search) intent.
And I've found the issue is also present in other search intents:
@AssistantIntent(schema: .system.search)
@AssistantIntent(schema: .browser.search)
Questions
Has anyone managed to get the prompt to appear?
Will this only function on iOS 18.2?
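For reference, the intent under test has roughly this shape (a minimal sketch along the lines of Apple's sample; the type name and perform body are placeholders). The criteria parameter is what Apple Intelligence would be expected to prompt for:

import AppIntents

@AssistantIntent(schema: .photos.search)
struct SearchAssetsIntent: AppIntent {
    // The search term Siri should collect before handing control to the app.
    @Parameter var criteria: StringSearchCriteria

    @MainActor
    func perform() async throws -> some IntentResult {
        // Placeholder: navigate to the in-app search screen using criteria.term.
        return .result()
    }
}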
After updating to iPadOS 18.2, I requested early access to Genmoji, but I have waited a long time and my request has not been accepted. Please tell me how I can do this, Apple. Don't make me call another Apple advisor. Thank you so much!
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I've been waiting for confirmation for a long time, and it finally seems to have arrived, but now the software for working with Playground and Genmoji will not download. For the past 8 hours the phone has been connected to the network; it has been on hold for more than 20 minutes at a time, and nothing has come through. The most interesting thing is that there is no indication of how much is left to download.
Hi everyone, I work with a company called Dataloop AI, testing AI features. This is the only feature I am missing that I need to test. Could you please let me know the estimated waiting time for enrollment in this feature?
Stuck on "Downloading support for Image Playground... Once downloaded, this iPhone will be able to use Image Playground." Does anyone know a solution to continue?
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I have recently been having trouble with my iOS 18.2 beta update. It has been 2 weeks since I updated to the iOS 18.2 beta and joined the Genmoji and Image Playground waitlist. I am wondering how much longer I have to wait until my request is approved.
I've been stuck for 40 days. Any ideas, guys?