I installed beta 18.2 on the day of release and requested access to Playground that same day, within two minutes of getting into the beta. It is now a week later
and I still do not have access. Am I missing something?
iPhone 16 Pro Max
What are the major differences in the review process for AI-based apps versus normal apps on the App Store?
I've been eagerly awaiting access to the Playground app since its launch day. Is anyone else still stuck on the waitlist?
I requested early access more than 10 days ago, so why haven't I gotten access yet? Is there a problem with the system? Why does granting access to Image Playground take so long?
I have an existing photo generation app that I was looking to switch to this new API to create specific images. How are people finding it to develop with, in terms of ease of implementation?
Am I the only one still on the waitlist? I enrolled 2 days after the release, but I didn't think it would take this long, and it seems like everyone else is getting in but me.
On the October 28 release day of Apple Intelligence, I opted in. My iPhone and iPad immediately went to "waitlist" and within 2 to 3 hours were ready to initialize Apple Intelligence.
My 14" MacBook Pro with an M3 Pro processor and 18 GB of RAM has been stuck on "preparing" since release day (6 days now).
I've tried numerous workarounds that I found on forums as well as talking to Apple support, who basically had me repeat the workarounds that I found on forums.
I've tried changing the region to one that does not have Apple Intelligence and then back to the US, changing the Siri language to an unsupported one and back to a supported one, disabling background/startup apps, and disabling and re-enabling Siri. I've also restarted a bunch of times and left the Mac alone for hours at a time.
I've noticed that my selected Siri voice seems to not download.
Finally, after several chats and calls with Apple support, I was told that it's Beta software, they can't help me, and I should try the developer forums.... so here I am. Any advice?
I've found that I can't generate any faces of people in Playground (and in Genmoji, to a lesser extent) that aren't smiling with the biggest Tiger Woods/Diddy teeth.
It's annoying. Even when you expressly ask for frowns or angry faces, you get these big goofy smiles.
Any help would be much appreciated.
I recently updated from 18.1 to the 18.2 beta on my iPhone 15 Pro Max and lost my access to Apple Intelligence. I was excited to use Image Playground in the 18.2 update, but that app requires Apple Intelligence, and my phone went back to the older Siri version. What are the solutions now? I do not have any access to Apple Intelligence even though I have iOS 18.2.
Can access to SoundAnalysis (the sound classifier built into the next version of macOS, iOS, and watchOS) be provided to my app running in the background on iPhone or Apple Watch?
I want to monitor local sounds from Apple Watch and iPhone and take remote action on out-of-band data (i.e., send an alert to a caregiver if the coughing rate is too high, or if someone has been knocking on the door for more than a minute, etc.).
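For context, the foreground analysis pipeline I'm starting from looks roughly like this (a minimal sketch; the "coughing" label and the confidence threshold are assumptions, and running this while backgrounded is exactly the part I'm unsure about):
import AVFoundation
import SoundAnalysis

// Receives classification results from the built-in sound classifier.
final class SoundEventObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        // "coughing" is an assumed label; check the request's knownClassifications
        // for the exact identifiers the built-in classifier supports.
        if top.identifier == "coughing" && top.confidence > 0.7 {
            print("Cough detected, confidence \(top.confidence)")
            // Hypothetical hook: notify the caregiver here.
        }
    }
}

final class SoundMonitor {
    private let engine = AVAudioEngine()
    private let observer = SoundEventObserver()
    private var analyzer: SNAudioStreamAnalyzer?

    func start() throws {
        let format = engine.inputNode.outputFormat(forBus: 0)
        let analyzer = SNAudioStreamAnalyzer(format: format)
        self.analyzer = analyzer

        // Built-in classifier that ships with the OS (no custom model needed).
        let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
        try analyzer.add(request, withObserver: observer)

        // Feed microphone audio into the analyzer.
        engine.inputNode.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, time in
            analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
        }
        try engine.start()
    }
}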
I downloaded iOS 18.2 the day it came out and requested early access for Image Playground, but it's been more than a week and the early access request has still not been accepted. I've seen some people get accepted within a day. This problem is starting to get annoying.
I downloaded the iOS 18.2 dev beta on my iPhone 15 Pro to try out some of the features. iOS 18.1's features took about 3 hours to become available, but this one, IT'S BEEN 8 DAYS. 8 DAYS, APPLE. STILL. NO. NEW. FEATURES!
Hello guys,
Does anybody know how long this request needs to get approved?
I have been waiting since beta 1 of 18.2 came out.
Has anyone else been waiting 7+ days to get off the waitlist? The original waitlist for regular Apple Intelligence accepted everyone within a few hours, but this one is taking days. Is there any way to speed up the process? I am kind of disappointed with Apple in this sense. A huge company can't even deliver the features it promised; when we updated, shouldn't it have already been on our phones?
Hello All,
I'm developing a machine learning model for image classification, which requires managing an exceptionally large dataset comprising over 18,000 classes. I've encountered several hurdles while using Create ML, and I would appreciate any insights or advice from those who have faced similar challenges.
Current Issues:
Create ML Failures with Large Datasets:
When using Create ML, the process often fails with errors such as "Failed to create CVPixelBufferPool." This issue appears when handling particularly large volumes of data.
Custom Implementation Struggles:
To bypass some of the limitations of Create ML, I've developed a custom solution leveraging the MLImageClassifier within the CreateML framework in my own SwiftUI macOS app.
Initially I hit similar errors to those in Create ML, but I discovered I could get past the "extracting features" stage without crashing by employing a workaround: using a timer to cancel and restart the job every 30 seconds. This is the only way I've been able to finish the extraction phase with large datasets, but it produces many console errors if I let it run too long.
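Roughly, the workaround looks like this (a simplified sketch; restartTimer is a separate property, and the subscribers from the go() function at the end of this post have to be re-attached after each restart):
// Every 30 seconds, cancel the current job and resume a new one from the same
// session; the session's checkpoints let it pick up where it left off.
restartTimer = Timer.scheduledTimer(withTimeInterval: 30, repeats: true) { [weak self] _ in
    guard let self, let session = self.trainingSession else { return }
    self.job.cancel()
    self.cancellables.removeAll()
    do {
        self.job = try MLImageClassifier.resume(session)
        // Re-attach the phase/checkpoints/result subscribers here, as in go().
    } catch {
        print("Failed to resume after cancel: \(error.localizedDescription)")
    }
}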
Lack of Progress Reporting:
Using MLJob<MLImageClassifier>, I've noticed that progress reporting stalls after the feature extraction phase. Although system resources indicate activity, there is no programmatic feedback on what is occurring.
Things I've Tried:
Data Validation: Ensured that all images in the dataset are valid and non-corrupted, which helps prevent unnecessary issues from faulty data.
Custom Implementation with CreateML Framework: Developed a custom solution using the MLImageClassifier within the CreateML framework to gain more control over the training process.
Timer-Based Workaround: Employed a workaround using a timer to cancel and restart the job every 30 seconds to move past the "extracting features" phase, allowing progress even with larger datasets.
Monitoring System Resources: Observed ongoing system resource usage when process feedback stalled, confirming background processing activity despite the lack of progress reporting.
Subset Testing: Successfully created and tested a model on a subset of the data, which validated the approach worked for smaller datasets and could produce a functioning model.
Router Model Concept: Considered training multiple models for different subsets of data and implementing a "router" model to decide which specialized model to utilize based on input characteristics.
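Roughly, the router would work like this (a rough sketch; the model names and the grouping into subsets are hypothetical):
import CreateML
import Foundation

// A coarse "router" classifier picks a group, then a specialized classifier
// trained on that group's subset produces the final label.
func classify(_ imageURL: URL,
              router: MLImageClassifier,
              specialists: [String: MLImageClassifier]) throws -> String {
    let group = try router.prediction(from: imageURL)      // e.g. "birds", "plants", ...
    guard let specialist = specialists[group] else { return group }
    return try specialist.prediction(from: imageURL)       // fine-grained label within the group
}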
What I Need Help With:
Handling Large Datasets:
I'm seeking strategies or best practices for effectively utilizing Create ML with large datasets.
Any guidance on memory management or alternative methodologies would be immensely helpful.
Improving Progress Reporting:
I'm looking for ways to obtain more consistent and programmatic progress updates during the training and testing phases.
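One idea I've been wondering about is observing the job's Progress object directly via KVO rather than only inside the checkpoint callback; a sketch of what I mean (Progress is KVO-compliant for fractionCompleted):
// Sketch: observe fractionCompleted on the job's Progress, independent of checkpoints.
job.progress.publisher(for: \.fractionCompleted)
    .receive(on: RunLoop.main)
    .sink { fraction in
        self.progress = fraction
    }
    .store(in: &cancellables)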
I'm working on an M1 Pro Mac (Apple Silicon) with 32 GB of RAM and am fully integrated within the Apple ecosystem. I am very grateful for any advice or experiences you could share to help overcome these challenges.
Thank you!
I've pasted the relevant code below:
func go() {
    // Create (or restore) the training session and start/resume the job.
    if self.trainingSession == nil {
        self.trainingSession = createTrainingSession()
    }
    if self.startTime == nil {
        self.startTime = Date()
    }

    job = try! MLImageClassifier.resume(self.trainingSession)

    // Publish the current training phase to the UI.
    job.phase
        .receive(on: RunLoop.main)
        .sink { phase in
            self.phase = phase
        }
        .store(in: &cancellables)

    // Update state and progress whenever a checkpoint is written.
    job.checkpoints
        .receive(on: RunLoop.main)
        .sink { checkpoint in
            self.state = "\(checkpoint)\n\(self.job.progress)"
            self.progress = self.job.progress.fractionCompleted + 0.2
            self.updateTimeEstimates()
        }
        .store(in: &cancellables)

    // Save the trained classifier when the job finishes.
    job.result
        .receive(on: DispatchQueue.main)
        .sink(receiveCompletion: { completion in
            switch completion {
            case .failure(let error):
                print("Training Failed: \(error.localizedDescription)")
            case .finished:
                print("🎉🎉🎉🎉 TRAINING SESSION FINISHED!!!!")
                self.trainingFinished = true
            }
        }, receiveValue: { classifier in
            Task {
                await self.saveModel(classifier)
            }
        })
        .store(in: &cancellables)
}

private func createTrainingSession() -> MLTrainingSession<MLImageClassifier> {
    do {
        print("Initializing training data...")
        let trainingData: MLImageClassifier.DataSource = .labeledDirectories(at: trainingDataURL)
        let modelParameters = MLImageClassifier.ModelParameters(
            validation: .split(strategy: .automatic),
            augmentation: self.augmentations,
            algorithm: .transferLearning(
                featureExtractor: .scenePrint(revision: 2),
                classifier: .logisticRegressor
            )
        )
        let sessionParameters = MLTrainingSessionParameters(
            sessionDirectory: self.sessionDirectoryURL,
            reportInterval: 1,
            checkpointInterval: 100,
            iterations: self.numberOfIterations
        )

        print("Initializing training session...")
        let trainingSession: MLTrainingSession<MLImageClassifier>
        if FileManager.default.fileExists(atPath: self.sessionDirectoryURL.path) && isSessionCreated(atPath: self.sessionDirectoryURL.path) {
            // A previous session exists on disk, so restore it instead of starting over.
            do {
                trainingSession = try MLImageClassifier.restoreTrainingSession(sessionParameters: sessionParameters)
            } catch {
                print("Error resuming, exiting... \(error.localizedDescription)")
                fatalError()
            }
        } else {
            trainingSession = try MLImageClassifier.makeTrainingSession(
                trainingData: trainingData,
                parameters: modelParameters,
                sessionParameters: sessionParameters
            )
        }
        return trainingSession
    } catch {
        print("Failed to initialize training session: \(error.localizedDescription)")
        fatalError()
    }
}
I'm returning the following result in one of my AppIntents:
return .result(value: "Done!", dialog: IntentDialog("Speed limit \(speedLimit)"))
With iOS 18.0.1, it nicely confirmed the result of the command to the user, saying e.g. "Speed limit 60" and showing it at the top of the screen.
With iOS 18.1, it only shows/says "That's done" or "Done" at the bottom of the screen.
Am I missing something that changed in the AppIntents API since iOS 18.1?
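For completeness, the intent is declared roughly like this (a trimmed sketch; the intent name, title, and the hard-coded speed limit are placeholders):
import AppIntents

struct CheckSpeedLimitIntent: AppIntent {
    static var title: LocalizedStringResource = "Check Speed Limit"

    func perform() async throws -> some IntentResult & ReturnsValue<String> & ProvidesDialog {
        let speedLimit = 60  // placeholder value
        return .result(value: "Done!", dialog: IntentDialog("Speed limit \(speedLimit)"))
    }
}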
Does anyone know how to resolve this, or how much time it takes to get access to Playground?
Hi guys, does anyone know how long it takes to get permission to use the Playground app? I already have beta 18.2, I've been waiting for 7 days, and I would like to use the app.
I just watched the October 30 MacBook Pro Announcement where they talked about on-device local LLMs for the M4s.
What developer training resources are available where we can learn how to use custom LLM models and build our Swift apps to use both Apple Intelligence and other LLMs on device?
Is the guidance to follow the MLX GitHub repos, or were those experimental and is there now an approved workflow and tooling?
https://www.youtube.com/watch?v=G0cmfY7qdmY
Is it just me, or is early access to Image Playground not available? I've been waiting a little over 24 hours and still have no access. (No rush for the team if something is wrong.) They might be busy rolling out the first few Apple Intelligence features in the iOS 18.1 public release.