After installing the iOS 18.2 public beta, Apple Intelligence has been stuck on downloading, and now I'm back to the old Siri, which kinda sucks. lol
Any help or tips to get my Apple Intelligence back?
I updated to iOS 18.2 beta 2, and with it I can see behind the "early access requested" screen just by swiping down, but when I do swipe down, the app closes. Does this mean I'm close to getting access?
Before I updated to 18.2, I was missing Apple Intelligence features such as ChatGPT integration. Now that I've updated to 18.2, I'm still missing Apple Intelligence and ChatGPT integration altogether, and my Siri has turned back into the old Siri. What can I do?
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Image Playground Error: Cannot find protocol declaration for 'ImageGenerationViewControllerDelegate'
import UIKit
import ImagePlayground

@available(macCatalyst 18.1, *)
@available(iOS 18.1, *)
extension CKImageSelectionManager: ImagePlaygroundViewController.Delegate {
    public func imagePlaygroundViewController(_ imagePlaygroundViewController: ImagePlaygroundViewController, didCreateImageAt imageURL: URL) {
        // Handle the generated image at imageURL.
    }

    func presentImagePlayground() {
        let imagePlaygroundVC = ImagePlaygroundViewController()
        // Set delegate to self to receive the callback.
        imagePlaygroundVC.delegate = self
        imagePlaygroundVC.isModalInPresentation = true // Prevents dismissal with a swipe if needed.
        self.delegate?.presentImageSelectionViewController(imagePlaygroundVC)
    }
}
This generates an error in the Xcode-generated Swift header.
I am working to add Spotlight indexing for my app entities as discussed in WWDC24's video "What's New in App Intents".
That video goes over the IndexedEntity protocol and the integration with Spotlight via CSSearchableItemAttributeSet.
What I'm seeing though does not match the video. In the video, the presenter goes through the sort of progressive approach you can take to getting this data into Spotlight starting with the basics and then expanding to include more support depending on how much the developer wants to do.
What I'm seeing is that if you conform to IndexedEntity, your entities will appear in Spotlight using the name derived from
public var displayRepresentation: DisplayRepresentation
So that works: the name appears. But the next part of the video covers expanding your implementation with more Spotlight metadata via CSSearchableItemAttributeSet. The issue I'm seeing is that once that's implemented, the items disappear from Spotlight, almost as if that implementation overrides the base implementation in a way that no longer functions.
My expectation is that an item with custom attributes would use them in Spotlight as appropriate, not disappear from search, i.e. what's shown in the video should work.
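For reference, here is roughly what the conformance looks like. This is a minimal sketch; the entity, query, and property names are illustrative and not taken from the sample project:
import AppIntents
import CoreSpotlight
import UniformTypeIdentifiers

struct TrailEntity: AppEntity, IndexedEntity {
    static var typeDisplayRepresentation = TypeDisplayRepresentation(name: "Trail")
    static var defaultQuery = TrailQuery()

    var id: UUID
    var name: String
    var region: String

    // Base case: with IndexedEntity alone, this title is what shows up in Spotlight.
    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(name)")
    }

    // Expanded case: adding custom attributes is the step after which the items stop appearing for me.
    var attributeSet: CSSearchableItemAttributeSet {
        let attributes = CSSearchableItemAttributeSet(contentType: .content)
        attributes.displayName = name
        attributes.contentDescription = region
        return attributes
    }
}

struct TrailQuery: EntityQuery {
    func entities(for identifiers: [TrailEntity.ID]) async throws -> [TrailEntity] { [] }
}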
I've got a sample project here:
https://hanchor.s3.amazonaws.com/misc/IndexingTest.zip
To reproduce with the sample:
1. Build and run. Indexing is set up in the init() method, so it will just run.
2. Go to Spotlight and search for 'Huntersblau', a string included in the content set. At this point you should see a result - good!
3. Stop the app, go back, and uncomment the var attributeSet: CSSearchableItemAttributeSet implementation in IndexingTestApp.swift. This will provide custom attributes to Spotlight.
4. Repeat steps 1 and 2 - you'll see it no longer appears in the search results: once CSSearchableItemAttributeSet is implemented, the item drops out of Spotlight.
https://developer.apple.com/machine-learning/models/
Adding the DepthAnythingV2SmallF16.mlpackage to a new project in Xcode 16.1 and invoking the class crashes the app.
Anyone else having the same issue?
I tried the Xcode 16.2 beta and it produces the same response.
Code
import UIKit
import CoreML

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        do {
            // Use a default model configuration.
            let defaultConfig = MLModelConfiguration()

            // App crashes here, inside the generated model class's initializer.
            let model = try DepthAnythingV2SmallF16(
                configuration: defaultConfig
            )
            _ = model
        } catch {
            // Never reached: the failure is an assertion in MPSGraph, not a thrown error.
        }
    }
}
Response
/AppleInternal/Library/BuildRoots/4b66fb3c-7dd0-11ef-b4fb-4a83e32a47e1/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:129: failed assertion `Error: unhandled platform for MPSGraph serialization'
Topic:
Machine Learning & AI
SubTopic:
Core ML
I have been attempting to get early access to image creation (iPad, iOS 18.2), and it has taken days. I still don't have it and feel I never will. Someone heeelelllppppp!
Hello,
I’m attempting to convert a TensorFlow model to CoreML using the coremltools package, but I’m encountering an error during the conversion process. The error traceback points to an issue within the Cast operation in MIL (the Model Intermediate Language) when it tries to perform type inference:
AttributeError: 'float' object has no attribute 'astype'
Here is the relevant part of the error traceback:
File "~/.pyenv/versions/3.10.12/lib/python3.10/site-packages/coremltools/converters/mil/mil/ops/defs/iOS15/elementwise_unary.py", line 896, in get_cast_value
return input_var.val.astype(dtype=type_map[dtype_val])
I’ve tried converting a model from the yamnet-tensorflow2 repository, and this error occurs when CoreML tries to cast a float type during the conversion of certain operations. I’m currently using Python 3.10 and coremltools version 6.0.1, with TensorFlow 2.x.
Has anyone encountered a similar issue or can offer suggestions on how to resolve this?
I’ve also considered that this might be related to mismatches in the model’s data types, but I’m not sure how to proceed.
Platform and package versions:
coremltools 6.1
tensorflow 2.10.0
tensorflow-estimator 2.10.0
tensorflow-hub 0.16.1
tensorflow-io-gcs-filesystem 0.37.1
Python 3.10.12
pip 24.3.1 from ~/.pyenv/versions/3.10.12/lib/python3.10/site-packages/pip (python 3.10)
Darwin MacBook-Pro.local 24.1.0 Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:27 PDT 2024; root:xnu-11215.41.3~2/RELEASE_X86_64 x86_64
Any help or pointers would be greatly appreciated!
It's been hours and still no access to Image Playground.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Apple Intelligence is not working at all. It was working OK under 18.1, but nothing under 18.2??
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I have been on the 18.2 beta since it launched, and I cannot seem to get the playground; it keeps saying I've requested access. Why? Please help.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I have a model that uses a CoreML delegate, and I’m getting the following warning whenever I set the model to nil. My understanding is that CoreML is creating a cache in the app’s storage but is having issues clearing it. As a result, the app’s storage usage increases every time the model is loaded.
This StackOverflow post explains the problem in detail: App Storage Size Increases with CoreML usage
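For clarity, this is roughly the load-and-release pattern I mean. It's a minimal sketch with placeholder names, loading the model directly rather than through the delegate:
import CoreML

final class ModelRunner {
    private var model: MLModel?
    private let modelURL: URL   // URL of a compiled .mlmodelc in the app bundle

    init(modelURL: URL) {
        self.modelURL = modelURL
    }

    func load() throws {
        let config = MLModelConfiguration()
        config.computeUnits = .all   // allows Core ML to run the model on the Neural Engine
        model = try MLModel(contentsOf: modelURL, configuration: config)
    }

    func unload() {
        // Releasing the last reference is the point where the doUnloadModel warning is logged,
        // and the ANE cache files left in the app container keep growing across loads.
        model = nil
    }
}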
This is a critical issue because the cache will eventually fill up the phone’s storage:
doUnloadModel:options:qos:error:: model=_ANEModel: { modelURL=file:///var/mobile/Containers/Data/Application/22DDB13E-DABA-4195-846F-F884135F37FE/tmp/F38A9824-3944-420C-BD32-78CE598BE22D-10125-00000586EFDFD7D6.mlmodelc/ : sourceURL= (null) : key={"isegment":0,"inputs":{"0_0":{"shape":[256,256,1,3,1]}},"outputs":{"142_0":{"shape":[16,16,1,222,1]},"138_0":{"shape":[16,16,1,111,1]}}} : identifierSource=0 : cacheURLIdentifier=E0CD0F44FB0417936057FC6375770CFDCCC8C698592ED412DDC9C81E96256872_C9D6E5E73302943871DC2C610588FEBFCB1B1D730C63CA5CED15D2CD5A0AC0DA : string_id=0x00000000 : program=_ANEProgramForEvaluation: { programHandle=6077141501305 : intermediateBufferHandle=6077142786285 : queueDepth=127 } : state=3 : programHandle=6077141501305 : intermediateBufferHandle=6077142786285 : queueDepth=127 : attr={
ANEFModelDescription = {
ANEFModelInput16KAlignmentArray = (
);
ANEFModelOutput16KAlignmentArray = (
);
ANEFModelProcedures = (
{
ANEFModelInputSymbolIndexArray = (
0
);
ANEFModelOutputSymbolIndexArray = (
0,
1
);
ANEFModelProcedureID = 0;
}
);
kANEFModelInputSymbolsArrayKey = (
"0_0"
);
kANEFModelOutputSymbolsArrayKey = (
"138_0@output",
"142_0@output"
);
kANEFModelProcedureNameToIDMapKey = {
net = 0;
};
};
NetworkStatusList = (
{
LiveInputList = (
{
BatchStride = 393216;
Batches = 1;
Channels = 3;
Depth = 1;
DepthStride = 393216;
Height = 256;
Interleave = 1;
Name = "0_0";
PlaneCount = 3;
PlaneStride = 131072;
RowStride = 512;
Symbol = "0_0";
Type = Float16;
Width = 256;
}
);
LiveOutputList = (
{
BatchStride = 113664;
Batches = 1;
Channels = 111;
Depth = 1;
DepthStride = 113664;
Height = 16;
Interleave = 1;
Name = "138_0@output";
PlaneCount = 111;
PlaneStride = 1024;
RowStride = 64;
Symbol = "138_0@output";
Type = Float16;
Width = 16;
},
{
BatchStride = 227328;
Batches = 1;
Channels = 222;
Depth = 1;
DepthStride = 227328;
Height = 16;
Interleave = 1;
Name = "142_0@output";
PlaneCount = 222;
PlaneStride = 1024;
RowStride = 64;
Symbol = "142_0@output";
Type = Float16;
Width = 16;
}
);
Name = net;
}
);
} : perfStatsMask=0} was not loaded by the client.
I would like to explore restricting system prompts from sharing information with ChatGPT on my company's laptops. Is this possible, and if so, how can it be implemented? Should this be centralized, or should each user handle it individually?
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Tags:
App Tracking Transparency
Device Management
Woke up to a notification saying Playground, Genmoji, etc. were ready, but every time I try to use them it says early access was requested. Has anyone else had this issue? If so, how did you fix it?
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I got access, and then I opened the app and it said still waiting. I was very sad.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I requested early access a few days ago, and when I check, it still says "we will notify you when it is ready." Can Apple please fix this problem with Image Playground?
Where can I find an example of using this MPSGraph function? I'm trying to use it to paste an image into a larger canvas at certain coordinates.
func sliceUpdateDataTensor(
_ dataTensor: MPSGraphTensor,
update updateTensor: MPSGraphTensor,
starts: [NSNumber],
ends: [NSNumber],
strides: [NSNumber],
startMask: UInt32,
endMask: UInt32,
squeezeMask: UInt32,
name: String?
) -> MPSGraphTensor
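For concreteness, this is the kind of call I'm attempting. It's a rough sketch assuming NHWC tensors and the usual strided-slice convention (starts inclusive, ends exclusive), so the coordinates may well be the part I have wrong:
import MetalPerformanceShadersGraph

let graph = MPSGraph()

// 512x512 RGB canvas and a 100x100 RGB patch to paste at (y: 50, x: 200).
let canvas = graph.placeholder(shape: [1, 512, 512, 3], dataType: .float32, name: "canvas")
let patch = graph.placeholder(shape: [1, 100, 100, 3], dataType: .float32, name: "patch")

let pasted = graph.sliceUpdateDataTensor(
    canvas,
    update: patch,
    starts: [0, 50, 200, 0],    // top-left corner of the destination region
    ends: [1, 150, 300, 3],     // starts plus the patch extent in each dimension
    strides: [1, 1, 1, 1],
    startMask: 0,
    endMask: 0,
    squeezeMask: 0,
    name: "paste"
)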
How long does it usually take to get access to Image Playground? It's been about a week since I got the iOS 18.2 public beta, and I am still waiting for access to Image Playground. When I got Apple Intelligence, it only took a few hours.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
My Playground app has been stuck at "Downloading support for Image Playground" for about 4-5 days now. I've reinstalled the app, restarted the phone multiple times, and reset network settings 2-3 times. I've tried LTE/5G and different Wi-Fi networks, and it still sits showing "Downloading Support."
Anyone else having this issue?
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Hi all,
When executing an HLO program using the JAX metal PJRT plugin, the program fails due to an unsupported data type returned by the rng_bit_generator operation.
The generated HLO includes:
%output_state, %output = "mhlo.rng_bit_generator"(%1) <{rng_algorithm = #mhlo.rng_algorithm<PHILOX>}> : (tensor<3xi64>) -> (tensor<3xi64>, tensor<3xui32>)
The error message indicates that:
Metal only supports MPSDataTypeFloat16, MPSDataTypeBFloat16, MPSDataTypeFloat32, MPSDataTypeInt32, and MPSDataTypeInt64.
The use of ui32 seems to be incompatible with Metal’s allowed types.
I’m trying to understand whether the ui32 output is the problem, or whether my use of rng_bit_generator is wrong.
Could you clarify if there is a workaround or planned support for ui32 output in this context? Alternatively, guidance on configuring rng_bit_generator for compatibility with Metal’s supported types would be greatly appreciated.