I’m concerned because my iCloud account was recently migrated to AWS (Amazon Web Services) against my will, and now it seems like people are rummaging through my files, photos, and mail. When I try to contact Apple Support, I get bumped to a spoofed site. Calling the hotline is the same: I get a Siri operator with platitudes and gaslighting but no action. I have run sysdiagnose and it looks really sketchy.
Can anyone help?
Does anyone know if the resources .copy rule in a Swift Package.swift manifest is supposed to recursively copy a directory's full contents when it's pointed at a directory?
The docs say…
If you pass a directory path to the copy rule, the compiler retains the directory’s structure.
…but you can interpret that in a few different ways.
It also doesn’t appear to work if the directory you specify only contains directories.
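For reference, this is the kind of manifest I mean (a minimal sketch; the package, target, and directory names are placeholders):

// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyPackage",
    targets: [
        .target(
            name: "MyPackage",
            resources: [
                // Pointing .copy at a directory; per the docs, the
                // directory's structure should be retained in the bundle.
                .copy("Assets")
            ]
        )
    ]
)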
Are we allowed to use AI-generated images for our Swift Student Challenge project? The instructions say that we can use third-party content but must cite/credit it and be transparent about it. Are we allowed to do the same for AI-generated images? Will this dock points for creativity?
Your app still contains features that mimic the iOS interface or behavior.
I have a simple app that uses a NavigationSplitView with three panels:
a section for Filters and user-created Categories in panel 1,
a list of "Requests" from the selected Filters/Categories in panel 2,
and details of a request in panel 3.
It's designed to be simple and easy to use. How can it NOT "mimic the iOS interface" if I am using Apple's own APIs? What should I do to get around this?
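For reference, the structure is essentially this (a minimal sketch; the type names and sample labels are placeholders, not my real code):

import SwiftUI

struct ContentView: View {
    @State private var selectedFilter: String?
    @State private var selectedRequest: String?

    var body: some View {
        NavigationSplitView {
            // Panel 1: filters and user-created categories
            List(["All", "Open", "Closed"], id: \.self, selection: $selectedFilter) {
                Text($0)
            }
            .navigationTitle("Filters")
        } content: {
            // Panel 2: requests matching the selected filter/category
            List(["Request A", "Request B"], id: \.self, selection: $selectedRequest) {
                Text($0)
            }
            .navigationTitle(selectedFilter ?? "Requests")
        } detail: {
            // Panel 3: details of the selected request
            Text(selectedRequest ?? "Select a request")
        }
    }
}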
I am using SwiftData with CloudKit to synchronize data across multiple devices, and I have encountered an issue: occasionally, abnormal sync behavior occurs between two devices (it does not happen 100% of the time—only some users have reported this problem). It seems as if synchronization between the two devices completely stops; no matter what operations are performed on one end, the other end shows no response.
After investigating, I suspect the issue might be caused by both devices simultaneously modifying the same field, which could lead to CloudKit's logic being unable to handle such conflicts and causing the sync to stall. Are there any methods to avoid or resolve this situation?
Of course, I’m not entirely sure if this is the root cause. Has anyone encountered a similar issue?
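Not from the post above, just an illustration of the conflict angle: in a plain Core Data + CloudKit stack (SwiftData doesn't expose this directly), the merge policy is what decides the winner when both sides modify the same field. A minimal sketch, assuming a hypothetical model file named "Model":

import CoreData

let container = NSPersistentCloudKitContainer(name: "Model") // hypothetical model name
container.loadPersistentStores { _, error in
    if let error { fatalError("Store failed to load: \(error)") }
}
// Resolve conflicting saves property by property, favoring the
// in-memory changes, instead of failing the save with a merge conflict.
container.viewContext.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
container.viewContext.automaticallyMergesChangesFromParent = true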
I have an iOS application that adopts UITextInput to enter text. I have also overridden pressesBegan() and pressesEnded() in order to do some extra keyboard management (auto-repeat, special actions for arrow keys, function keys, and so on).
That works well for languages where one keystroke produces one character (most Roman-alphabet languages, such as English and French).
For languages whose input methods compose characters from multiple keystrokes (Chinese, Japanese, Korean, Hindi), I can detect that the keyboard has been set to such a language and fall back to the default pressesBegan():
if let keyboardLanguage = self.textInputMode?.primaryLanguage {
    if keyboardLanguage.hasPrefix("hi") || keyboardLanguage.hasPrefix("zh")
        || keyboardLanguage.hasPrefix("ja") || keyboardLanguage.hasPrefix("ko") {
        super.pressesBegan(presses, with: event)
    }
}
But that strategy fails with the Tiếng Việt Telex keyboard (for Vietnamese language input). The way that keyboard works (as you can see if you open a document in Pages) is that you type as you go: T-i-e-n-g V-i-e-t T-e-l-e-x and the system adds the relevant diacritics once you've finished a word, so typing "Tieng Viet Telex" gives you "Tiếng Việt Telex" on the screen.
Is there any documentation on the inner workings of this specific keyboard? What should I do (or not do) in order to make my application compatible with Tiếng Việt Telex?
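One direction I've been considering, though I haven't verified it for Telex specifically (a sketch only; this method lives in the UITextInput-adopting view): instead of matching language prefixes, defer to the system whenever the input context has marked text, i.e. whenever an input method is mid-composition:

override func pressesBegan(_ presses: Set<UIPress>, with event: UIPressesEvent?) {
    // If an input method is composing (markedTextRange is non-nil),
    // let the default handling run untouched.
    if self.markedTextRange != nil {
        super.pressesBegan(presses, with: event)
        return
    }
    // ... custom auto-repeat / arrow-key / function-key handling here ...
    super.pressesBegan(presses, with: event)
}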
Hi everyone,
I am currently developing an app for my Swift Student Challenge submission. One of the key features of my app is visualizing user progress over time using SwiftData and Swift Charts.
I have a concern regarding the first-time experience for the reviewer. Since the app relies on accumulated data to display meaningful trends, the dashboard/charts will appear empty on the very first launch, which might not fully showcase the visualization logic I’ve implemented.
To ensure the reviewer can immediately grasp the app's potential within their limited review window, I am considering generating pre-populated sample data (mock data) only on the initial launch.
Does Apple generally recommend including sample data in a challenge submission to better demonstrate UI/UX and data visualization capabilities?
Or is it strictly preferred to present a "fresh" empty state, as a real first-time user would see it?
I want to make sure I am following the best practices for the challenge while highlighting my technical implementation of the SwiftData and Charts frameworks.
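Concretely, the seeding I have in mind looks something like this (a minimal sketch; ProgressEntry and the UserDefaults key are placeholders for my real model):

import Foundation
import SwiftData

@Model
final class ProgressEntry {
    var date: Date
    var value: Double
    init(date: Date, value: Double) {
        self.date = date
        self.value = value
    }
}

@MainActor
func seedSampleDataIfNeeded(context: ModelContext) {
    let key = "didSeedSampleData"
    guard !UserDefaults.standard.bool(forKey: key) else { return }
    // Thirty days of synthetic demo values so the charts have a trend to show.
    for dayOffset in 0..<30 {
        let date = Calendar.current.date(byAdding: .day, value: -dayOffset, to: .now)!
        context.insert(ProgressEntry(date: date, value: Double.random(in: 0...100)))
    }
    UserDefaults.standard.set(true, forKey: key)
}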
Thank you in advance for your advice!
When I try to invoke the tkinter module in Python 3 that is bundled with Xcode Developer Tools, I get a message saying that my system version is too low:
$ /usr/bin/python3 -m tkinter
macOS 26 (2602) or later required, have instead 16 (1602) !
zsh: abort /usr/bin/python3 -m tkinter
It seems the system version being reported is macOS 16, which I assume is the internal version number from before the decision to rename all OS platforms to 26. This is a very basic mistake and should be fixed as soon as possible.
Fatal Exception: NSInternalInconsistencyException
Cannot remove an observer <WKWebView 0x135137800> for the key path "configuration.enforcesChildRestrictions" from <STScreenTimeConfigurationObserver 0x13c6d7460>, most likely because the value for the key "configuration" has changed without an appropriate KVO notification being sent. Check the KVO-compliance of the STScreenTimeConfigurationObserver class.
I noticed that on iOS 26, WKWebView registers an STScreenTimeConfigurationObserver. Is this an iOS 26 system issue? What should I do?
I'm experimenting with Foundation Models and I'm trying to understand how to define a Tool whose input argument is defined at runtime. Specifically, I want a Tool that takes a single String parameter that can only take certain values defined at runtime.
I think my question is basically the same as this one: https://developer.apple.com/forums/thread/793471. However, the answer provided by the engineer doesn't actually demonstrate how to create the GenerationSchema. Trying to piece things together from the documentation that the engineer linked to, I came up with this:
let citiesDefinedAtRuntime = ["London", "New York", "Paris"]

let citySchema = DynamicGenerationSchema(
    name: "CityList",
    properties: [
        DynamicGenerationSchema.Property(
            name: "city",
            schema: DynamicGenerationSchema(
                name: "city",
                anyOf: citiesDefinedAtRuntime
            )
        )
    ]
)

let generationSchema = try GenerationSchema(root: citySchema, dependencies: [])
let tools = [CityInfo(parameters: generationSchema)]
let session = LanguageModelSession(tools: tools, instructions: "...")
With the CityInfo Tool defined like this:
struct CityInfo: Tool {
    let name: String = "getCityInfo"
    let description: String = "Get information about a city."
    let parameters: GenerationSchema

    func call(arguments: GeneratedContent) throws -> String {
        let cityName = try arguments.value(String.self, forProperty: "city")
        print("Requested info about \(cityName)")
        let cityInfo = getCityInfo(for: cityName)
        return cityInfo
    }

    func getCityInfo(for city: String) -> String {
        // some backend that provides the info
    }
}
This compiles and usually seems to work. However, sometimes the model will try to request info about a city that is not in citiesDefinedAtRuntime. For example, if I prompt the model with "I want to travel to Tokyo in Japan, can you tell me about this city?", the model will try to request info about Tokyo, even though this is not in the citiesDefinedAtRuntime array.
My understanding is that this should not be possible – constrained generation should only allow the LLM to generate an input argument from the list of cities defined in the schema.
Am I missing something here or overcomplicating things?
What's the correct way to make sure the LLM can only call a Tool with an input parameter from a set of possible values defined at runtime?
Many thanks!
(Xcode 26.2, iPhone 17 Pro)
I can't seem to get hardware tag checks to work in an app launched without the special "Hardware Memory Tagging" diagnostics. In other words, I have been unable to reproduce the crash example at 6:40 in Apple's video "Secure your app with Memory Integrity Enforcement".
When I write a heap overflow or a UAF, it is picked up perfectly provided I enable the "Hardware Memory Tagging" feature under Scheme Diagnostics.
If I instead add the Enhanced Security capability with the memory-tagging related entitlements:
I'm seeing distinct memory tags being assigned in pointers returned by malloc (without the capability, this is not the case)
Tag mismatches are not being caught or enforced, regardless of soft mode
The behaviour is the same whether I launch from Xcode without "Hardware Memory Tagging", or launch the app by tapping its icon on the Home Screen. In case it was related to debug builds, I also tried creating an ad hoc IPA, and it didn't make any difference.
I realise there's a wrinkle here in that the debugger sets MallocTagAll=1, so possibly it will pick up a wider range of issues. However, I would have expected a straight UAF to be caught. For example, this test code demonstrates that tagging is active, yet it doesn't crash:
#define PTR_TAG(p) ((unsigned)(((uintptr_t)(p) >> 56) & 0xF))

void *p1 = malloc(32);
void *p2 = malloc(32);
void *p3 = malloc(32);
os_log(OS_LOG_DEFAULT, "p1 = %p (tag: %u)\n", p1, PTR_TAG(p1));
os_log(OS_LOG_DEFAULT, "p2 = %p (tag: %u)\n", p2, PTR_TAG(p2));
os_log(OS_LOG_DEFAULT, "p3 = %p (tag: %u)\n", p3, PTR_TAG(p3));

free(p2);
void *p2_realloc = malloc(32);
os_log(OS_LOG_DEFAULT, "p2 after free+malloc = %p (tag: %u)\n", p2_realloc, PTR_TAG(p2_realloc));

// Is p2_realloc the same address as p2 but with a different tag?
os_log(OS_LOG_DEFAULT, "Same address? %s\n",
       ((uintptr_t)p2 & 0x00FFFFFFFFFFFFFF) == ((uintptr_t)p2_realloc & 0x00FFFFFFFFFFFFFF)
           ? "YES" : "NO");

// Now try to use the OLD pointer p2.
os_log(OS_LOG_DEFAULT, "Attempting use-after-free via old pointer p2...\n");
volatile char c = *(volatile char *)p2; // Should this crash?
os_log(OS_LOG_DEFAULT, "Read succeeded! Value: %d\n", c);
Example output:
p1 = 0xf00000b71019660 (tag: 15)
p2 = 0x200000b711958c0 (tag: 2)
p3 = 0x300000b711958e0 (tag: 3)
p2 after free+malloc = 0x700000b71019680 (tag: 7)
Same address? NO
Attempting use-after-free via old pointer p2...
Read succeeded! Value: -55
For reference, these are my entitlements.
[Dict]
    [Key] application-identifier
    [Value]
        [String] …
    [Key] com.apple.developer.team-identifier
    [Value]
        [String] …
    [Key] com.apple.security.hardened-process
    [Value]
        [Bool] true
    [Key] com.apple.security.hardened-process.checked-allocations
    [Value]
        [Bool] true
    [Key] com.apple.security.hardened-process.checked-allocations.enable-pure-data
    [Value]
        [Bool] true
    [Key] com.apple.security.hardened-process.dyld-ro
    [Value]
        [Bool] true
    [Key] com.apple.security.hardened-process.enhanced-security-version
    [Value]
        [Int] 1
    [Key] com.apple.security.hardened-process.hardened-heap
    [Value]
        [Bool] true
    [Key] com.apple.security.hardened-process.platform-restrictions
    [Value]
        [Int] 2
    [Key] get-task-allow
    [Value]
        [Bool] true
What do I need to do to make Memory Integrity Enforcement do something outside the debugger?
It is vital for Apple to refine its OCR models to correctly distinguish between Khmer and Thai scripts. Incorrectly labeling Khmer text as Thai is more than a technical bug; it is a culturally insensitive error that impacts national identity, especially given the current geopolitical climate between Cambodia and Thailand. Implementing a more robust language-detection threshold would prevent these harmful misidentifications.
There is a significant logic flaw in the VNRecognizeTextRequest language detection when processing Khmer script. When the property automaticallyDetectsLanguage is set to true, the Vision framework frequently misidentifies Khmer characters as Thai.
While both scripts share historical roots, they are distinct languages with different alphabets. Currently, the model’s confidence threshold for distinguishing between these two scripts is too low, leading to incorrect OCR output in both developer-facing APIs and Apple’s native ecosystem (Preview, Live Text, and Photos).
import SwiftUI
import Vision

class TextExtractor {
    func extractText(from data: Data, completion: @escaping (String) -> Void) {
        let request = VNRecognizeTextRequest { (request, error) in
            guard let observations = request.results as? [VNRecognizedTextObservation] else {
                completion("No text found.")
                return
            }
            let recognizedStrings = observations.compactMap { observation -> String? in
                guard let str = observation.topCandidates(1).first?.string else { return nil }
                return "{text: \(str), confidence: \(observation.confidence)}"
            }
            completion(recognizedStrings.joined(separator: "\n"))
        }
        request.automaticallyDetectsLanguage = true // <-- This is the issue.
        request.recognitionLevel = .accurate

        let handler = VNImageRequestHandler(data: data, options: [:])
        DispatchQueue.global(qos: .background).async {
            do {
                try handler.perform([request])
            } catch {
                completion("Failed to perform OCR: \(error.localizedDescription)")
            }
        }
    }
}
Recognizing Khmer
Confidence score is low for Khmer text (the output is in Thai, with a low confidence score).
Recognizing English
Confidence score is high, as expected.
Recognizing Thai
Confidence score is high, as expected.
Issues in Preview and Photos
Khmer text (original image):
Copied text:
Kouk Pring Chroum Temple [19121 รอาสายสุกตีนานยารรีสใหิสรราภูชิตีนนสุฐตีย์ [รุก
เผือชิษาธอยกัตธ์ตายตราพาษชาณา ถวเชยาใบสราเบรถทีมูสินตราพาษชาณา ทีมูโษา เช็ก
อาษเชิษฐอารายสุกบดตพรธุรฯ ตากร"สุก"ผาตากรธกรธุกเยากสเผาพศฐตาสาย รัอรณาษ"ตีพย"
สเผาพกรกฐาภูชิสาเครๆผู:สุกรตีพาสเผาพสรอสายใผิตรรารตีพสๆ เดียอลายสุกตีน
ธาราชรติ ธิพรหณาะพูชุบละเาหLunet De Lajonquiere ผารูกรสาราพารผรผาสิตภพ ตารสิทูก ธิพิ
คุณที่นสายเระพบพเคเผาหนารเกะทรนภาษเราภุพเสารเราษทีเลิกสญาเราหรุฬารชสเกาก เรากุม
สงสอบานตรเราะากกต่ายภากายระตารุกเตียน
Recommended Solutions
1. Set a Threshold
Filter out any result whose confidence is at or below 0.5, so that low-quality text does not end up in the output and cause this issue.
For example,
let recognizedStrings = observations.compactMap { observation -> String? in
    if observation.confidence <= 0.5 {
        return nil
    }
    guard let str = observation.topCandidates(1).first?.string else { return nil }
    return "{text: \(str), confidence: \(observation.confidence)}"
}
2. Add Khmer Language Support
This issue would not occur at all if the model were able to detect and recognize images containing Khmer text.
Doc2Text GitHub: https://github.com/seanghay/Doc2Text-Swift
In an AppleScript applet, compiling and exporting in Script Editor replaces a custom icon with the default one. To retain a custom icon it is necessary, after exporting, to use Finder's "Get Info" to copy the icon from another file and paste it onto the applet's icon. The custom icon is stored in the "Icon?" file, located in the root of the applet bundle. The applet can then be signed and notarized.
With macOS Tahoe, that procedure no longer works, because the notarization process now wipes the "Icon?" file. The file remains in place but has zero size, so Finder shows the default applet icon.
Does anyone know of a way to provide a custom icon for a signed and notarized AppleScript applet ?
Hi,
I was testing the new iOS 18 behavior where NSPersistentCloudKitContainer wipes the local Core Data store if the user logs out of iCloud, for privacy purposes.
I ran the tests both with a Core Data + CloudKit app, and a simple one using SwiftData with CloudKit enabled. Results were identical in either case.
In my testing, most of the time the feature worked as expected. When I disabled iCloud for my app, the data was wiped (consistent with, say, the Notes app, except that Notes warns you that disabling iCloud will remove those notes). When I re-enabled iCloud, the data reappeared. (All of this was done through the Settings app.)
However, in scenarios when NSPersistentCloudKitContainer cannot immediately sync -- say due to rate throttling -- and one disables iCloud in Settings, this wipes the local data store and ultimately results in data loss.
This occurs even if the changes to the managed objects are saved (to the local store) -- it's simply they aren't synced in time.
It can be a little hard to reproduce the issue, especially since exiting the app to the Home Screen generally triggers a sync. To avoid this, I swiped up to the app switcher and immediately closed my app. Then you can disable iCloud and run the app again (attaching a debugger is helpful). I once saw a message along the lines of "export failed" (for my record that hadn't synced), and unfortunately the record was deleted (and never synced).
Perhaps before NSPersistentCloudKitContainer wipes the local store it ought to force sync with the cloud first?
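For anyone trying to reproduce this, a minimal sketch of how to watch the container's sync events and see whether the last export actually completed before iCloud is toggled off (the function name is arbitrary):

import CoreData

func observeSyncEvents(for container: NSPersistentCloudKitContainer) {
    NotificationCenter.default.addObserver(
        forName: NSPersistentCloudKitContainer.eventChangedNotification,
        object: container,
        queue: .main
    ) { note in
        guard let event = note.userInfo?[NSPersistentCloudKitContainer.eventNotificationUserInfoKey]
                as? NSPersistentCloudKitContainer.Event else { return }
        // A finished export has a non-nil endDate; if succeeded is false,
        // locally saved changes may still be waiting to reach CloudKit.
        if event.type == .export, event.endDate != nil {
            print("Export \(event.succeeded ? "succeeded" : "failed"), error: \(String(describing: event.error))")
        }
    }
}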
I have a single multiplatform application that uses NSPersistentCloudKitContainer.
This works great, except I noticed when I open two instances of the same process (not windows) on the same computer, which share the same store, data duplication and "Metadata Inconsistency" errors start appearing.
This answer (https://stackoverflow.com/a/67243833) says this is not supported with NSPersistentCloudKitContainer.
Is this indeed true?
If it isn't allowed, is the only solution to disable multiple instances of the process via a lock file? I was thinking one could somehow coordinate a single "leader" process that syncs to the cloud, with the others using NSPersistentContainer, but this would be complicated when the "leader" process terminates.
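For the lock-file idea, this is roughly what I had in mind (a minimal sketch; the file name is arbitrary, and flock gives an advisory lock that is released automatically when the process exits):

import Foundation

func acquireSingleInstanceLock() -> Bool {
    let dir = FileManager.default.urls(for: .applicationSupportDirectory,
                                       in: .userDomainMask)[0]
    let lockURL = dir.appendingPathComponent("MyApp.lock") // arbitrary name
    let fd = open(lockURL.path, O_CREAT | O_RDWR, 0o644)
    guard fd >= 0 else { return false }
    // Non-blocking exclusive lock: fails immediately if another
    // instance of the process already holds it.
    return flock(fd, LOCK_EX | LOCK_NB) == 0
}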
Currently, it seems iPad split views are new windows, not processes -- but overall I'm still curious :0
Thank you!
What OS will a Swift Student Challenge submission run on? I want to use iOS 26 features, but the version history for Swift Playgrounds doesn't show it being updated past the iOS 17.5 SDK. So, can I still use features from the iOS 26 SDK?
I'm reading the "Testing Age Assurance in Sandbox" doc, but I couldn't figure out the step:
2. Tap Sandbox Testing from the main menu
Where is the "main menu"?
We are integrating Apple’s DeclaredAgeRange SDK. To comply with relevant regulatory requirements, our understanding is as follows:
The app is only required to obtain the declared age range for users located in Texas.
For users outside of Texas, we should not proactively request age range information.
Accordingly, we would like to confirm the following:
Are we required to present the age range request prompt to all users in the United States?
If yes, we are concerned that this may significantly impact the overall user experience.
If it is permissible to request age range only for Texas users, how can we reliably determine whether a user is located in Texas on the client side?
For example, does Apple provide an API or recommended method for accurately identifying a user’s region (specifically Texas)?
Happy new year to all!
I have created an iOS app that also runs on Apple Vision Pro.
On iOS, when you present the fileImporter modal, you can swipe it down to dismiss.
However, in visionOS, this same modal CANNOT be swiped down to cancel/dismiss. If you are drilled deep into a file hierarchy, you have to navigate back to the top level to tap X to dismiss.
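For reference, the presentation in question is just the standard modifier (a minimal sketch; the content type and labels are placeholders):

import SwiftUI
import UniformTypeIdentifiers

struct ImporterView: View {
    @State private var isImporting = false

    var body: some View {
        Button("Import a file") { isImporting = true }
            .fileImporter(isPresented: $isImporting,
                          allowedContentTypes: [.plainText]) { result in
                // On iOS this sheet can be swiped down to dismiss;
                // running as an iOS app on Vision Pro, it cannot.
                switch result {
                case .success(let url):
                    print("Picked \(url)")
                case .failure(let error):
                    print("Import failed: \(error)")
                }
            }
    }
}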
Is there a way to add swipe down to the visionOS implementation of fileImporter, or any other workaround so the user doesn't have to navigate back to the top to dismiss?
Again, this is not a visionOS app but an iOS app that is compatible with Vision Pro.
Thanks!
Hi everyone,
I subscribed to the Apple Developer Program on Tuesday evening, November 4th, 2025. The payment has already been charged to my bank account, but my account still shows the status “Pending” with the message “Subscribe your membership”.
It’s now been several days, and I haven’t received any confirmation email or any request for additional information.
I already contacted Apple Support by email, but I’d like to know if other developers have experienced the same situation and how long it took before their account was activated.
Thanks in advance for your help and feedback!
— Martin