After installing the iOS 17 beta 6 image on my iPhone 14 Pro Max, the "Developer Mode" option disappeared from system settings. Now Xcode doesn't recognize my device because Developer Mode is off. Is there any way to turn it on?
The previous iOS 17 beta 3 image works fine.
My code was working in beta 7, but since I updated I'm getting the following error when trying to initialize a RealityView:
'init(make:update:attachments:)' is unavailable in visionOS
I'm using:
RealityView { content, attachments in
    // Code ...
} update: { content, attachments in
    // Code ...
} attachments: {
    // Code ...
}
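In case it's the same issue I hit: on newer visionOS betas the attachments builder seems to expect Attachment(id:) views rather than plain tagged views. A sketch of the shape that compiles for me (the id, text, and closure bodies are placeholders, so treat this as an assumption rather than a confirmed fix):

import SwiftUI
import RealityKit

struct ExampleView: View {
    var body: some View {
        RealityView { content, attachments in
            // Pull the attachment entity into the scene by its id.
            if let panel = attachments.entity(for: "panel") {
                content.add(panel)
            }
        } update: { content, attachments in
            // update ...
        } attachments: {
            // Newer betas appear to want Attachment(id:) here instead of
            // plain views tagged with .tag(_:).
            Attachment(id: "panel") {
                Text("Hello")
            }
        }
    }
}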
When setting up a Finder Sync Extension, even when just using the minimal template provided by Xcode (File → New → Target → macOS), the right-click menu does not show up within iCloud Drive, while toolbar buttons always work.
Outside iCloud Drive multiple extensions show up when right-clicking on Finder's background.
When right-clicking inside iCloud Drive (here my synced Desktop folder), they do not show up.
Before macOS Sonoma this worked perfectly for me, but it broke beginning with the first beta. No Finder extension from any app (such as Keka's) works anymore. I have seen it discussed that no more than one Finder extension can be active in a directory, but that is not true, as can be seen in the first screenshot.
Q: How can I circumvent this issue?
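For context, the extension class is essentially the stock template, along these lines (the monitored directory below is just an example; the real template monitors a temporary folder):

import Cocoa
import FinderSync

class FinderSync: FIFinderSync {

    override init() {
        super.init()
        // Tell Finder which directories this extension cares about
        // (example: everything under the home directory).
        FIFinderSyncController.default().directoryURLs = [FileManager.default.homeDirectoryForCurrentUser]
    }

    // This menu never appears when right-clicking inside iCloud Drive on Sonoma.
    override func menu(for menuKind: FIMenuKind) -> NSMenu {
        let menu = NSMenu(title: "")
        menu.addItem(withTitle: "Example Menu Item",
                     action: #selector(sampleAction(_:)),
                     keyEquivalent: "")
        return menu
    }

    @objc func sampleAction(_ sender: AnyObject?) {
        // Placeholder action.
    }
}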
Hello y'all,
I would like to discuss here whether anyone else is noticing that some PDF files are not rendered as expected on iOS/iPadOS 17. It seems that some text with a background (screenshot attached) is not rendered, and you can only see the background color.
The issue is reproducible in Preview and Safari, where I guess Apple is using the PDFKit framework too.
We submitted several issues via Feedback Assistant, but I haven't heard back from Apple yet.
Is anyone else able to reproduce the issue?
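For anyone who wants to check PDFKit directly, this is the kind of minimal viewer I'd use (a sketch; "sample.pdf" stands in for one of the affected files):

import UIKit
import PDFKit

// Minimal sketch: render a single PDF with PDFKit to check whether the
// framework itself drops the text with a background.
class PDFCheckViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let pdfView = PDFView(frame: view.bounds)
        pdfView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        pdfView.autoScales = true
        if let url = Bundle.main.url(forResource: "sample", withExtension: "pdf") {
            pdfView.document = PDFDocument(url: url)
        }
        view.addSubview(pdfView)
    }
}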
Thanks,
Can someone confirm that Personal Voice is available for devices running an A12 chipset? Currently it does not appear in the Speech settings of an iPad mini 5. Or was this feature pulled from the iOS 17 rollout, to be released at a later date through an update?
There is very limited information on the Internet regarding this feature on A12 chipsets.
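In case it helps others check from code, this is the kind of probe I've been using (a sketch based on the iOS 17 speech synthesis APIs; note it only reports authorization and any voices already created, not whether the Settings entry should appear on a given chipset):

import AVFoundation

// Sketch: report whether Personal Voice is usable on this device for this app.
func checkPersonalVoice() {
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else {
            print("Personal Voice not authorized, status: \(status)")
            return
        }
        // List any personal voices the user has already created.
        let personalVoices = AVSpeechSynthesisVoice.speechVoices()
            .filter { $0.voiceTraits.contains(.isPersonalVoice) }
        print("Personal voices: \(personalVoices.map(\.name))")
    }
}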
I'm trying to use autofs to mount some macFUSE filesystems. However, autofs requires custom filesystems to provide /sbin/mount_*, and that directory is neither writable nor modifiable via synthetic.conf.
Using a launch agent or daemon is not desirable, because there is a non-blocking delay before the filesystem gets mounted, which causes a race condition.
Is there any other option to let diskarbitrationd or autofs automatically mount a macFUSE filesystem?
When users update iOS to 16.6.1 or 17, the app gets stuck on the splash screen.
Has anyone else experienced issues with Spotlight losing its index of applications?
Spotlight no longer displays any applications in the results when typing an application name.
Additionally, it appears to be ignoring the Spotlight settings configured in the preferences. For example, I have "Websites" and "Siri Suggestions" turned off, yet these still appear in Spotlight's search results.
I am having troubles placing a model inside a volumetric window.
I have a model (just a simple cube created in Reality Composer Pro that is 0.2m on a side and centered at the origin), and I want to display it in a volumetric window that is 1.0m on a side while preserving the cube's original 0.2m size.
The small cube seems to be flush against the back and top of the larger volumetric window.
Is it possible to initially position the model inside the volume?
For example, can the model be placed flush against the bottom and front of the volumetric window?
(note: the actual use case is wanting to place 3D terrain (which tends to be mostly flat like a pizza box) flush against the bottom of the volumetric window)
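For what it's worth, the approach I've been experimenting with is converting the volume's bounds into RealityKit coordinates and offsetting the entity myself. A rough sketch, assuming the GeometryReader3D/convert pattern from the visionOS samples and with "Cube" as a placeholder entity name:

import SwiftUI
import RealityKit

struct TerrainVolume: View {
    var body: some View {
        GeometryReader3D { proxy in
            RealityView { content in
                // "Cube" is a placeholder for the Reality Composer Pro entity.
                guard let cube = try? await Entity(named: "Cube") else { return }
                // Convert the volume's frame from SwiftUI points to scene meters.
                let bounds = content.convert(proxy.frame(in: .local),
                                             from: .local, to: content)
                // Push the 0.2m cube flush against the bottom and front of the
                // volume (offset by half the cube's size so its faces touch).
                cube.position = [0,
                                 bounds.min.y + 0.1,
                                 bounds.max.z - 0.1]
                content.add(cube)
            }
        }
    }
}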
Two problems:
The wallpapers available in macOS Sonoma (public/GA release) are not working or downloading.
The 'Shuffle Aerials' option to rotate through different wallpapers does not work (presumably because the wallpapers aren't downloading), and the shuffle 'Continuously' option 1) doesn't state how frequently it rotates, and 2) doesn't allow users to configure the interval.
In my app I have the option to enable a help screen. This is a new view that simply shows a .html file.
It works fine up to tvOS 16.1.
In tvOS 17.0 the screen is blank.
Any ideas?
This is how it looks in tvOS 16.1
This is tvOS 17.0
// Configure the text view that displays the help manual.
textView.backgroundColor = SKColor.white
textView.textColor = SKColor.black
textView.clipsToBounds = true
textView.layer.cornerRadius = 20.0
textView.isUserInteractionEnabled = true
textView.isScrollEnabled = true
textView.showsVerticalScrollIndicator = true
textView.bounces = true
// Allow scrolling with the Siri Remote (indirect touches).
textView.panGestureRecognizer.allowedTouchTypes = [NSNumber(value: UITouch.TouchType.indirect.rawValue)]

// Load the localized HTML manual and render it as an attributed string.
if let htmlPath = Bundle.main.url(forResource: NSLocalizedString("manual", tableName: nil, comment: ""), withExtension: "html") {
    do {
        let attributedStringWithHtml = try NSAttributedString(
            url: htmlPath,
            options: [.documentType: NSAttributedString.DocumentType.html],
            documentAttributes: nil
        )
        self.textView.attributedText = attributedStringWithHtml
    } catch {
        print("Error loading text: \(error)")
    }
}
When attempting to load an mlmodel and run it on the CPU/GPU, I pass the ComputeUnit I'd like to use when creating the model:
import coremltools as ct
model = ct.models.MLModel('mymodel.mlmodel', compute_units=ct.ComputeUnit.CPU_ONLY)
Documentation for coremltools v7.0 says:
compute_units: coremltools.ComputeUnit
coremltools.ComputeUnit.ALL: Use all compute units available, including the neural engine.
coremltools.ComputeUnit.CPU_ONLY: Limit the model to only use the CPU.
coremltools.ComputeUnit.CPU_AND_GPU: Use both the CPU and GPU, but not the neural engine.
coremltools.ComputeUnit.CPU_AND_NE: Use both the CPU and neural engine, but not the GPU. Available only for macOS >= 13.0.
coremltools 7.0 (and the previous versions I've tried) now seems to ignore that hint and only runs my models on the ANE. The same model, when loaded into Xcode and run through a performance test with CPU only selected in the Xcode performance tool, runs happily on the CPU.
Is there a way in Python to get our models to run on different compute units?
I'm trying to debug a problem that's affecting customers who have upgraded to watchOS 10, and I'm unable to get any console output from the watch when I debug the watch app in Xcode, or from the Console app when connecting from my Mac.
The other weird thing is that my watch shows up twice in the device list in the Console app.
Is this a known issue?
Xcode 15 and iOS 17.0.2 are causing debugging issues when running from Xcode over a cable. Since I updated to Xcode 15 and my device to iOS 17.0.2, there is a delay of 1 to 3 minutes before the app launches on the real device. Debugging is also really slow after launch: every step over or step into takes almost a minute.
I can see the below warning in console "warning: libobjc.A.dylib is being read from process memory. This indicates that LLDB could not find the on-disk shared cache for this device. This will likely reduce debugging performance."
I tried the suggested fix of clearing the device support files by executing:
rm -r ~/Library/Developer/Xcode/iOS\ DeviceSupport
But even after that, I am still facing the same issue.
Please help me fix this issue.
As of iOS 17, SFSpeechRecognizer.isAvailable returns true even when recognition tasks cannot be fulfilled and immediately fail with the error “Siri and Dictation are disabled”.
The same speech recognition code works as expected in iOS 16.
In iOS 16, neither Siri nor Dictation needed to be enabled for speech recognition to be available, and it worked as expected. In the past, once permissions were given, only an active network connection was required for functional speech recognition.
There seem to be two issues in play:
1. In iOS 17, SFSpeechRecognizer.isAvailable incorrectly returns true when it can’t fulfil requests.
2. In iOS 17, Dictation or Siri must be enabled to handle speech recognition tasks, while in iOS 16 this wasn’t the case.
If issue 2 is expected behaviour (I surely hope not), there is no way to actually query whether Siri or Dictation is enabled, in order to handle those cases properly in code and inform the user why speech recognition doesn’t work.
Expected behaviour:
Speech recognition is available when Siri and Dictation are disabled.
SFSpeechRecognizer.isAvailable correctly returns false when no speech recognition requests can be handled.
iOS Version 17.0 (21A329)
Xcode Version 15.0 (15A240d)
Anyone else experiencing the same issues or have a solution?
Reported this to Apple as well -> FB13235751
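In the meantime, the workaround I'm experimenting with is to not trust isAvailable and instead probe with a throwaway recognition request, treating the resulting error as the real availability signal. A rough sketch (assumes speech recognition permission was already granted; "probe.m4a" is a placeholder audio file bundled with the app):

import Speech

// Sketch: probe actual availability, since isAvailable can be misleading on iOS 17.
func probeSpeechRecognition(completion: @escaping (Bool) -> Void) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.isAvailable,
          let url = Bundle.main.url(forResource: "probe", withExtension: "m4a") else {
        completion(false)
        return
    }
    let request = SFSpeechURLRecognitionRequest(url: url)
    recognizer.recognitionTask(with: request) { result, error in
        if let error {
            // On iOS 17 this is where "Siri and Dictation are disabled" shows up,
            // even though isAvailable returned true.
            print("Speech recognition unusable: \(error.localizedDescription)")
            completion(false)
        } else if result?.isFinal == true {
            completion(true)
        }
    }
}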
Hello everyone!
I'm currently working on an iOS app, developed in Swift, that involves connecting to a specific BLE (Bluetooth Low Energy) device and exchanging data even when the app is terminated or running in the background.
I'm trying to figure out a way to wake up my application when a specific Bluetooth device (whose UUID is known) becomes visible, and then connect to it and exchange data.
Is this functionality achievable?
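For what it's worth, the direction I've been exploring is Core Bluetooth background execution with state preservation and restoration. A rough sketch, assuming the device advertises a known service UUID ("FFF0" below is a placeholder) and the app has the bluetooth-central background mode; note that the system won't relaunch the app after the user force-quits it:

import CoreBluetooth

final class BLEManager: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!
    private var peripheral: CBPeripheral?              // keep a strong reference
    private let serviceUUID = CBUUID(string: "FFF0")   // placeholder service UUID

    override init() {
        super.init()
        // A restore identifier opts in to state preservation/restoration,
        // so the system can relaunch the app for Bluetooth events.
        central = CBCentralManager(delegate: self, queue: nil,
                                   options: [CBCentralManagerOptionRestoreIdentifierKey: "com.example.ble-central"])
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        // Background scanning requires scanning for explicit service UUIDs.
        central.scanForPeripherals(withServices: [serviceUUID])
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        self.peripheral = peripheral
        // A pending connect does not time out; the system completes it (and can
        // relaunch the app) when the peripheral comes into range.
        central.connect(peripheral)
    }

    func centralManager(_ central: CBCentralManager, willRestoreState dict: [String: Any]) {
        // Reclaim peripherals the system was managing on our behalf.
        if let restored = dict[CBCentralManagerRestoredStatePeripheralsKey] as? [CBPeripheral] {
            peripheral = restored.first
        }
    }

    func centralManager(_ central: CBCentralManager, didConnect peripheral: CBPeripheral) {
        // Discover services and exchange data here.
    }
}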
Thank you in advance for your help!
I'd like to be able to associate some data with each CapturedRoom scan and maintain those associations when CapturedRooms are combined in a CapturedStructure.
For example, in the delegate method captureView(didPresent:error:), I'd like to associate external data with the CapturedRoom. That's easy enough to do with a Swift dictionary, using the CapturedRoom's identifier as the key to the associated data.
However, when I assemble a list of CapturedRooms into a CapturedStructure using StructureBuilder.init(from:), the rooms in the output CapturedStructure have different identifiers so their associations to the external data are lost.
Is there any way to track or identify CapturedRoom objects that are input into a StructureBuilder to the rooms in the CapturedStructure output? I looked for something like a "userdata" property on a CapturedRoom that might be preserved, but couldn't find one. And since the room identifiers change when they are built into a CapturedStructure, I don't see an obvious way to do this.
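For context, the association I have today is just a dictionary like the sketch below (RoomMetadata is my own placeholder type); this is exactly what falls apart once the structure builder re-identifies the rooms:

import RoomPlan

// Placeholder type for the external data I want to keep attached to a room.
struct RoomMetadata {
    var label: String
}

// Lives alongside my RoomCaptureViewDelegate; shown standalone here for brevity.
final class RoomAssociations {
    // Keyed by the CapturedRoom identifier handed to the delegate.
    private var metadata: [UUID: RoomMetadata] = [:]

    // Called from captureView(didPresent:error:) with the presented room.
    func record(_ room: CapturedRoom, label: String) {
        metadata[room.identifier] = RoomMetadata(label: label)
    }

    func metadata(for room: CapturedRoom) -> RoomMetadata? {
        metadata[room.identifier]
    }
}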
I'm working on a game which uses HDR display output for a much brighter range.
One of the features of the game is the ability to export in-game photos. The only appropriate format I found for this is OpenEXR.
The embedded Photos app is capable of showing HDR photos on an HDR display.
However, if I drop an EXR file with a large range into Photos, it won't be properly displayed in HDR mode with the full range. At the same time, pressing Edit on the file makes it HDR-displayable, and it remains displayable if I save the edit with any change, even a tiny one.
Moreover, if the EXR file is placed next to a 'true' HDR one (or an EXR 'fixed' as above), then during scrolling between the files, the broken EXR magically fixes itself at the exact moment the other HDR image comes onto the screen.
I tested different files with various internal formats. It seems to be a common problem for all of them.
Tested on the latest iOS 17.0.3.
Thank you in advance.
I am encountering an intermittent issue with WKWebView in my iOS app. The problem occurs infrequently, but when it does, the WKWebView consistently displays a white screen and remains in this state until the app is forcefully terminated and relaunched.
To provide more context, here are the key characteristics of the issue:
The white screen problem occurs sporadically and is not easily reproducible.
The WKWebView remains unresponsive despite attempts to interact with it.
Reloading the webpage or navigating to a different URL does not resolve the white screen issue.
The problem persists until the app is terminated and relaunched.
This issue is specific to the WKWebView; other components of the app function correctly.
The WKWebView renders normally, and the main document synchronously loads resources both offline and online without any issues. The bridge and JavaScript execution also work as expected.
However, when interacting with the WKWebView, it becomes unresponsive to user clicks, and the web inspector fails to respond. Additionally, asynchronous network requests also do not receive any response.
The problem occurs exclusively on HTTPS pages, whereas HTTP pages load without any issues. Other components, such as workers, function correctly.
addUserScript injection during WKWebView creation is effective, and evaluateJavaScript during the page loading process works as expected. However, when the document becomes unresponsive, executing evaluateJavaScript only triggers the callback after the WKWebView is destroyed.
I have discovered a reliable method to reproduce the white screen issue in WKWebView. This method involves the following steps and conditions:
Create a WKWebView instance.
Load an HTML page using the loadRequest method (with an HTTPS URL request).
Before the WKWebView is attached to the UI (not yet visible to the user), call the evaluateJavaScript function.
This issue has occurred in almost all iOS versions, including the latest iOS 17.x version.
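To make those steps concrete, here is a minimal sketch (https://example.com and the 5-second delay are placeholders):

import UIKit
import WebKit

func reproduceWhiteScreen(in viewController: UIViewController) {
    // 1. Create a WKWebView that is not attached to any window yet.
    let webView = WKWebView(frame: viewController.view.bounds)

    // 2. Load an HTTPS page.
    webView.load(URLRequest(url: URL(string: "https://example.com")!))

    // 3. Evaluate JavaScript before the web view becomes visible.
    webView.evaluateJavaScript("document.title") { result, error in
        print(String(describing: result), String(describing: error))
    }

    // Only later attach it to the view hierarchy.
    DispatchQueue.main.asyncAfter(deadline: .now() + 5) {
        viewController.view.addSubview(webView)
    }
}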
In the context of a system utility that reports OS stats periodically, the security type of the connected Wi-Fi network could be obtained with Core WLAN via CWInterface.security.
This used to work up to Ventura; however, after upgrading to Sonoma, cwInterface.security now returns kCWSecurityUnknown.
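For reference, the query is essentially this (a sketch):

import CoreWLAN

// Sketch of the query that worked through Ventura.
func reportWiFiSecurity() {
    guard let interface = CWWiFiClient.shared().interface() else { return }
    let security = interface.security()
    print("Security type raw value: \(security.rawValue)")  // kCWSecurityUnknown on Sonoma
}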
In other posts, I have read about changes in how Core WLAN works which are related to Sonoma.
How can I determine the security type of the connected Wi-Fi network on Sonoma? It would be preferable if the suggested approach also worked on previous macOS versions.
Many thanks in advance! :-)