I was able to confirm with a customer of mine that calling copyfile with a source file that is a symbolic link on an NTFS partition always fails with the error
NSPOSIXErrorDomain 12 Cannot allocate memory
They use NTFS drivers from Paragon.
They tried copying a symbolic link from NTFS to both APFS and NTFS with the same result. Is this an issue with macOS, or with the NTFS driver?
Copying regular files, on the other hand, always works. Copying manually in the Finder also seems to always work, both with regular files and symbolic links, so I'm wondering how the Finder does it.
Here is the sample app that they used to reproduce the issue. The first open panel lets you select the source directory and the second one the destination directory. The variable filename holds the name of the symbolic link to be copied from the source to the destination. Apparently it's not possible to select a symbolic link directly in NSOpenPanel, as it always resolves to the linked file.
import Cocoa

@main
class AppDelegate: NSObject, NSApplicationDelegate {

    func applicationDidFinishLaunching(_ notification: Notification) {
        let openPanel = NSOpenPanel()
        openPanel.canChooseDirectories = true
        openPanel.canChooseFiles = false
        // First panel: source directory; second panel: destination directory.
        openPanel.runModal()
        let filename = "Modules"
        let source = openPanel.urls[0].appendingPathComponent(filename)
        openPanel.runModal()
        let destination = openPanel.urls[0].appendingPathComponent(filename)
        do {
            let state = copyfile_state_alloc()
            defer {
                copyfile_state_free(state)
            }
            var bsize = UInt32(16_777_216)
            if copyfile_state_set(state, UInt32(COPYFILE_STATE_BSIZE), &bsize) != 0 {
                throw NSError(domain: NSPOSIXErrorDomain, code: Int(errno))
            }
            if copyfile_state_set(state, UInt32(COPYFILE_STATE_STATUS_CB), unsafeBitCast(copyfileCallback, to: UnsafeRawPointer.self)) != 0 ||
                copyfile_state_set(state, UInt32(COPYFILE_STATE_STATUS_CTX), unsafeBitCast(self, to: UnsafeRawPointer.self)) != 0 ||
                copyfile(source.path, destination.path, state, copyfile_flags_t(COPYFILE_NOFOLLOW)) != 0 {
                throw NSError(domain: NSPOSIXErrorDomain, code: Int(errno))
            }
        } catch {
            let error = error as NSError
            let alert = NSAlert()
            alert.messageText = "\(error.localizedDescription)\n\(error.domain) \(error.code)"
            alert.runModal()
        }
    }

    private let copyfileCallback: copyfile_callback_t = { what, stage, state, src, dst, ctx in
        if what == COPYFILE_COPY_DATA {
            if stage == COPYFILE_ERR {
                // Abort the copy on the first data-copy error.
                return COPYFILE_QUIT
            }
            var size: off_t = 0
            copyfile_state_get(state, UInt32(COPYFILE_STATE_COPIED), &size)
        }
        return COPYFILE_CONTINUE
    }
}
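As a point of comparison, the Foundation-level API documents that copyItem(at:to:) copies a symbolic link itself rather than following it, so a sketch like the following might be worth testing on the same Paragon NTFS volumes (an assumption on my side; I haven't verified it with their driver):

import Foundation

// Alternative sketch: per the FileManager documentation, copyItem(at:to:)
// copies a symbolic link as a link instead of copying the file it points to.
func copySymbolicLink(from source: URL, to destination: URL) throws {
    try FileManager.default.copyItem(at: source, to: destination)
}

Since the Finder copies these links successfully, whatever higher-level path it uses may avoid the failing copyfile code path.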
Hi all,
I'm using the certificates API to create development certificates. I want to set up a Jenkins job that gives employees a way to create a certificate without granting them admin rights.
I can create the first certificate without any issues.
But when I try to create another certificate with a different CSR (for a different user), I get an error that a certificate already exists.
Is the API limited to one certificate per API key?
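For reference, here is roughly the call I'm making (a minimal sketch of the App Store Connect API's POST /v1/certificates request; JWT generation is omitted and the bearer token is assumed to be valid):

import Foundation

// Sketch of POST /v1/certificates from the App Store Connect API.
func createDevelopmentCertificate(csrPEM: String, token: String) async throws -> Data {
    var request = URLRequest(url: URL(string: "https://api.appstoreconnect.apple.com/v1/certificates")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(token)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let body: [String: Any] = [
        "data": [
            "type": "certificates",
            "attributes": [
                "certificateType": "DEVELOPMENT",  // a different CSR per user in our flow
                "csrContent": csrPEM
            ]
        ]
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)
    let (data, _) = try await URLSession.shared.data(for: request)
    return data
}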
Thanks!
Hello,
I am currently working on an app that features multiple environments, in which I combine Reality Composer Pro scenes with objects managed at runtime, and I make heavy use of RealityView attachments that modify the appearance of certain objects. Is it possible to keep track of an AR anchor when transitioning between immersive spaces?
About my app:
There are two main contexts/scenes in the app that the user progresses through. The first takes place in AR and is non-interactive and driven by a timeline animation. The second is in VR and allows the user to change materials of select models. Both scenes need to be placed relative to a real-life object that functions as an image anchor. Anchoring is necessary for visual purposes in the AR context, and it would be nice to use it in the VR context as well in order to provide passive haptics to the user.
If the user doesn't have access to the physical object, we make use of plane-based anchoring. Either way, we would like to keep the anchor's position across the scenes.
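For illustration, this is the pattern we are considering: capture the image anchor's pose in the AR scene, then re-create it as a world anchor for the VR scene. This is a sketch using the visionOS ARKit types as we understand them; the "AR Resources" group name is a placeholder and we haven't verified this in our app yet:

import ARKit

// Sketch: remember the tracked image's pose, then restore it in the next
// immersive space as a WorldAnchor so both scenes share the same placement.
final class AnchorStore {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()
    let imageTracking = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "AR Resources"))

    var savedTransform: simd_float4x4?

    func trackImage() async throws {
        try await session.run([worldTracking, imageTracking])
        for await update in imageTracking.anchorUpdates {
            // Keep the latest pose of the physical object.
            savedTransform = update.anchor.originFromAnchorTransform
        }
    }

    // Called after transitioning to the VR immersive space.
    func restoreAnchor() async throws -> WorldAnchor? {
        guard let transform = savedTransform else { return nil }
        let anchor = WorldAnchor(originFromAnchorTransform: transform)
        try await worldTracking.addAnchor(anchor)
        return anchor
    }
}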
Hello - Asking a question that's probably been asked, and maybe a couple more.
I bought a membership yesterday and received the confirmation email, so I'm assuming it's "approved", but when I log in to my developer account it says "PENDING".
I will be the only user on my account. I already have the app; it's been tested by another party, and all I need to do is make some minor changes. So that's done. Now I need to create a "debug or release testing distribution", but when I do that I get this error in Xcode. Is this normal... until my status changes from "PENDING" to (guessing) APPROVED?
And one further question. I am the only developer, and this app will only be used by one person, as it's specialized for a single purpose. What are the options for that person to install the app? Is there a special section on the App Store for such an app?
Thanks,
Gary
I can't shake the "I don't think I did this correctly" feeling about a change I'm making for Image Playground support.
When you create an image via an Image Playground sheet, it returns a URL pointing to where the image is temporarily stored. Just like the Image Playground app, I want the user to be able to decide to edit that image more.
The Image Playground sheet lets you pass in a source URL for an image to start with, which is perfect because I could pass in the URL of that temp image.
But the URL is NOT optional. So what do I populate it with when the user is starting from scratch?
A friendly AI told me to use URL(string: "")!, but that crashes when force-unwrapped, since URL(string: "") returns nil.
URL(string: "about:blank")! seems to work in that it is ignored (and doesn't crash) when I have the user create the initial image (that shouldn't have a source image).
This feels super clunky to me. Am I overlooking something?
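One pattern I'm experimenting with instead of a placeholder URL: wrap the sheet in a helper that only passes the source URL when one exists. This assumes the sheet modifier has an overload without any source parameter alongside the sourceImageURL one, which may not match the final API:

import SwiftUI
import ImagePlayground

// Sketch: branch on whether there is an image to start from, so the
// no-source case never needs a fake URL like "about:blank".
extension View {
    @ViewBuilder
    func playgroundSheet(isPresented: Binding<Bool>,
                         sourceImageURL: URL?,
                         onCompletion: @escaping (URL) -> Void) -> some View {
        if let url = sourceImageURL {
            imagePlaygroundSheet(isPresented: isPresented,
                                 sourceImageURL: url,
                                 onCompletion: onCompletion)
        } else {
            imagePlaygroundSheet(isPresented: isPresented,
                                 onCompletion: onCompletion)
        }
    }
}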
Hi all,
For context, the Family Controls entitlement request (for the Personal Device Management category/individual use case) includes the question:
Will your app share device or usage data beyond the individual for the individual use case, or Family Sharing for the parent/guardian use case, including through means such as screenshots, screen recordings, or server logging?
I'm looking for clarification on how to interpret this. I originally answered Yes and was rejected, then later answered No and was accepted.
Ideally, I would like my screen time management app to allow users to opt-in to social features. One simple example is opting into a leaderboard with your friends for who has the lowest screen time.
If the user installed this app for themselves and chooses to share this basic data with their friends, it sounds like an ethical and unproblematic feature, but I suppose storing that data would fall under "server logging"?
If anyone has any experience with this, I would appreciate a more explicit description of the requirement above. Is what I described allowed?
Thanks for reading!
I'm building a custom camera screen that displays the camera image on a preview layer and then captures an image, using AVCaptureSession. When the picture is captured, I immediately load it into a UIImageView in order to display it to the user for approval.
I've actually done this many times before, but this is the first time I've tried to do it in an app that supports interface rotation. If I hold the phone in Portrait mode and capture a picture, everything works as expected.
When the user rotates the phone into Landscape orientation, I detect this and I replace the preview layer (AVCaptureVideoPreviewLayer) with a new one, specifying connection.videoRotationAngle in order to make the image appear in the right orientation. I'm a little surprised that this is necessary, and it's not a smooth transition, but that doesn't matter.
What does matter is that when I capture the image, it is in the wrong orientation. I tried rotating it myself, but this doesn't seem to make any difference. What am I doing wrong?
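For reference, my understanding of the iOS 17+ approach is to let AVCaptureDevice.RotationCoordinator supply the angle and stamp it onto the photo output's connection just before capturing. This sketch reflects the AVFoundation documentation rather than code I've verified in this app:

import AVFoundation

final class CameraController {
    let session = AVCaptureSession()
    let photoOutput = AVCapturePhotoOutput()
    private var rotationCoordinator: AVCaptureDevice.RotationCoordinator?

    func configureRotation(device: AVCaptureDevice, previewLayer: AVCaptureVideoPreviewLayer) {
        // The coordinator tracks device/interface rotation and exposes
        // KVO-observable angles for both preview and capture.
        rotationCoordinator = AVCaptureDevice.RotationCoordinator(device: device,
                                                                  previewLayer: previewLayer)
    }

    func capturePhoto(delegate: AVCapturePhotoCaptureDelegate) {
        if let coordinator = rotationCoordinator,
           let connection = photoOutput.connection(with: .video) {
            let angle = coordinator.videoRotationAngleForHorizonLevelCapture
            if connection.isVideoRotationAngleSupported(angle) {
                // Stamp the capture connection so the saved photo is upright.
                connection.videoRotationAngle = angle
            }
        }
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: delegate)
    }
}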
I am running Xcode 16.1, macOS 15.1, and iOS 18.1, and I see the error when trying to run the Instruments Network Profile.
Description
As of iOS 18, AVAudioSession.setPreferredIOBufferDuration ignores the requested buffer size when Sound Recognition or Vocal Shortcuts is enabled. This results in 1) much larger buffer sizes and 2) mismatched buffer sizes between input and output buffers, which causes ‘glitchy’ audio and increased latency.
Additionally, when this issue occurs, AVAudioSession.setPreferredIOBufferDuration continues to return 'true' and no error is produced.
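A minimal version of the request-and-verify step (mirroring what the sample project does; in Swift the call throws instead of returning a BOOL, but likewise reports no failure):

import AVFoundation

// Sketch: request a small IO buffer, then read back what was actually granted.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, options: [.allowBluetooth])
try session.setPreferredIOBufferDuration(0.005805)  // ~256 samples @ 48 kHz
try session.setActive(true)
// With Sound Recognition or Vocal Shortcuts enabled, this prints an actual
// duration of ~0.023220 (1104 samples) even though no error was thrown.
print("preferred:", session.preferredIOBufferDuration, "actual:", session.ioBufferDuration)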
Steps to Reproduce:
1. Enable Vocal Shortcuts on a device running iOS 18. Enable at least one shortcut (e.g. Control Center).
2. Open or clone the example project (https://github.com/cwalo/SoundRecognitionBug).
3. Build and install the example project.
4. Attach a headset and launch the application. Observe console logs showing:
   - a requested buffer size of 0.005805 (256 samples @ 48k)
   - an actual buffer size of 0.023220 (1104 samples @ 48k; this is regularly the resulting buffer size in all of our tests)
5. Quit the app and detach the headset. Enable mutesOutput in AudioSystem.mm (to avoid feedback).
6. Launch the application. Observe:
   - the same result as in step 4
   - a mismatched hardware buffer size of 1104 and recorded frame count of 1024
   - mismatched playbackCount and recordCount
7. Quit the app and disable Vocal Shortcuts.
8. Launch the app. Observe IOBufferDuration matching the requested duration and matched buffer sizes (expected behavior).
Expected results:
- Requested IOBufferDuration is respected, or AVAudioSession returns false / produces an error
- Input and output buffer sizes match
Device(s): iPhone 11 Pro, iPad Pro
OS: iOS 18.0.1
Environment: Xcode 16.1
FB: FB15715421
Related to: https://forums.developer.apple.com/forums/thread/765477
For an upcoming update of one of my apps, I’m facing an issue:
The .rate parameter of an AVAudioUnitTimePitch allows me to slow down an audio track without any issues: setting .rate to 0.7 or 0.8 results in almost perfect playback without changing pitch.
However, whenever the .rate parameter is greater than 1 (e.g. 1.1 or 1.15), I start to hear audio artifacts (“fluttering”) in the output, which is not so nice (even at .overlap = 32).
Intuitively, I’d have thought that speeding up the file would produce fewer artifacts than slowing it down?
I’ve tried different sample rates (44.1 kHz and 48 kHz), but same result.
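For context, my setup is essentially this reduced playback graph (the file path is a placeholder):

import AVFoundation

// Sketch: player -> time pitch -> main mixer.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let timePitch = AVAudioUnitTimePitch()
timePitch.rate = 1.15     // values > 1 produce the fluttering artifacts
timePitch.overlap = 32    // maximum overlap; the default is 8

engine.attach(player)
engine.attach(timePitch)
engine.connect(player, to: timePitch, format: nil)
engine.connect(timePitch, to: engine.mainMixerNode, format: nil)

let file = try AVAudioFile(forReading: URL(fileURLWithPath: "/path/to/track.m4a"))
try engine.start()
player.scheduleFile(file, at: nil)
player.play()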
Grateful for any input on this 🙏
Hi, I wanted to test whether it's possible to use the Mesa3D Dozen driver (Vulkan on D3D12) together with D3DMetal 2b3 to get a potentially better Vulkan driver on Wine than the default MoltenVK. This would support Windows Vulkan apps by running them over D3D12Metal.
I'm using vulkan_dzn.dll, dzn_icd.x86_64.json, and dxil.dll from the x64 folder of: https://github.com/pal1000/mesa-dist-win/releases/download/24.3.0-rc1/mesa3d-24.3.0-rc1-release-msvc.7z
Using the simple vulkaninfo app and running:
wine64 vulkaninfo
I get error:
[D3DMetal:LOG:2A825] Unsupported API: CheckFeatureSupport, unhandled support query 53
It also seems the D3DMetal Wine integration in Whisky doesn't expose d3d12core.dll and d3d12.dll like the new Agility D3D12 DLLs or VKD3D do, so I'm getting:
MESA: error: Failed to retrieve D3D12GetInterface
MESA: error: Failed to load DXCore
But it does seem to try to load the driver, as shown by:
WARNING: dzn is not a conformant Vulkan implementation, testing use only.
full log:
MESA: error: Failed to retrieve D3D12GetInterface
MESA: error: Failed to load DXCore
WARNING: dzn is not a conformant Vulkan implementation, testing use only.
[D3DMetal:LOG:2A825] Unsupported API: CheckFeatureSupport, unhandled support query 53
00bc:fixme:dcomp:DCompositionCreateDevice 0000000000000000, {c37ea93a-e7aa-450d-b16f-9746cb0407f3}, 000000000011E328.
MESA: error: Failed to load DXCore
WARNING: dzn is not a conformant Vulkan implementation, testing use only.
[D3DMetal:LOG:2A825] Unsupported API: CheckFeatureSupport, unhandled support query 53
00bc:fixme:dcomp:DCompositionCreateDevice 0000000000000000, {c37ea93a-e7aa-450d-b16f-9746cb0407f3}, 000000000011E578.
ERROR: [Loader Message] Code 0 : setup_loader_term_phys_devs: Call to 'vkEnumeratePhysicalDevices' in ICD c:\windows\system32\.\vulkan_dzn.dll failed with error code -3
ERROR: [Loader Message] Code 0 : setup_loader_term_phys_devs: Failed to detect any valid GPUs in the current config
ERROR at C:\j\msdk0\build\Khronos-Tools\repo\vulkaninfo\vulkaninfo.h:241:vkEnumeratePhysicalDevices failed with ERROR_INITIALIZATION_FAILED
Hi all,
Our app allows a user to scan a room and then save that scan on a separate view, followed by additional scans. We're looking into allowing room combining via CapturedStructure, so we need rooms to be scanned in the same ARWorldMap without necessarily needing to re-localize in the same session. This should fit within the first scenario that Apple described.
The only way I have found that meets our requirements is to save the RoomCaptureView and re-use it whenever we need to start a session again. This creates a number of other issues, and ideally we wouldn't need to save a View in something like a singleton. We are using captureSession.stop(pauseARSession: false). Additionally, if we use the same RoomCaptureView and an error occurs during the scanning process, we can't get the instructions overlay to appear again when we reuse this view (specifically, the instructions in the middle of the view that state "Move device to start"). It's as if the instructions are completely removed, and scanning is stuck in an error state once an error occurs.
These instructions also seem to be separate from the instructions we can grab from RoomCaptureViewDelegate via didProvide instruction: RoomCaptureSession.Instruction, so we can't use that either. There are a couple of subviews that seem relevant to this, RoomCaptureCoachingOverlayView and ARGlyphView, but neither is public, so we can't force them to appear. We also attempted a number of other things to try to get these subviews to appear, such as layoutIfNeeded().
Saving the ARSession and using it in let roomCaptureView = RoomCaptureView(frame: viewBounds, arSession: arSession), where we're creating a new view with the same ARSession, seems much more ideal, as that solves the above issues. But we run into another issue: world tracking seems to be completely lost when a new RoomCaptureView (and thus a new RoomCaptureSession) is started, even with the same already-running ARSession, almost as if captureSession.stop(pauseARSession: false) doesn't work as described.
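For clarity, this is the shape of that second approach, which loses world tracking for us (a reduced sketch):

import ARKit
import RoomPlan

// Sketch: one ARSession shared across scans, a fresh RoomCaptureView per scan.
final class ScanCoordinator {
    let arSession = ARSession()  // kept alive across all scans
    private var roomCaptureView: RoomCaptureView?

    func startScan(in viewBounds: CGRect) -> RoomCaptureView {
        let view = RoomCaptureView(frame: viewBounds, arSession: arSession)
        view.captureSession.run(configuration: RoomCaptureSession.Configuration())
        roomCaptureView = view
        return view
    }

    func finishScan() {
        // Keep the underlying ARSession running so the next scan can share its map.
        roomCaptureView?.captureSession.stop(pauseARSession: false)
    }
}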
Is there any way around needing to use the same RoomCaptureView or RoomCaptureSession for subsequent scans in the same session without needing to re-localize via ARWorldMap loading? Is there a way to force the guiding instructions to appear?
iPhone 15 Pro, iOS 18.2
Downloaded files cannot be located anywhere in the Files app; they are only reachable via Downloads in Safari. I have tried setting the download folder to various locations (iCloud, the phone, Google Drive), but nothing is stored.
Has an invisible cache or temp folder been introduced? If so, it is a total fail:
When press-holding any file in Safari's downloads list, the normal file action options (Quick Look, share, store to Files, etc.) are not available.
When tapping any file, it opens in one of several apps that have this file type associated with them, and there is no way to change the default app or disable the forced opening of an app.
I tried deleting the app that opens .csv files (in this case OneDrive), and another irrelevant app opened instead. There seems to be a hierarchy of app/file-type associations, and there is no logic to it.
In Chrome, the behaviour is as expected.
Chrome vs. Safari screen recordings: https://shorturl.at/my3Oy
I'm trying to develop an app to control HomeKit accessories in a way not available so far on macOS. How can I work around the absence of HomeKit support for macOS in Xcode 16?
Thanks!
Can someone give me an easy way to fix this code?
What is the recommended best practice for importing a Blender 3D file into RCP? I assume as a .usdz file? Is there a WWDC24 session or other Apple resource that best explains this? I want to make sure I provide the right format/file to RCP from Blender.
WWDC21 had a cool demo project with fish, with a watery, misty look (Dive into RealityKit). It used post-processing in RealityKit, but the ARView class isn't available in visionOS. Can CompositorLayer be used instead for post-processing in full immersion?
Hello,
We are developing Apple Pay with in-app provisioning; we are a payment agent for a bank in France. We have already configured all the entitlements, and we think we have correctly configured things with our PNO.
It's a React Native app, and we are able to follow the flow that opens the native dialog, shows the card, and retrieves the Terms & Conditions, but when we tap Accept we get the error: "Could not add card".
we're experiencing exactly the same problem described in this other thread: https://forums.developer.apple.com/forums/thread/765832
In the logs we see the response:
{
errorCode = 40456;
statusCode = 500;
}
Our PNO swears they have configured their system correctly. They have only three fields, which they have configured as follows:
associated application identifier: {team_id}.{bundle_id}
associated store identifier: {adam_id}
application launch url: {app_name}://
Are these correct?
Attaching the details of a request and response
request.txt
response.txt
We reached out to the Apple Pay team who said we should talk to Technical support.
Can somebody help us, please? We are stuck at the moment and we really need to make this work. We can provide all the details of the syslog output if needed.
many thanks in advance!
Thomas
I have a Safari extension which allows the user to load their own homepage upon opening a new tab. The extension works by retrieving the homepage URL from UserDefaults and then redirecting to it using window.location.replace. In iOS 18, if the homepage cannot be loaded because, for example, the user has no internet connection, Safari goes into a rapid loading loop, which eventually stops after a while. This is unlike iOS 17, where reproducing the same scenario ends with a Safari error page, which should be the expected behaviour.
In short, instead of Safari going into an infinite loading loop, it should display a Safari error page like iOS 17 does.
As this issue only happens on iOS 18, I am almost certain it's an iOS bug, and I would appreciate it if it could be fixed as soon as possible.
I have created a Feedback Assistant report with ID FB15853821, which contains a sysdiagnose file from my iPhone 16 Pro, as well as two videos, one from my iPhone 16 Pro with iOS 18.2 beta 3 and the other video showing a comparison between iOS 18 and 17. Both videos first show the extension functioning correctly with an active internet connection, but when I disable my internet, the iOS 18 Safari goes into an infinite loop.
Here are the steps to reproduce the issue:
1. Download the Homepage for Safari app from the App Store: https://apps.apple.com/gb/app/homepage-for-safari/id6481118559
2. Enter any valid homepage URL, such as https://apple.com, and tap Save
3. Go to Settings -> Apps -> Safari -> Extensions -> Homepage and enable the extension
4. Make sure Open New Tabs is set to “Homepage”
5. Turn off both WiFi and cellular data and attempt to open a new tab in Safari
Please note that this also happens with iOS 18.2 beta 4.
If I have a Catalyst app with a WKWebView and I select text, I can drag forward to extend the selection, but I can't reduce the length of the selected range by dragging backwards. I've reproduced this in a trivial sample app.
Is there some property I need to set somewhere?
I've filed this as FB15645411.