Search results for "Popping Sound"

19,350 results found


Reply to restore root file with tmutil
So, there are actually a few different things at work here. The first of the errors here:

/usr/local/mnt: No such file or directory

...means exactly what it sounds like. You specified the mount target (the point where the new file system would attach in your file system) as /usr/local/mnt/, and that directory either does not exist or you don't have permission to access it. That's what I'd expect, as that's not a standard macOS directory, so it won't exist unless you create it. The solution is to create the directory, change the permissions, or pick a new directory. I'm not familiar with how Time Machine on a NAS works, but I believe we still create disk images, which are then mounted and treated as local backup target volumes. That means there are actually two permission systems at work here:

The permissions of the SMB volume the NAS device shares.
The permissions of the Time Machine disk image.

That difference is important, because it leads to the difference in behavior between this case: This was on a USB drive mounted with
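For the "create the directory" option, a minimal Foundation sketch looks like the following; it targets a temporary path, since creating /usr/local/mnt itself would need elevated privileges:

```swift
import Foundation

// Create a missing mount-point directory, including intermediate
// directories, and set permissions so the mounting user can traverse it.
// A temporary path stands in for /usr/local/mnt, which would require
// elevated privileges to create.
let mountPoint = NSTemporaryDirectory() + "demo/mnt"
try FileManager.default.createDirectory(
    atPath: mountPoint,
    withIntermediateDirectories: true,
    attributes: [.posixPermissions: 0o755]  // rwxr-xr-x
)

var isDir: ObjCBool = false
let exists = FileManager.default.fileExists(atPath: mountPoint, isDirectory: &isDir)
print(exists && isDir.boolValue)  // true
```

The same call with a real mount point would typically be run from a privileged helper or preceded by adjusting ownership with chown, depending on which of the two permission systems above is in the way.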
Topic: App & System Services · SubTopic: Core OS
Jul ’25
Reply to Seeking Confirmation on Picture-in-Picture Support for Audio Calls
Thank you for your response, Kevin. From what I can see, the document appears to focus only on video calls, while my use case involves audio-only communication. Could you kindly confirm whether it's acceptable to display a Picture-in-Picture view for an audio-only call, even if the document primarily focuses on video calls? What are you actually trying to do? While the documentation primarily describes camera-to-camera streaming and the idea of two people looking at each other through their cameras, nothing in the API actually requires that. As far as the API is concerned, it has no idea what content it's actually showing and will happily display whatever you tell it to. Similarly, many VoIP apps allow their users to send content other than the image of the person currently talking (screen captures, shared documents, whiteboards, etc.). I'm not aware of us ever having had any issue with that sort of thing, particularly when the user has direct control of what's happening and the value to the
Jul ’25
Received the "Profile for Dolby Vision MUST be Profile 5 or 10" error for the channel that has only dolby vision profile 5
While validating a Dolby Vision Profile 5 playlist in CMAF format (with segments in MP4), the Media Stream Validator reported the following error in the MUST-FIX-ISSUES list: However, the playlist correctly specifies Dolby Vision Profile 5 in both the EXT-X-STREAM-INF and EXT-X-I-FRAME-STREAM-INF tags.

Playlist:

#EXTM3U
#EXT-X-VERSION:8
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID=audio-ec3,LANGUAGE=und,NAME=Undetermined,AUTOSELECT=YES,CHANNELS=6,URI=var14711339/aud1257/playlist.m3u8?device_profile=hls&seg_size=6&cmaf=1
#EXT-X-STREAM-INF:BANDWIDTH=14680000,AVERAGE-BANDWIDTH=14676380,VIDEO-RANGE=PQ,CODECS=ec-3,dvh1.05.06,RESOLUTION=3840x2160,AUDIO=audio-ec3
var14711339/vid/playlist.m3u8?device_profile=hls&seg_size=6&cmaf=1
#EXT-X-I-FRAME-STREAM-INF:BANDWIDTH=1419881,URI=trk14711339/playlist.m3u8?device_profile=hls&cmaf=1,VIDEO-RANGE=PQ,CODECS=dvh1.05.06,RESOLUTION=3840x2160

Could you please review this and clarify: Why is the Media Stream Valida
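One thing worth checking in the playlist above: per the HLS specification (RFC 8216), an attribute value that contains a comma must be written as a quoted-string, so an unquoted CODECS=ec-3,dvh1.05.06 can be parsed as CODECS=ec-3 with the Dolby Vision codec string dropped, which could plausibly produce this profile error. A rough sketch of that parsing rule follows; the helper name and profile extraction are my own illustration, not mediastreamvalidator's logic:

```swift
import Foundation

// Hypothetical helper: extract the quoted CODECS attribute from an
// EXT-X-STREAM-INF line and return the Dolby Vision profile component,
// if a dvh1/dvhe codec string is present. Returns nil when CODECS is
// missing or unquoted, mirroring how a strict parser may lose the value.
func dolbyVisionProfile(inStreamInf line: String) -> String? {
    guard let start = line.range(of: "CODECS=\"") else { return nil }
    let rest = line[start.upperBound...]
    guard let endQuote = rest.firstIndex(of: "\"") else { return nil }
    let codecs = rest[..<endQuote].split(separator: ",")
    for codec in codecs where codec.hasPrefix("dvh1.") || codec.hasPrefix("dvhe.") {
        // "dvh1.05.06" -> the middle component "05" is the profile
        let parts = codec.split(separator: ".")
        if parts.count >= 2 { return String(parts[1]) }
    }
    return nil
}

let quoted = "#EXT-X-STREAM-INF:BANDWIDTH=14680000,VIDEO-RANGE=PQ," +
             "CODECS=\"ec-3,dvh1.05.06\",RESOLUTION=3840x2160"
let unquoted = "#EXT-X-STREAM-INF:CODECS=ec-3,dvh1.05.06,RESOLUTION=3840x2160"

print(dolbyVisionProfile(inStreamInf: quoted) ?? "none")    // "05"
print(dolbyVisionProfile(inStreamInf: unquoted) ?? "none")  // "none"
```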
Replies: 1 · Boosts: 0 · Views: 77
Jul ’25
Reply to PushKit with CallKit - CallKit not shown when app is in background or terminated
I am developing a VoIP feature using PushKit and CallKit, but the CallKit UI is not shown when the app is in the background or terminated. In the foreground I can call reportNewIncomingCall from pushRegistry(_:didReceiveIncomingPushWith:) and it works as expected, but in the background or terminated state it does not. So, there are a few things that can happen here: If you've just gotten this working, it's pretty common that you failed the call report requirement enough times that the system stopped delivering new pushes. To reset that count, delete the app completely, turn the device off, turn it back on, then reinstall. Note that while the full restart is not specifically required, I recommend doing it any time you need to be SURE things have reset properly. Your app should include the audio background category as well as voip. There are weird entanglements between CallKit and the audio system that make CallKit work even without audio; however, that behavior isn't really intentional
Jul ’25
Is it possible to programmatically set macOS notification preferences for an app in Swift?
Hi, I’m working on a Safari extension for macOS, and I’d like the app to use specific system notification settings right after installation. I’m wondering if there’s a way in Swift to programmatically configure the default notification preferences (as seen in System Settings > Notifications > [my app]). Here are the desired settings:

Only Desktop – without “Notification Center” or “Lock Screen”
Alert Style: Temporary
Badge App Icon: Enabled
Play Sound for Notifications: Disabled
Show Previews: When Unlocked
Notification Grouping: Off (I don’t want them to accumulate in Notification Center)

Here is the code I’m currently using to display a basic notification:

private func handleNotificationRequest(_ message: [String: Any]) {
    guard let title = message["title"] as? String,
          let body = message["body"] as? String else { return }
    UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .badge, .sound]) { granted, error in
        if granted {
            self.showNotification(title: title, body: bod
Replies: 1 · Boosts: 0 · Views: 404
Jul ’25
Reply to SMAppService daemon not running
It sounds like your launchd job is failing to start. There are two common reasons for that:

launchd tries to start it and it crashes.
launchd is unable to start it.

A good place to… ahem… start is to look for a crash report. In the first case, and many times in the second case as well, a failure will generate a crash report. If that doesn’t turn up anything, add a ‘first light’ log point to your job [1]. Then try to communicate with the named XPC endpoint. That’ll divide the problem in half:

If you see a log entry, you know that your job started and you can debug your code.
If not, you know that your code never got running and you can debug your configuration.

Share and Enjoy — Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"

[1] I explain this ‘first light’ concept in Debugging a Network Extension Provider.
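The ‘first light’ idea can be sketched in a few lines. This is a Foundation-only stand-in (a real daemon would more likely use os_log/Logger, and the path here is illustrative): the very first thing the job does is append a timestamped marker to a known file, so the file's absence proves the process never launched.

```swift
import Foundation

// Minimal 'first light' marker: append one timestamped line to a known
// file as the very first act of the process. If the line never appears,
// the job never launched and the problem is configuration, not code.
func recordFirstLight(to path: String) throws {
    let stamp = ISO8601DateFormatter().string(from: Date())
    let line = "\(stamp) first light: pid \(ProcessInfo.processInfo.processIdentifier)\n"
    if let handle = FileHandle(forWritingAtPath: path) {
        defer { try? handle.close() }
        _ = try handle.seekToEnd()          // append to an existing log
        try handle.write(contentsOf: Data(line.utf8))
    } else {
        try line.write(toFile: path, atomically: true, encoding: .utf8)
    }
}

let marker = NSTemporaryDirectory() + "first-light.log"
try recordFirstLight(to: marker)
print(FileManager.default.fileExists(atPath: marker))  // true
```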
Jul ’25
Logic Pro cannot load v3 audio unit with framework compiled with Swift 6
Sequoia 15.4.1 (24E263), Xcode 16.3 (16E140), Logic Pro 11.2.1. I’ve been developing a complex Audio Unit for macOS that works perfectly well in its own bespoke host app and is now well into its beta testing stage. It did take some effort to get it to work well in Logic Pro, however, and all was fine and working well until now. The AU part is an empty app extension with a framework containing its code. The framework contains Swift code for the UI and C code for the DSP parts. When the framework is compiled using the Swift 5 compiler, the AU runs in Logic with no problems. (I should also mention that the AU passes the strictest auval tests.) But… when the framework is compiled with Swift 6, Logic Pro cannot load it. Logic displays a message saying the audio unit could not be loaded and to contact the developer. My own host app loads the AU perfectly well with the Swift 6 version, so I know there’s nothing wrong with the audio unit. I cannot find any differences in any of the built output f
Replies: 1 · Boosts: 0 · Views: 308
Jul ’25
A Summary of the WWDC25 Group Lab - Apple Intelligence
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Apple Intelligence.

Can I integrate writing tools in my own text editor?
UITextView, NSTextView, and SwiftUI TextEditor automatically get Writing Tools on devices that support Apple Intelligence. For custom text editors, check out Enhancing your custom text engine with Writing Tools.

Given that Foundation Models are on-device, how will Apple update the models over time? And how should we test our app against the model updates?
Model updates are in sync with OS updates. As for testing with updated models, watch our WWDC session about prompt engineering and safety, and read the Human Interface Guidelines to understand best practices in prompting the on-device model. What
Replies: 2 · Boosts: 0 · Views: 239
Jul ’25
On Log Noise - Debugging
I came across several errors being reported when I run my app; however, my app seems to function correctly. I believe they fall into the category listed on this (now locked) thread: https://developer.apple.com/forums/thread/115461 However, I wanted to post the ones I found to clarify (close to submission), just in case any of these end up being more than just log noise later. PLEASE let me know if you've come across these before and whether they impacted anything, or if you can confirm they are just log noise. Thanks in advance!

-[RTIInputSystemClient remoteTextInputSessionWithID:performInputOperation:] perform input operation requires a valid sessionID. inputModality = Keyboard, inputOperation = , customInfoType = UIEmojiSearchOperations

AVAudioSession_iOS.mm:2223 Server returned an error from destroySession:. Error Domain=NSCocoaErrorDomain Code=4099 “The connection to service with pid 102 named com.apple.audio.AudioSession was invalidated from this process.” UserInfo={NSDebugDescription=The connection to ser
Replies: 10 · Boosts: 0 · Views: 2.9k
Jun ’25
Issue using Siphon Tap on input AudioQueue
Hi all, I've developed an audio DSP application in C++ using AudioToolbox and CoreAudio on macOS 14.4.1 with Xcode 15. I use an AudioQueue for input and another for output. This works great. I'm now adding real-time audio analysis, e.g. spectral analysis. I want this to run independently of my audio processing so it cannot interfere with audio playback. Taps on AudioQueues seem to be a good way of doing this. Since the analytics won't modify the audio data, I am using a Siphon Tap by setting the AudioQueueProcessingTapFlags to kAudioQueueProcessingTap_PreEffects | kAudioQueueProcessingTap_Siphon. This works fine on my output queue. However, on my input queue the Tap callback is called once and then an EXC_BAD_ACCESS occurs (screenshot below). NB: I believe that a callback should only call AudioQueueProcessingTapGetSourceAudio when not using a Siphon, so I don't call it. Relevant code: AudioQueueProcessingTapCallback tap_callback) { // Makes an audio tap fo
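The flag combination above is a plain bitwise OR of option bits. As a standalone sketch (the raw values below are reproduced from my reading of the AudioToolbox headers; treat them as an assumption, since the real framework type should always be used in app code):

```swift
import Foundation

// Stand-in for AudioQueueProcessingTapFlags so the combination can be
// shown without the framework. Raw values are an assumption based on
// the AudioToolbox headers.
struct TapFlags: OptionSet {
    let rawValue: UInt32
    static let preEffects  = TapFlags(rawValue: 1 << 0)  // kAudioQueueProcessingTap_PreEffects
    static let postEffects = TapFlags(rawValue: 1 << 1)  // kAudioQueueProcessingTap_PostEffects
    static let siphon      = TapFlags(rawValue: 1 << 2)  // kAudioQueueProcessingTap_Siphon
}

// The configuration from the post: observe audio before effects,
// without modifying the stream.
let flags: TapFlags = [.preEffects, .siphon]
print(flags.rawValue)  // 5
```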
Replies: 1 · Boosts: 0 · Views: 71
Jun ’25