Search results for "Popping Sound"
19,345 results found


Push notification-driven Live Activity decoding failure
My "start" Live Activity push keeps failing with a decoding error even though the payload matches my content state, so the Live Activity never starts. Here's my curl: curl --header 'apns-topic: MuscleMemory.KimchiLabs.com.push-type.liveactivity' --header 'apns-push-type: liveactivity' --header 'apns-priority: 10' --header 'authorization: bearer eyJhbGciOiJFUzI1NiIsImtpZCI6IjI4MjVTNjNEV0IifQ.eyJpc3MiOiJMOTZYUlBCSzQ2IiwiaWF0IjoxNzU3NDYwMzQ2fQ.5TGvDRk5ZYLsvncjKwXIZYN78X88v5lCwX4fRvfl1QXjwv8tOtO2uoId27LQahXA3zqjruu_2YoOfqEtrppKXQ' --data '{"aps": {"timestamp": '$(date +%s)', "event": "start", "content-state": {"plain_text": "hello world", "userContentPage": ["hello world"]}, "alert": {"sound": "chime.aiff"}}, "attributes-type": "KimchiKit.DynamicRepAttributes", "attributes": {}}' --http2 https://api.sandbox.push.apple.com/3/device/802fe7b4066e26b51ede7188a7077a9603507a0fa6ee8ffda946a864e75aa139602861538d6fb12100afbe9a3338d6c7c799d947dfacb2ee835f0339ecdc3165c9ed7e54839f5a3b89b76a011f5826cc and here is my co
1 reply · 0 boosts · 215 views · 2w
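For reference, a decoding failure like this usually means the content-state keys in the payload don't match the app's ContentState properties. Below is a minimal sketch of a ContentState shaped to decode the payload above; since the real KimchiKit.DynamicRepAttributes isn't shown in the excerpt, everything except the JSON key names is an assumption.

import ActivityKit

// Hypothetical attributes type, shaped to match the posted payload.
struct DynamicRepAttributes: ActivityAttributes {
    struct ContentState: Codable, Hashable {
        // Property names must match the "content-state" JSON keys exactly,
        // including the underscore.
        let plain_text: String
        let userContentPage: [String]
    }
}

If the app's actual ContentState declares different names or types (for example, userContentPage as a single String instead of an array), ActivityKit fails to decode the push and the activity never starts, which matches the symptom described.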
Reply to How do I use IOUserSCSIPeripheralDeviceType00?
First off, here: "Here's my current personality dictionary. With this dictionary, the driver crashes, presumably in its init." One thing I strongly recommend during early bring-up is that your driver should do as LITTLE as possible. Basically, you can log data... and nothing else. I might not even call super. The problem here is that until you've got a foundation that works in the most BASIC sense (doesn't crash), you'll end up wasting time looking at what you THINK is the problem without actually knowing what's wrong. In any case, this is wrong: IOClass: IOSCSIPeripheralDeviceType00. It should be: IOClass: IOUserSCSIPeripheralDeviceType00. IOSCSIPeripheralDeviceType00 is the base class driver for mass storage devices. What you want is IOUserSCSIPeripheralDeviceType00, which is the DEXT support driver for SCSIPeripheralDriverKit. In more concrete terms, in the kernel IOUserSCSIPeripheralDeviceType00 is a subclass of IOSCSIPeripheralDeviceType00 which includes a bunch of additional callout hooks which call out to y
2w
MPNowPlayingInfoCenter playbackState fails to update after losing audio focus on macOS
My environment: Mac (Apple Silicon, arm64), macOS 15.6.1. Description: I'm developing a music app and have encountered an issue where I cannot update the playbackState in MPNowPlayingInfoCenter after my app loses audio focus to another app. Even though my app correctly sets MPNowPlayingInfoCenter.default().playbackState = .paused, the system's Now Playing UI (Control Center, Lock Screen, AirPods controls) does not reflect the change. The UI remains stuck until the app that currently holds audio focus also changes its playback state. I've observed the same behavior in other third-party music apps from the App Store, which suggests it might be a system-level issue. Steps to reproduce, using the two most popular music apps in the Chinese App Store, NetEase Cloud Music and QQ Music (call them App A and App B): Start playback in App A. Start playback in App B. (App B now has audio focus, and App A is still playing.) Attempt to pause App A via the system's Control Center or
0 replies · 0 boosts · 107 views · 2w
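For context, here is a minimal Swift sketch of the two pieces of state the post is updating; the helper function and title parameter are illustrative, not from the post.

import MediaPlayer

// Illustrative helper: publish nowPlayingInfo and playbackState together.
// playbackState is the macOS-specific property the post is about.
func publishNowPlaying(title: String, paused: Bool) {
    let center = MPNowPlayingInfoCenter.default()
    center.nowPlayingInfo = [
        MPMediaItemPropertyTitle: title,
        MPNowPlayingInfoPropertyPlaybackRate: paused ? 0.0 : 1.0
    ]
    center.playbackState = paused ? .paused : .playing
}

Setting MPNowPlayingInfoPropertyPlaybackRate alongside playbackState keeps the two signals consistent, which seems worth ruling out before concluding the arbitration happens at the system level, as the post suspects.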
Reply to didRegisterForRemoteNotificationsWithDeviceToken() not called if requestAuthorization() is not called
This can be reproduced from scratch in just 2 minutes: create a brand new iOS app and give it the push notification capability. Change the app delegate template code to this:
@main class AppDelegate: UIResponder, UIApplicationDelegate, UNUserNotificationCenterDelegate {
    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        UNUserNotificationCenter.current().delegate = self
        // UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .badge, .sound]) { granted, error in
        //     NSLog("Notification permission granted: \(granted)")
        //     DispatchQueue.main.async {
                   application.registerForRemoteNotifications()
        //     }
        // }
        NSLog("didFinishLaunchingWithOptions returning")
        return true
    }
    // Called when an APNs token has been successfully obtained
    func application(_ application: UIApplication, didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {
        let token = deviceToken.map { String(format: "%02.2hhx", $0)
2w
didRegisterForRemoteNotificationsWithDeviceToken() not called if requestAuthorization() is not called
If I run the following code in didFinishLaunchingWithOptions():
UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .badge, .sound]) { granted, error in
    if granted {
        DispatchQueue.main.async {
            application.registerForRemoteNotifications()
        }
    }
}
then didRegisterForRemoteNotificationsWithDeviceToken() gets called. However, if I change the code to just:
DispatchQueue.main.async {
    application.registerForRemoteNotifications()
}
or, as it's already running on main in this scenario, simply:
application.registerForRemoteNotifications()
then didRegisterForRemoteNotificationsWithDeviceToken() does NOT get called, but neither does didFailToRegisterForRemoteNotificationsWithError(). Obtaining a push token is supposed to be independent of the user granting notification permissions, so why am I not observing that behavior? I only observe this behavior when running on hardware; on the simulator both forms of the code work. Yet it's nothing to do with my
5 replies · 0 boosts · 180 views · 2w
Reply to How do I flatten a PDF using Shortcuts or Automator?
Hello, thank you for your response. I have a PDF with form field values. The PDF is essentially always the same, but the field values differ. Here is an image from the PDF. "Date:", for example, is always the same; to the right of it, the current date is placed as a form field value. As I hover my mouse over the date, a blue rectangle appears behind it, indicating it's a field value and not baked into the PDF itself. I don't know whether this is considered an annotation. After I have opened the PDF in Preview (a contract in this example), I attempt to annotate it by signing it: overlaying the signature block of the contract with a PNG file of my digital signature. When I attempt to annotate it, Preview says the PDF is password protected (I assume it's locked because of my ability to edit the actual date of 9/7/25 and other field values, which is fine; I don't want to edit existing values). But because the PDF is locked, I can't, for example, create an automation which annotates th
2w
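One scriptable route for the flattening step (before Shortcuts or Automator get involved) is to re-render each page into a new PDF with PDFKit, which bakes the drawn appearance of form fields into plain page content. Below is a minimal sketch with hypothetical file paths; whether PDFKit will render a permissions-locked document this way, and whether the output satisfies the signing workflow, would need testing against the actual PDF.

import Foundation
import CoreGraphics
import PDFKit

// Re-render every page of a PDF into a fresh PDF context. Drawing a page
// renders its form-field appearance as ordinary content, so the copy has
// no live (or locked) form fields.
let inputURL = URL(fileURLWithPath: "/path/to/contract.pdf")   // hypothetical path
let outputURL = URL(fileURLWithPath: "/path/to/flattened.pdf") // hypothetical path

guard let document = PDFDocument(url: inputURL),
      let context = CGContext(outputURL as CFURL, mediaBox: nil, nil) else {
    fatalError("Could not open the input or create the output PDF")
}

for index in 0..<document.pageCount {
    guard let page = document.page(at: index) else { continue }
    var box = page.bounds(for: .mediaBox)
    context.beginPage(mediaBox: &box)
    page.draw(with: .mediaBox, to: context)
    context.endPage()
}
context.closePDF()

Saved as flatten.swift, this could be run from a Shortcuts "Run Shell Script" action with swift flatten.swift, which would make it usable as the automation step described above.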
Reply to AlarmKit custom sound plays once
On the latest Xcode beta 7 and iOS beta 9, I cannot get the custom alarm sound to work no matter what I try. I have tried in the simulator and on a physical device. The my-sound resource is a local file in the project, and I've confirmed that Bundle.main can locate it. What am I doing wrong?
import Foundation
import AlarmKit
import SwiftUI

struct DefaultAlarmMetadata: AlarmMetadata {
    let label: String
}

struct DoesNotWork: View {
    public func buildAlarmConfiguration() async throws -> AlarmManager.AlarmConfiguration {
        let stopButton = AlarmButton(text: "Stop", textColor: .red, systemImageName: "stop.circle")
        let alertContent = AlarmPresentation.Alert(
            title: "Mocked Alarm",
            stopButton: stopButton
        )
        let presentation = AlarmPresentation(alert: alertContent)
        let metadata = DefaultAlarmMetadata(
            label: "My Mocked Label"
        )
        if let soundPath = Bundle.main.path(forResource: "my-sound", ofType: "caf") {
            print("Sound file found at: \(soundPath)")
        } else {
            print("Sound file not found
2w
Reply to Bones/joints data issue - USD file export from Blender to RCP
Hello @Crixoule! "Hope this will be in visionOS 26." Yes! In visionOS 26 we introduced attach(_ to:), which is the new, recommended way to attach an Entity to a GeometricPin on another Entity. So in your case, it sounds like you'd be calling attach on the entity you want to attach to Mesh01, and then passing in the Mesh01 entity as a parameter. "I then need to manually add all the animations to the first file so I can build the library. Would be cool if the USD file could read the animation strips (NLA) and combine all animations into separate slots for RCP." I strongly recommend sending feedback about RealityKit's animation API, including details of your use case, using Feedback Assistant. I agree with you that the process of sequencing all your animations in Blender's NLA and then splitting them up in RCP with the AnimationLibraryComponent can be awkward, and your feedback and use case will help us improve this experience in the future.
2w
Reply to [iOS 26][SDK 26] Application never received providerDidBegin(_:) delegate
"As you suggested, we have now initialised PushKit on the main queue. We are currently monitoring the issue since it is not consistently reproducible." Note that my concern here isn't really about using threads, it's about creating unnecessary race conditions. In concrete terms, it's common for a PushKit delegate to assume its CXProvider target exists (so it can call reportNewIncomingCallWithUUID), but if initialization is happening on a different thread, the PushKit initialization could get ahead, so the PushKit delegate fires before you've finished initializing CallKit. You can eliminate any possibility of that by following some simple guidelines: (1) Use the same thread for CallKit and PushKit; I normally recommend the main thread, but that doesn't actually matter. (2) Set up CallKit and PushKit on the same thread you'll use for the delegate, and initialize all of them at the same time; this ensures that none of your delegates can fire until everything has been created. (3) Set up CallKit first, then
Topic: App & System Services SubTopic: General Tags:
2w
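A minimal Swift sketch of that ordering advice, assuming a single coordinator object that owns both frameworks; the class and its names are illustrative, not from the thread.

import CallKit
import PushKit

final class CallCoordinator: NSObject, CXProviderDelegate, PKPushRegistryDelegate {
    private let provider: CXProvider
    private var pushRegistry: PKPushRegistry!

    override init() {
        // 1. CallKit first: the provider exists before PushKit is even created.
        provider = CXProvider(configuration: CXProviderConfiguration())
        super.init()
        provider.setDelegate(self, queue: nil) // nil means the main queue

        // 2. PushKit second, on the same (main) queue as the CallKit delegate,
        //    so no PushKit callback can run before CallKit setup finishes.
        pushRegistry = PKPushRegistry(queue: .main)
        pushRegistry.delegate = self
        pushRegistry.desiredPushTypes = [.voIP]
    }

    func pushRegistry(_ registry: PKPushRegistry, didReceiveIncomingPushWith payload: PKPushPayload,
                      for type: PKPushType, completion: @escaping () -> Void) {
        // Safe: the provider was fully initialized before the registry existed.
        provider.reportNewIncomingCall(with: UUID(), update: CXCallUpdate()) { _ in completion() }
    }

    func pushRegistry(_ registry: PKPushRegistry, didUpdate pushCredentials: PKPushCredentials, for type: PKPushType) {
        // Forward pushCredentials.token to the server here.
    }

    func providerDidReset(_ provider: CXProvider) {}
}

Creating one such coordinator early in application(_:didFinishLaunchingWithOptions:), on the main thread, matches the "initialize all of them at the same time" guideline above.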
Reply to How to delete PREFIX_NEW(team ID).com.mydom.myapp or TestFlight builds using it?
So, first, terminology:
Your Team ID is a 10-character code that identifies your team. For example, SKMME9E2Y8 is my individual Team ID.
A bundle ID uniquely identifies your app, typically using reverse DNS notation. For example, com.example.test798928.
An App ID is a bundle ID combined with an App ID prefix. For example, SKMME9E2Y8.com.example.test798928.
An App ID prefix is either your Team ID or a unique App ID prefix.
A unique App ID prefix is a 10-character code that's allocated to your team, different from your Team ID. For example, one of my teams is allocated the App ID prefix VYRRC68ZE6.
App ID prefixes are effectively deprecated. If you previously used a unique App ID prefix for your app, you should be able to continue to use that same App ID prefix. There is no requirement to migrate to using your Team ID [1]. Which brings us to this: [quote='799483021, ming2, /thread/799483, /profile/ming2'] But I cannot update it anymore because its AppID that appears in my account … is PREFIX_NEW(team ID).com.
2w
Reply to How to remove an app
You say half the features no longer work, which means the app still has some features that do work. You might have customers using those features, and if they don't require any services from your company and its servers, they can be left alone as they are. Or, you could update the app and slim it down so it contains only the features that still work. Yes, I see your app developer has left, but you could easily hire a short-term contractor to update the app to do this. You would also be able to pop up a splash screen explaining the changes. Most people have automatic app updates enabled, so most of your users will be picked up by this. This route would be a cleaner way to handle things, as the app wouldn't show features that don't work anymore, and users wouldn't have to skip around them while using the app. I guess it depends on whether you want to be nice to your existing customers, or don't really care. (I don't mean that in a bad way!)
2w
Play Audio and Recognize Speech in Car
Hello, I'm trying to determine the best/recommended AVAudioSession configuration (i.e., category, mode, and options) for the following use case. Essentially, I'd like to switch between periods of playing an audio file and then recognizing speech. The audio file is typically speech, and I don't intend for playback and speech recognition to occur simultaneously. I'd like for the user to still be able to interact with Siri, and I'd like it to work with CarPlay, where navigation prompts can occur. I would assume the category to use is 'playAndRecord', but I'm not sure if it's better to just set that once for the entire lifecycle, or to set 'playback' for audio file playback and then switch to 'playAndRecord' for speech recognition. I'm also not sure of the best 'mode' and 'options' to set. Any suggestions would be appreciated. Thanks.
0 replies · 0 boosts · 440 views · 2w
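For the record, here is a minimal sketch of the "set it once" approach the post is weighing: configure .playAndRecord a single time, with a mode suited to spoken content. The specific option set is an assumption chosen to illustrate the shape of the call, not a confirmed recommendation for CarPlay.

import AVFoundation

// Configure the session once for both playback and speech recognition.
// .spokenAudio suits speech content; the duck/mix options are intended
// to coexist with Siri and CarPlay navigation prompts.
func configureAudioSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(
        .playAndRecord,
        mode: .spokenAudio,
        options: [.duckOthers, .allowBluetoothA2DP, .interruptSpokenAudioAndMixWithOthers]
    )
    try session.setActive(true)
}

The alternative described in the post, switching between 'playback' and 'playAndRecord' per phase, means extra setCategory calls; each switch can cause an audible route change on some hardware, which is the usual argument for configuring .playAndRecord once for the whole lifecycle.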