Sound and Haptics


Create meaningful and delightful experiences that engage a wider range of senses with sound and haptic design.

Posts under the Sound and Haptics tag

25 Posts
Post not yet marked as solved
1 Reply
563 Views
Hi everyone. This is my first post on these forums, because this is the first time I haven't found any clues on the internet regarding a question. I need to play sound (text-to-speech generated by AVSpeechSynthesizer) through the built-in speaker on Apple Watch Series 4+ (watchOS 6.2.6+). I was able to make it work on Series 3. However, on Series 4 it doesn't play through the speaker: it plays through Bluetooth AirPods if they are connected, and when they are not connected there is no sound at all. Is it even possible to do? Any advice would be highly appreciated.
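For reference, the session setup I've been experimenting with looks roughly like this (a minimal sketch, assuming the asynchronous watchOS activate(options:completionHandler:) call is the right way to get built-in speaker playback; speakOnWatchSpeaker is just my own helper name):

```swift
import AVFoundation

let synthesizer = AVSpeechSynthesizer()

func speakOnWatchSpeaker(_ text: String) {
    let session = AVAudioSession.sharedInstance()
    do {
        // .playback with the plain .default policy, rather than the
        // long-form policy that prefers Bluetooth routes.
        try session.setCategory(.playback, mode: .spokenAudio,
                                policy: .default, options: [])
    } catch {
        print("Session setup failed: \(error)")
        return
    }
    // On watchOS the session must be activated asynchronously.
    session.activate(options: []) { success, error in
        guard success else { return }
        synthesizer.speak(AVSpeechUtterance(string: text))
    }
}
```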
Post marked as solved
1 Reply
691 Views
My app is using RemoteIO to record audio. It doesn't do any playback. RemoteIO seems to be broadly compatible with the new Sound Recognition feature in iOS 14, but I'm seeing a glitch when Sound Recognition is first enabled. If my app is started and I initialise RemoteIO, and then turn on Sound Recognition (say via Control Centre), the RemoteIO input callback is not called thereafter, until I tear down the audio unit and set it back up. So something like the following:

- Launch app
- RemoteIO is initialised and working, can record
- Turn on Sound Recognition via Settings or the Control Centre widget
- Start recording with the already-set-up RemoteIO
- Recording callback is never again called
- Though no input callbacks are seen, kAudioOutputUnitProperty_IsRunning is reported as true, so the audio unit thinks it is active
- Tear down audio unit
- Set up audio unit again
- Recording works
- Buffer size is changed, reflecting some effect of the Sound Recognition feature on the audio session

I also noticed that when Sound Recognition is enabled, I see several (usually 3) AVAudioSession.routeChangeNotifications in quick succession. When Sound Recognition is disabled while RemoteIO is set up, I don't see this problem. I'm allocating my own buffers, so it's not a problem with their size. What could be going on here? Am I not handling a route change properly? There doesn't seem to be a reliable sequence of events I can catch to know when to reset the audio unit. The only fix I've found is to hack in a timer that checks for callback activity shortly after starting recording, and resets the audio unit if no callback activity is seen. Better than nothing, but not super reliable.
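To illustrate the workaround, here's roughly what my route-change watching looks like (a sketch; hasRecentCallbacks and rebuildAudioUnit stand in for my app's own RemoteIO plumbing):

```swift
import AVFoundation

final class RouteChangeWatcher {
    // Placeholders for the app's own RemoteIO logic: one reports
    // whether input callbacks have fired recently, the other tears
    // down and re-creates the audio unit.
    var hasRecentCallbacks: () -> Bool = { true }
    var rebuildAudioUnit: () -> Void = {}

    init() {
        NotificationCenter.default.addObserver(
            forName: AVAudioSession.routeChangeNotification,
            object: nil, queue: .main) { [weak self] _ in
            guard let self = self else { return }
            // Sound Recognition toggling produces several of these in
            // quick succession; wait briefly, then check liveness.
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
                if !self.hasRecentCallbacks() {
                    self.rebuildAudioUnit()
                }
            }
        }
    }
}
```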
Post not yet marked as solved
1 Reply
818 Views
Is there support for playing custom haptic files to the grips of Sony's DualSense game controller? The video mentions support for the adaptive triggers on the DualSense, but we are not finding a way to reference the haptic actuators in the grips. We'd like to play custom haptic files to them, as well as audio to the speaker inside the DualSense. We've searched the WWDC21 content as well as the documentation but have not found the answer yet. Thank you!
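For context, this is the kind of call we were hoping would work, based on the GCDeviceHaptics locality API (a sketch only; we haven't confirmed that the DualSense exposes its grip actuators through it):

```swift
import GameController
import CoreHaptics

// Try to obtain a Core Haptics engine aimed at the grip actuators.
// .handles should cover both grips; .leftHandle / .rightHandle
// would address them individually, if the controller supports it.
func gripEngine(for controller: GCController) -> CHHapticEngine? {
    guard let haptics = controller.haptics else { return nil }
    guard haptics.supportedLocalities.contains(.handles) else { return nil }
    return haptics.createEngine(withLocality: .handles)
}
```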
Post not yet marked as solved
0 Replies
476 Views
I have experienced some issues with iOS 14.6 on my iPhone 7 Plus. The issues are: the system hangs, which I have sometimes experienced while running apps and unlocking the phone. Apps hang while on screen, and if I lock and unlock the phone again they start working. The sound is also lower than before updating iOS; I mean the loudspeaker is not as clear and loud as before updating to iOS 14.6, and the ringtone volume is not as high as before.
Post not yet marked as solved
2 Replies
563 Views
Hi! We're currently building a web extension with iOS as a primary target. Since haptics on iOS are really nice, we wanted to make use of them to give the experience a special feel on iPhone. However, haptics don't seem to work when triggered from SafariWebExtensionHandler... maybe because the messaging layer between the web extension and the native code runs in the background (does it?), and haptics don't work in apps that are in the background. Anyway, is there any way we can make haptics work regardless?
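For completeness, this is the haptics call we make on the native side once a message arrives from the web extension (a minimal sketch; playExtensionHaptic is our own helper, and it appears to be a no-op when invoked from the extension handler):

```swift
import UIKit

// Plays a medium impact haptic; works in a foreground app, but
// seems to do nothing when the hosting process isn't foreground.
func playExtensionHaptic() {
    let generator = UIImpactFeedbackGenerator(style: .medium)
    generator.prepare()
    generator.impactOccurred()
}
```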
Post not yet marked as solved
1 Reply
401 Views
Hey everyone, I am new to developing apps, but I recently created a basic app that quizzes users on coding questions with four multiple-choice options. The problem I've encountered is that whenever a person selects a wrong answer from those choices, I want the haptic engine on the iPhone to vibrate, i.e. to create feedback. If anyone knows how to do this, please give me guidance and the steps. :)
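A minimal sketch of one common way to do this, using UIKit's UINotificationFeedbackGenerator (offered as a starting point rather than the definitive answer; answerSelected is a made-up handler name):

```swift
import UIKit

// Plays the system "error" haptic for a wrong answer and the
// "success" haptic for a correct one.
func answerSelected(isCorrect: Bool) {
    let generator = UINotificationFeedbackGenerator()
    generator.prepare() // warms up the Taptic Engine to cut latency
    generator.notificationOccurred(isCorrect ? .success : .error)
}
```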
Post not yet marked as solved
0 Replies
319 Views
Will there be more background sounds added? I have a few suggestions, such as: Rain with Thunder, Light & Heavy Rainfall, Restaurant Customers Talking, and Forest Surroundings. I feel these would be great additions to Background Sounds and would improve it well!
Post not yet marked as solved
0 Replies
368 Views
Hi, what is the best approach to customising the sound (notification) and haptic feedback on Apple Watch Series 6 and later? I wonder if I can play a sound/haptic feedback when a remote (possibly silent?) notification is received on the Apple Watch? Thanks
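To make the question concrete, this is roughly what I imagine on the watch side (a sketch; I don't know whether it runs for silent pushes, and NotificationController is a placeholder name):

```swift
import WatchKit
import UserNotifications

// Hypothetical notification controller: plays a built-in haptic
// when the notification interface is shown on the watch.
class NotificationController: WKUserNotificationInterfaceController {
    override func didReceive(_ notification: UNNotification) {
        // WKHapticType offers fixed haptics (.notification, .success,
        // ...); fully custom waveforms aren't available here.
        WKInterfaceDevice.current().play(.notification)
    }
}
```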
Post not yet marked as solved
1 Reply
335 Views
I've got volume in my implementation. Too much, hurricane-force volume. The consistent problem is that the volume blasts when I create an ambient or channel mixer (not a point or volumetric source, e.g. a calm breeze sound). I set the level on the mixer and nothing seems to happen; I'd like to set the volume lower. However, with the spatial mixer, if I set the gain, rolloff, and direct path level on the source node (a point or volumetric source), then the spatial mixer case appears to work and there is no blasting audio. I've been following the WWDC examples (I've watched the video about four times now). It appears I should not use the source node with the ambient and channel mixers? That seems to be an option only when adding the parameter to the spatial mixer. The ambient mixer seems to only want the listener and a quaternion orientation (I normalized to 1). If I set the calibration to relative SPL on the sampler node, that always seems to cause blasting audio. I added the sound assets as dynamic, using WAV format at 32 bits and 44.1 kHz. Also, are there any examples of the meta parameters? Is that how I could dynamically adjust the level? I think there was a passing reference to it in the WWDC video. Any pointers would be appreciated. I wonder if I'm making the right assumptions about how PHASE works. I try to set up as much as possible before I start the engine (especially adding child nodes).
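For reference, a condensed version of my setup is below (a sketch from memory, so names may be slightly off; the mixer gain assignment is the part that seems to have no effect, and "breeze.wav" is a stand-in asset name):

```swift
import PHASE
import AVFoundation
import simd

let engine = PHASEEngine(updateMode: .automatic)

// Register a stereo WAV asset with dynamic normalization.
let url = Bundle.main.url(forResource: "breeze", withExtension: "wav")!
let layout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_Stereo)!
_ = try engine.assetRegistry.registerSoundAsset(
    url: url, identifier: "breeze", assetType: .resident,
    channelLayout: layout, normalizationMode: .dynamic)

// Ambient mixer: takes only a channel layout and an orientation.
let mixer = PHASEAmbientMixerDefinition(
    channelLayout: layout,
    orientation: simd_quatf(angle: 0, axis: simd_float3(0, 1, 0)))
mixer.gain = 0.1 // this is what appears to be ignored

let sampler = PHASESamplerNodeDefinition(
    soundAssetIdentifier: "breeze", mixerDefinition: mixer)
sampler.playbackMode = .looping
// Relative SPL calibration: this is where I get blasting audio.
sampler.setCalibrationMode(calibrationMode: .relativeSpl, level: -20)

_ = try engine.assetRegistry.registerSoundEventAsset(
    rootNode: sampler, identifier: "breezeEvent")
try engine.start()
let event = try PHASESoundEvent(engine: engine, assetIdentifier: "breezeEvent")
event.start()
```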
Post not yet marked as solved
0 Replies
243 Views
We want to use the Apple Pay sound effect in a commercial app my team is developing. It is an investment app, and the sound would play as a notification to the user when a transaction succeeds. I wanted to know whether this is allowed, or whether there are any licensing/legal issues. I appreciate your responses.
Post not yet marked as solved
0 Replies
215 Views
We are using the following code to check for haptics support on the device:

import CoreHaptics

let hapticCapability = CHHapticEngine.capabilitiesForHardware()
let supportsHaptics: Bool = hapticCapability.supportsHaptics

For some reason it returns false on iPhone 7, and not only on my personal device (iOS 13): some of our users reported the haptic settings missing from our app on iPhone 7 (iOS 15) as well. I have just checked with this private API call that the device does in fact support haptics (obviously, it has no physical home button):

extension UIDevice {
    var feedbackSupportLevel: Int? {
        value(forKey: "_feedbackSupportLevel") as? Int
    }
}

let supportsHaptics: Bool = UIDevice.current.feedbackSupportLevel == 2

And it returns true, as expected. The weird thing is that the haptics work perfectly fine; it's just the haptics support status that is incorrect. So my question is: will we be rejected for checking the value of _feedbackSupportLevel on UIDevice, just to make sure we are correctly displaying available system features for our users on all devices?
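For comparison, the fully public fallback I can think of is to go through UIFeedbackGenerator, which does fire on iPhone 7 (a sketch; my understanding is that iPhone 7's Taptic Engine predates Core Haptics support, which may be why capabilitiesForHardware reports false there):

```swift
import UIKit
import CoreHaptics

// Core Haptics capability check: false on iPhone 7, even though its
// Taptic Engine drives UIFeedbackGenerator just fine.
let supportsCoreHaptics = CHHapticEngine.capabilitiesForHardware().supportsHaptics

// Public-API feedback that still works on iPhone 7:
let generator = UIImpactFeedbackGenerator(style: .light)
generator.prepare()
generator.impactOccurred()
```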
Post not yet marked as solved
0 Replies
194 Views
I'm working on an app that uses Core Haptics to play a synchronised pattern of vibrations and audio. The problem is that the audio only gets played through the iPhone's speakers (if the mute switch is not turned on). As soon as I connect my AirPods to the phone, the audio stops playing, but the haptics continue. My code looks something like this:

let engine = CHHapticEngine()
...
var events = [CHHapticEvent]()
...
let volume: Float = 1
let decay: Float = 0.5
let sustained: Float = 0.5
let audioParameters = [
    CHHapticEventParameter(parameterID: .audioVolume, value: volume),
    CHHapticEventParameter(parameterID: .decayTime, value: decay),
    CHHapticEventParameter(parameterID: .sustained, value: sustained)
]
let breathingTimes = pacer.breathingTimeInSeconds
let combinedTimes = breathingTimes.inhale + breathingTimes.exhale
let audioEvent = CHHapticEvent(
    audioResourceID: selectedAudio,
    parameters: audioParameters,
    relativeTime: 0,
    duration: combinedTimes
)
events.append(audioEvent)
...
let pattern = try CHHapticPattern(events: events, parameterCurves: [])
let player = try engine.makeAdvancedPlayer(with: pattern)
...
try player.start(atTime: CHHapticTimeImmediate)

My idea of activating an audio session before the player starts, to indicate to the system that audio is being played, also didn't change the outcome:

try AVAudioSession.sharedInstance().setActive(true)

Is there a different way to route the audio from Core Haptics to an output other than the integrated speakers?
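One thing I'm planning to try next, included here in case it helps, is the CHHapticEngine initializer that takes an audio session configured for Bluetooth A2DP (a sketch; I haven't confirmed it changes the routing):

```swift
import AVFoundation
import CoreHaptics

// Configure the shared session to allow A2DP routing before the
// engine exists, then hand that session to the haptic engine.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, mode: .default,
                        options: [.allowBluetoothA2DP])
try session.setActive(true)

let engine = try CHHapticEngine(audioSession: session)
try engine.start()
```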