I've got a problem with my app while testing it on my own phone.
I'm using AudioKit to generate tones as part of the app. Everything seems to work fine: sounds start, stop, etc. They keep playing when the app is closed and when the phone is locked, so background audio is working.
However, I'm seeing an issue where, even after Stop is pressed and the application is exited, if I get a notification such as a text message, the base tone for the app starts to play.
If I then open the app and check the Start/Stop button, it says Start, so it hasn't been activated. If I tap Start, a second tone starts. That one stops with the Stop button, but the original tone that was set off by the incoming message carries on playing.
It only stops when I go to the app switcher and swipe the application away.
For the life of me, I can't figure out what's happening here.
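Is it possible the audio session simply never gets deactivated when Stop is pressed, so the system still treats the app as an active audio client when a notification sound plays? A minimal sketch of the kind of teardown I mean (hypothetical, not my actual code; as far as I know AudioKit doesn't do this automatically):

import AVFoundation

// Hypothetical teardown on Stop (assumption, not the original code):
// after stopping the AudioKit engine, also deactivate the shared audio
// session so nothing is left active in the background.
func releaseAudioSession() {
    do {
        try AVAudioSession.sharedInstance().setActive(false,
                                                      options: .notifyOthersOnDeactivation)
    } catch {
        print("Failed to deactivate audio session: \(error)")
    }
}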
I'm using an AVAudioConverter object to decode an OPUS stream for VoIP. The decoding itself works well; however, whenever the stream stalls (no more audio packets are available to decode because of network instability), this is audible as crackling or an abrupt stop in the decoded audio. OPUS can mitigate this with packet loss concealment, which is indicated by passing a null data pointer to the C library function
int opus_decode_float(OpusDecoder *st, const unsigned char *data, opus_int32 len, float *pcm, int frame_size, int decode_fec), see https://opus-codec.org/docs/opus_api-1.2/group__opus__decoder.html#ga9c554b8c0214e24733a299fe53bb3bd2.
However, with AVAudioConverter using Swift I'm constructing an AVAudioCompressedBuffer like so:
let compressedBuffer = AVAudioCompressedBuffer(
    format: VoiceEncoder.Constants.networkFormat,
    packetCapacity: 1,
    maximumPacketSize: data.count
)
compressedBuffer.byteLength = UInt32(data.count)
compressedBuffer.packetCount = 1
compressedBuffer.packetDescriptions!.pointee.mDataByteSize = UInt32(data.count)
data.copyBytes(
    to: compressedBuffer.data.assumingMemoryBound(to: UInt8.self),
    count: data.count
)
where data: Data contains the raw OPUS frame to be decoded.
How can I specify data loss in this context and cause the AVAudioConverter to output PCM data whenever no more input data is available?
More context:
I'm specifying the audio format like this:
static let frameSize: UInt32 = 960
static let sampleRate: Float64 = 48000.0

static var networkFormatStreamDescription = AudioStreamBasicDescription(
    mSampleRate: sampleRate,
    mFormatID: kAudioFormatOpus,
    mFormatFlags: 0,
    mBytesPerPacket: 0,
    mFramesPerPacket: frameSize,
    mBytesPerFrame: 0,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 0,
    mReserved: 0
)

static let networkFormat = AVAudioFormat(
    streamDescription: &networkFormatStreamDescription
)!
I've tried 1) setting byteLength and packetCount to zero, and 2) returning nil while setting .haveData in the AVAudioConverterInputBlock I'm using, with no success.
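For completeness, the conversion itself follows the usual AVAudioConverter input-block pattern, roughly like this (a simplified sketch, not my exact code):

// Simplified sketch of the decode call (not the exact production code):
// feed one compressed OPUS packet per request, or report "no data" when
// the network has stalled.
func decode(packet: AVAudioCompressedBuffer?,
            using converter: AVAudioConverter,
            into pcmBuffer: AVAudioPCMBuffer) -> AVAudioConverterOutputStatus {
    var conversionError: NSError?
    let status = converter.convert(to: pcmBuffer, error: &conversionError) { _, outStatus in
        if let packet {
            outStatus.pointee = .haveData
            return packet
        } else {
            // This is the branch where I would like to tell the decoder
            // "a packet was lost" instead of just starving it.
            outStatus.pointee = .noDataNow
            return nil
        }
    }
    if let conversionError { print("convert error: \(conversionError)") }
    return status
}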
Hello,
I am trying to follow the getting started guide. I have produced a developer token via the MusicKit embedding approach and can confirm I'm successfully authorized.
When I try to play music, I'm unable to hear anything. I thought it could be an auto-play restriction in the browser, but it doesn't appear to be related, as triggering play from a button doesn't help either.
const music = MusicKit.getInstance()
try {
    await music.authorize() // successful
    const result = await music.api.music(`/v1/catalog/gb/search`, {
        term: 'Sound Travels',
        types: 'albums',
    })
    await music.play()
} catch (error) {
    console.error('play error', error) // ! No error triggered
}
I have searched the forum and found similar queries, but apparently none using V3 of the API.
Other potentially helpful information:
OS: macOS 15.1 (24B83)
API version: V3
On localhost
Browser: Arc (Chromium-based); also tried Safari
The only difference between the two browsers is that Safari appears to exit the breakpoint, whereas Arc will continue (without throwing any errors).
authorizationStatus: 3
Side note, any reason this is still in beta so many years later?
I've been researching how to achieve a recording/playback effect in iOS similar to the hands-free (speakerphone) calling effect in the system Phone app. How can this be implemented? I tried using the voice-chat recording method, but found that the speaker output volume is too low. How should this be addressed? I couldn't find a suitable API. Could you provide me with some documentation or sample code? Thank you.
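My assumption is that the session needs to be routed to the built-in (loud) speaker explicitly; a minimal sketch of what I mean (unverified, names are my own):

import AVFoundation

// Sketch of my assumption (unverified): .voiceChat routes output to the
// receiver by default, so explicitly route to the loud speaker instead.
func configureSpeakerphoneStyleSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)
    // Or override the current route explicitly:
    try session.overrideOutputAudioPort(.speaker)
}

Is this the recommended approach, or is there a higher-level API for this?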
Hello,
I'm observing an intermittent memory leak being reported in the iOS Simulator when initializing and starting an AVAudioEngine. Even with minimal setup—just attaching a single AVAudioPlayerNode and connecting it to the mainMixerNode—Xcode's memory diagnostics and Instruments sometimes flag a leak.
Here is a simplified version of the code I'm using:
// ViewController.m
#import "ViewController.h"

// Declared in media.m
void soundCreate(void);

@interface ViewController ()
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
}

// Called when the user taps a button in the view controller.
- (IBAction)myButtonAction:(id)sender {
    NSLog(@"Test");
    soundCreate();
}

@end

// media.m
#import <AVFoundation/AVFoundation.h>

static AVAudioEngine *audioEngine = nil;

void soundCreate(void)
{
    if (audioEngine != nil)
        return;

    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
    [[AVAudioSession sharedInstance] setActive:YES error:nil];

    audioEngine = [[AVAudioEngine alloc] init];
    AVAudioPlayerNode *playerNode = [[AVAudioPlayerNode alloc] init];
    [audioEngine attachNode:playerNode];
    [audioEngine connect:playerNode to:[audioEngine mainMixerNode] format:nil];
    [audioEngine startAndReturnError:nil];
}
In the memory leak report, the following call stack is repeated, seemingly in a loop:
ListenerMap::InsertEvent(XAudioUnitEvent const&, ListenerBinding*) AudioToolboxCore
ListenerMap::AddParameter(AUListener*, void*, XAudioUnitEvent const&) AudioToolboxCore
AUListenerAddParameter AudioToolboxCore
addOrRemoveParameterListeners(OpaqueAudioComponentInstance*, AUListenerBase*, AUParameterTree*, bool) AudioToolboxCore
0x180178ddf
The following is my playground code. Any of the Apple audio units show the plugin view; however, anything else (e.g. Kontakt, Spitfire, etc.) does not. There is no error; the area where the view is expected is just blank.
import AppKit
import PlaygroundSupport
import AudioToolbox
import AVFoundation
import CoreAudioKit

let manager = AVAudioUnitComponentManager.shared()
let description = AudioComponentDescription(componentType: kAudioUnitType_MusicDevice,
                                            componentSubType: 0,
                                            componentManufacturer: 0,
                                            componentFlags: 0,
                                            componentFlagsMask: 0)

let deviceComponents = manager.components(matching: description)
let names = deviceComponents.map { $0.name }

let pluginName: String = "AUSampler" // This works
//let pluginName: String = "Kontakt" // This does not

let plugin = deviceComponents.filter { $0.name.contains(pluginName) }.first!
print("Plugin name: \(plugin.name)")

var customViewController: NSViewController?

AVAudioUnit.instantiate(with: plugin.audioComponentDescription, options: []) { avAudioUnit, error in
    guard error == nil else {
        print("Error: \(error!.localizedDescription)")
        return
    }
    print("AudioUnit successfully created.")

    let audioUnit = avAudioUnit!.auAudioUnit
    print("Loaded in process: \(audioUnit.isLoadedInProcess)")

    audioUnit.requestViewController { vc in
        if let viewController = vc {
            customViewController = viewController
            PlaygroundPage.current.liveView = viewController
            print("Successfully added view controller.")
        } else {
            print("Failed to load view controller.")
        }
    }
}
Hi everyone,
I'm running into an issue with AVAudioRecorder when handling interruptions such as phone calls or alarms.
Problem:
When the app is recording audio and an interruption occurs:
I handle the interruption with audioRecorder?.pause() inside AVAudioSession.interruptionNotification (on .began).
On .ended, I check for .shouldResume and call audioRecorder?.record() again.
The recorder resumes successfully, but only the audio recorded after the interruption is saved. The audio recorded before the interruption is lost, even though I'm using the same file URL and not recreating the recorder.
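For reference, the interruption handling is essentially this (a simplified sketch of what is described above; audioRecorder is the AVAudioRecorder property):

// Simplified sketch of the interruption handling described above.
@objc private func handleInterruption(_ notification: Notification) {
    guard let info = notification.userInfo,
          let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }

    switch type {
    case .began:
        audioRecorder?.pause()
    case .ended:
        let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
        if AVAudioSession.InterruptionOptions(rawValue: optionsValue).contains(.shouldResume) {
            audioRecorder?.record() // same recorder, same file URL
        }
    @unknown default:
        break
    }
}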
Repro:
Start a recording with AVAudioRecorder
Simulate a system interruption (e.g., incoming call)
Resume recording after the interruption
Stop and inspect the output audio file
Expected: Full audio (before and after interruption) should be saved.
Actual: Only the audio after interruption is saved; the earlier part is missing
Notes:
According to the documentation, calling .record() after .pause() should resume recording into the same file.
I confirmed that the file URL does not change, and I do not recreate the recorder instance.
No error is thrown by the system during this process.
This behavior happens consistently when the app is interrupted and resumed.
Question:
Is this a known issue? Is there a recommended workaround for preserving the full recording when interruptions happen?
Thanks in advance!
I'm developing a TTS Audio Unit Extension that needs to write trace/log files to a shared App Group container. While the main app can successfully create and write files to the container, the extension gets sandbox denied errors despite having proper App Group entitlements configured.
Setup:
Main App (Flutter) and TTS Audio Unit Extension share the same App Group
App Group is properly configured in developer portal and entitlements
Main app successfully creates and uses files in the container
Container structure shows existing directories (config/, dictionary/) with populated files
Both targets have App Group capability enabled and entitlements set
Current behavior:
Extension can access/read the App Group container
Extension can see existing directories and files
All write attempts are blocked with "sandbox deny(1) file-write-create" errors
Code example:
const char* createSharedGroupPathWithComponent(const char* groupId, const char* component) {
    NSString* groupIdStr = [NSString stringWithUTF8String:groupId];
    NSString* componentStr = [NSString stringWithUTF8String:component];

    NSURL* url = [[NSFileManager defaultManager]
        containerURLForSecurityApplicationGroupIdentifier:groupIdStr];
    NSURL* fullPath = [url URLByAppendingPathComponent:componentStr];

    NSError* error = nil;
    if (![[NSFileManager defaultManager] createDirectoryAtPath:fullPath.path
                                    withIntermediateDirectories:YES
                                                     attributes:nil
                                                          error:&error]) {
        NSLog(@"Unable to create directory %@", error.localizedDescription);
    }
    return [[fullPath path] UTF8String];
}
Error output:
Sandbox: simaromur-extension(996) deny(1) file-write-create /private/var/mobile/Containers/Shared/AppGroup/36CAFE9C-BD82-43DD-A962-2B4424E60043/trace
Key questions:
Are there additional entitlements required for TTS Audio Unit Extensions to write to App Group containers?
Is this a known limitation of TTS Audio Unit Extensions?
What is the recommended way to handle logging/tracing in TTS Audio Unit Extensions?
If writing to App Group containers is not supported, what alternatives are available?
Current entitlements:
<dict>
    <key>com.apple.security.application-groups</key>
    <array>
        <string>group.com.<company>.<appname></string>
    </array>
</dict>
Hi. I work on an audio app for iOS which is successfully using the MPRemoteCommandCenter for commands like next, back, skip forward, skip backward etc.
I am trying to implement playback rate controls in my app (so that users can change the playback speed of audio to 0.5x or 2x for example).
While the above commands work, the changePlaybackRateCommand does not seem to. I have enabled the command, given it a target/handler, and set supported rates. With the other commands, this caused the UI to change on the lock screen, in Control Center, etc., by adding the control for the command (a next button for the next command, for example). However, it does not seem to do anything for the playback rate command.
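For reference, my setup is essentially the following (simplified sketch; the example rates and the player line are placeholders):

import MediaPlayer

// Simplified sketch of the setup described above.
func configurePlaybackRateCommand() {
    let command = MPRemoteCommandCenter.shared().changePlaybackRateCommand
    command.isEnabled = true
    command.supportedPlaybackRates = [0.5, 1.0, 1.5, 2.0]
    _ = command.addTarget { event in
        guard let rateEvent = event as? MPChangePlaybackRateCommandEvent else {
            return .commandFailed
        }
        // player.rate = rateEvent.playbackRate  // placeholder for my AVPlayer
        MPNowPlayingInfoCenter.default().nowPlayingInfo?[MPNowPlayingInfoPropertyPlaybackRate] = rateEvent.playbackRate
        return .success
    }
}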
I can implement my own "rate button" UI and rate-change handling, but I'm wondering if this is a known bug on Apple's side. Looking online, it seems other people face the same issue and haven't been able to get this command to work. Why is this API provided if it doesn't seem to do anything? Is there something I'm missing?
Kind regards.
Hello everyone,
I am working on an app that allows you to review your own music using Apple Music. Currently I am running into an issue with skipping forward and backward outside of the app.
How it should work: When skipping forward or backwards on the lock or home screen of an iPhone, the next or previous song on an album should play and the information should change to reflect that in the app.
If you play a song in Apple Music, you can see a Now Playing view on the lock screen.
When you skip forward or backward, it performs the action, and you can see it reflected by the little frequency icon on the song's artwork.
What it's doing: When skipping forward or backwards on the lock or home screen of an iPhone, the next or previous song is reflected outside of the app, but not in the app.
When skipping a song outside of the app, it works correctly to head to the next song.
But when I return to the app, it is not reflected.
NOTE: I am not using MusicKit types such as Track or Album to display the songs. Since I want to grab the songs and review them, I need a rating, so I created my own type that stores the MusicItemID, name, artist(s), etc.
NOTE: I am using ApplicationMusicPlayer.shared
Is there a way to get the song to reflect in my app?
(If it's easier, a simple example would be nice. No need to create an entire Xcode project.)
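This is roughly what I expected to be able to do, as a sketch, assuming the shared player's queue can be observed (I may be missing something):

import MusicKit
import SwiftUI

// Sketch (assumption on my part): observe the shared player's queue so the
// in-app UI follows skips performed from the lock screen.
struct NowPlayingTitleView: View {
    @ObservedObject private var queue = ApplicationMusicPlayer.shared.queue

    var body: some View {
        // currentEntry should change when the system skips forward/backward.
        Text(queue.currentEntry?.title ?? "Nothing playing")
    }
}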
According to the header file, the outputVolume property's supported range is 0.0-1.0:
/*! @property outputVolume
    @abstract The mixer's output volume.
    @discussion
        This accesses the mixer's output volume (0.0-1.0, inclusive).
*/
@property (nonatomic) float outputVolume;
However, when setting the volume to 2.0, the audio does indeed play louder. Is the header file out of date, and if so, what is the supported range for outputVolume?
Thanks
Hi. I am working on an audio app for iOS. I have added the CPNowPlayingPlaybackRateButton to my CPNowPlayingTemplate.
When the button is clicked, my handler changes the rate in the AVPlayer and updates the MPNowPlayingInfoCenter to the new rate, for example, 2.0.
Throughout, the CarPlay button always displays "0x". I am wondering how to get this UI to accurately reflect the playback rate the user has selected, as always displaying 0x is a poor user experience.
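For reference, the now-playing update I perform looks roughly like this (simplified sketch):

import MediaPlayer

// Simplified sketch of the update performed when the rate button is tapped.
// (Setting the default-rate key as well is my assumption, not something
// described above.)
func updateNowPlayingRate(to rate: Float) {
    var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
    info[MPNowPlayingInfoPropertyPlaybackRate] = rate
    info[MPNowPlayingInfoPropertyDefaultPlaybackRate] = 1.0
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}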
You may suggest MPChangePlaybackRateCommand is relevant here, but I have not been able to get that to work either, and judging by posts online, not many other people have either. I have made a post about that here: https://developer.apple.com/forums/thread/773099
Is this a known Apple bug? Is there a way to get the UI to accurately reflect the playback rate of my audio?
Kind regards.
I'm working on adding CarPlay support to an audio app and am running into an issue. Occasionally, when a user opens the app from CarPlay while the main app scene is either not connected or is currently in the background, I will receive an error when attempting to activate the audio session. The code below mimics my setup:
do {
    try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio)
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print(error) // NSOSStatusErrorDomain - 560557684: Session activation failed
}
That error code maps to AVAudioSession.ErrorCode.cannotInterruptOthers.
Once in this state, all subsequent attempts to play different pieces of content will fail. However, things will start working normally if the user opens the app on their phone and tries again from CarPlay (while the app is in the foreground on their phone).
I'm not sure why it would behave this way and want to note that I do have the audio background mode capability enabled.
Has anyone else encountered this? Are there any workarounds or changes I could make to prevent this from happening?
Hi. I am working on an audio app for iOS. I have implemented UI and handling which allows the user to change playback rate of audio. When the user selects a different rate, I update the rate property on my AVQueuePlayer. This is working well on device.
When I use AirPlay, it works for some devices and not for others. Some devices won't change playback rate and will always play at 1x speed.
Is this possibly a limitation of those third-party devices? Or is there something I'm missing or should check? I would love to get playback rate changes working across all AirPlay devices with our app.
Kind regards.
I am developing an app that uses MusicKit to play music, and then I need to have spoken words played to the user while ducking the audio coming from MusicKit (the application music player).
The built-in Siri voices are not of sufficient quality, so I am using an external service to create an MP3 file and then play it back using AVAudioPlayer.
Sample code below.
The problem I am having is that .duckOthers is not ducking the Application Music Player output.
Is this a bug, or am I doing this wrong?
// Configure audio session for system-wide ducking
try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio,
                                                options: [.duckOthers, .mixWithOthers])
try AVAudioSession.sharedInstance().setActive(true)

// Request a short IO buffer duration
try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.005)

// Create and configure audio player
self.audioPlayer = try AVAudioPlayer(data: audioData)
self.audioPlayer?.delegate = self
self.audioPlayer?.volume = 1.0 // Ensure full volume for speech
self.audioPlayer?.prepareToPlay()

// Set the audio player's settings for maximum clarity
self.audioPlayer?.enableRate = false
self.audioPlayer?.pan = 0.0 // Center the audio

self.audioPlayer?.play()
Hello!
I have a problem getting extended album info from the user's library. Note that the app is authorized to use Apple Music according to the documentation.
I get albums from the user's library with this code:
func getLibraryAlbums() async throws -> MusicItemCollection<Album> {
    let request = MusicLibraryRequest<Album>()
    let response = try await request.response()
    return response.items
}
This is an example of the albums request response:
{
"data" : [
{
"meta" : {
"musicKit_identifierSet" : {
"isLibrary" : true,
"id" : "1945382328890400383",
"dataSources" : [
"localLibrary",
"legacyModel"
],
"type" : "Album",
"deviceLocalID" : {
"databaseID" : "37336CB19CF51727",
"value" : "1945382328890400383"
},
"catalogID" : {
"kind" : "adamID",
"value" : "1173535954"
}
}
},
"id" : "1945382328890400383",
"type" : "library-albums",
"attributes" : {
"artwork" : {
"url" : "musicKit:\/\/artwork\/transient\/{w}x{h}?id=4A2F444C%2D336D%2D49EA%2D90C8%2D13C547A5B95B",
"width" : 0,
"height" : 0
},
"genreNames" : [
"Pop"
],
"trackCount" : 1,
"artistName" : "Сара Окс",
"isAppleDigitalMaster" : false,
"audioVariants" : [
"lossless"
],
"playParams" : {
"catalogId" : "1173535954",
"id" : "1945382328890400383",
"musicKit_persistentID" : "1945382328890400383",
"kind" : "album",
"musicKit_databaseID" : "37336CB19CF51727",
"isLibrary" : true
},
"name" : "Нимфомания - Single",
"isCompilation" : false
}
},
{
"meta" : {
"musicKit_identifierSet" : {
"isLibrary" : true,
"id" : "-8570883332059662437",
"dataSources" : [
"localLibrary",
"legacyModel"
],
"type" : "Album",
"deviceLocalID" : {
"value" : "-8570883332059662437",
"databaseID" : "37336CB19CF51727"
},
"catalogID" : {
"kind" : "adamID",
"value" : "1618488499"
}
}
},
"id" : "-8570883332059662437",
"type" : "library-albums",
"attributes" : {
"isCompilation" : false,
"genreNames" : [
"Pop"
],
"trackCount" : 1,
"artistName" : "TIMOFEEW & KURYANOVA",
"isAppleDigitalMaster" : false,
"audioVariants" : [
"lossless"
],
"playParams" : {
"catalogId" : "1618488499",
"musicKit_persistentID" : "-8570883332059662437",
"kind" : "album",
"id" : "-8570883332059662437",
"musicKit_databaseID" : "37336CB19CF51727",
"isLibrary" : true
},
"artwork" : {
"url" : "musicKit:\/\/artwork\/transient\/{w}x{h}?id=BEA6DBD3%2D8E14%2D4A10%2D97BE%2D8908C7C5FC2C",
"width" : 0,
"height" : 0
},
"name" : "Не звони - Single"
}
},
...
]
}
In AlbumView, using the task view modifier, I request extended information about the album with this code:
func loadExtendedInfo(_ album: Album) async throws -> Album {
    let response = try await album.with([.tracks, .audioVariants, .recordLabels],
                                        preferredSource: .library)
    return response
}
But in the response, some of the fields are always nil, for example recordLabels, releaseDate, url, editorialNotes, and copyright.
Can you tell me what I'm doing wrong?
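The only workaround I can think of (an assumption on my part, not verified) is to fetch the catalog version of the album, using the catalogId that appears in the library response above, since those fields may only be populated on catalog items:

import MusicKit

// Sketch (assumption, not verified): request the catalog counterpart of the
// library album; fields like editorialNotes, recordLabels and releaseDate
// may only exist on catalog items.
func loadCatalogAlbum(withCatalogID id: MusicItemID) async throws -> Album? {
    var request = MusicCatalogResourceRequest<Album>(matching: \.id, equalTo: id)
    request.properties = [.tracks, .audioVariants, .recordLabels]
    let response = try await request.response()
    return response.items.first
}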
Hi, in my project I am using AVFoundation for recording audio. We are using the AVAudioMixerNode method below to record the audio packets.
func installTap(
    onBus bus: AVAudioNodeBus,
    bufferSize: AVAudioFrameCount,
    format: AVAudioFormat?,
    block tapBlock: @escaping AVAudioNodeTapBlock
)
It works perfectly fine.
But in production, for a small percentage of users, we are facing an issue where, after recording a few packets, the tap stops delivering audio automatically without the audio engine stopping. Can anyone help explain why this happens? I have also observed mediaServicesWereResetNotification and added a log on receiving this notification, but when the issue happens I don't see that log. Also, is there any callback for when the engine stops?
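For what it's worth, these are the notifications I'm considering observing to detect this (a sketch based on my assumptions, not a confirmed fix):

import AVFoundation

// Sketch (assumption): notifications to watch in addition to
// mediaServicesWereResetNotification when the tap stops delivering buffers.
func observeEngineStops(for engine: AVAudioEngine) {
    NotificationCenter.default.addObserver(
        forName: .AVAudioEngineConfigurationChange,
        object: engine,
        queue: .main
    ) { _ in
        // Fired when a route/configuration change stops the engine;
        // the engine can be restarted here if engine.isRunning is false.
        print("Engine configuration changed, running = \(engine.isRunning)")
    }

    NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { notification in
        print("Audio session interruption: \(notification.userInfo ?? [:])")
    }
}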
I'm getting the MatchResult error "MATCH_ATTEMPT_FAILED" every time I call matchStream in an Android Studio Java+Kotlin project. My project reads the samples from the mic input using the AudioRecord class and sends them to ShazamKit via matchStream. I created a Kotlin class to handle ShazamKit. The AudioRecord is configured to be mono and 16-bit.
My Kotlin class:
class ShazamKitHelper {
    val shazamScope = CoroutineScope(Dispatchers.IO + SupervisorJob())

    lateinit var streaming_session: StreamingSession
    lateinit var signature: Signature
    lateinit var catalog: ShazamCatalog

    fun createStreamingSessionAsync(
        developerTokenProvider: DeveloperTokenProvider,
        readBufferSize: Int,
        sampleRate: AudioSampleRateInHz
    ): CompletableFuture<Unit> {
        return CompletableFuture.supplyAsync {
            runBlocking {
                runCatching {
                    shazamScope.launch {
                        createStreamingSession(developerTokenProvider, readBufferSize, sampleRate)
                    }.join()
                }.onFailure { throwable ->
                }.getOrThrow()
            }
        }
    }

    private suspend fun createStreamingSession(
        developerTokenProvider: DeveloperTokenProvider,
        readBufferSize: Int,
        sampleRateInHz: AudioSampleRateInHz
    ) {
        catalog = ShazamKit.createShazamCatalog(developerTokenProvider)
        streaming_session = (ShazamKit.createStreamingSession(
            catalog,
            sampleRateInHz,
            readBufferSize
        ) as ShazamKitResult.Success).data
    }

    fun startMatching() {
        val audioData = sharedAudioData ?: return // Return if sharedAudioData is null
        CoroutineScope(Dispatchers.IO).launch {
            runCatching {
                streaming_session.matchStream(audioData.data, audioData.meaningfulLengthInBytes, audioData.timestampInMs)
            }.onFailure { throwable ->
                Log.e("ShazamKitHelper", "Error during matchStream", throwable)
            }
        }
    }

    @JvmField
    var sharedAudioData: AudioData? = null

    data class AudioData(val data: ByteArray, val meaningfulLengthInBytes: Int, val timestampInMs: Long)

    fun startListeningForMatches() {
        CoroutineScope(Dispatchers.IO).launch {
            streaming_session.recognitionResults().collect { matchResult ->
                when (matchResult) {
                    is MatchResult.Match -> {
                        val match = matchResult.matchedMediaItems
                        println("Match found: ${match.get(0).title} by ${match.get(0).artist}")
                    }
                    is MatchResult.NoMatch -> {
                        println("No match found")
                    }
                    is MatchResult.Error -> {
                        val error = matchResult.exception
                        println("Match error: ${error.message}")
                    }
                }
            }
        }
    }
}
My code in java reads the samples from a thread:
shazam_create_session();
while (audioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
    if (shazam_session_created) {
        byte[] buffer = new byte[288000]; // max_shazam_seconds * sampleRate * 2
        audioRecord.read(buffer, 0, buffer.length, AudioRecord.READ_BLOCKING);
        helper.sharedAudioData = new ShazamKitHelper.AudioData(buffer, buffer.length, System.currentTimeMillis());
        helper.startMatching();
        if (!listener_called) {
            listener_called = true;
            helper.startListeningForMatches();
        }
    } else {
        SystemClock.sleep(100);
    }
}

private void shazam_create_session() {
    MyDeveloperTokenProvider provider = new MyDeveloperTokenProvider();
    AudioSampleRateInHz sample_rate = AudioSampleRateInHz.SAMPLE_RATE_48000;
    if (sampleRate == 44100)
        sample_rate = AudioSampleRateInHz.SAMPLE_RATE_44100;
    CompletableFuture<Unit> future = helper.createStreamingSessionAsync(provider, 288000, sample_rate);
    future.thenAccept(result -> {
        shazam_session_created = true;
    });
    future.exceptionally(throwable -> {
        Toast.makeText(mine, "Failure", Toast.LENGTH_SHORT).show();
        return null;
    });
}
I implemented the developer token provider in Java as follows:
public static class MyDeveloperTokenProvider implements DeveloperTokenProvider {
    DeveloperToken the_token = null;

    @NonNull
    @Override
    public DeveloperToken provideDeveloperToken() {
        if (the_token == null) {
            try {
                the_token = generateDeveloperToken();
                return the_token;
            } catch (NoSuchAlgorithmException | InvalidKeySpecException e) {
                throw new RuntimeException(e);
            }
        } else {
            return the_token;
        }
    }

    @NonNull
    private DeveloperToken generateDeveloperToken() throws NoSuchAlgorithmException, InvalidKeySpecException {
        PKCS8EncodedKeySpec priPKCS8 = new PKCS8EncodedKeySpec(Decoders.BASE64.decode(p8));
        PrivateKey appleKey = KeyFactory.getInstance("EC").generatePrivate(priPKCS8);
        Instant now = Instant.now();
        Instant expiration = now.plus(Duration.ofDays(90));
        String jwt = Jwts.builder()
                .header().add("alg", "ES256").add("kid", keyId).and()
                .issuer(teamId)
                .issuedAt(Date.from(now))
                .expiration(Date.from(expiration))
                .signWith(appleKey)
                .compact();
        return new DeveloperToken(jwt);
    }
}
Does anyone know how to play the sound of a specific instrument when you tap a button on the screen of an iPhone or iPad? I'm in the middle of creating a music-learning app, and I'm thinking of assigning single notes or chords to the button-like frames on the on-screen keyboard and fingerboard. Can this be achieved with SwiftUI alone? I remember that General MIDI Level 1 had a way of playing instrument sounds, but I don't know how to implement the same functionality on the current OS. Please lend me your wisdom.
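For example, is something along these lines the intended approach? (A sketch of my assumption using AVAudioUnitSampler, not code from a working app.)

import AVFoundation
import SwiftUI

// Minimal sketch (one possible approach, not the only one): an AVAudioEngine
// with an AVAudioUnitSampler, triggered from a SwiftUI button.
final class NotePlayer: ObservableObject {
    private let engine = AVAudioEngine()
    private let sampler = AVAudioUnitSampler()

    init() {
        engine.attach(sampler)
        engine.connect(sampler, to: engine.mainMixerNode, format: nil)
        try? engine.start()
        // Optionally load a SoundFont instrument here, e.g.
        // try? sampler.loadSoundBankInstrument(at: soundFontURL, program: 0,
        //     bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
        //     bankLSB: UInt8(kAUSampler_DefaultBankLSB))
    }

    func play(note: UInt8) {
        sampler.startNote(note, withVelocity: 100, onChannel: 0)
    }

    func stop(note: UInt8) {
        sampler.stopNote(note, onChannel: 0)
    }
}

struct KeyButton: View {
    @StateObject private var player = NotePlayer()
    var body: some View {
        Button("Middle C") { player.play(note: 60) }
    }
}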
Hello,
I can successfully match music using ShazamKit on Apple platforms with SwiftUI, in a simple app that lets the user load an audio file and extracts the corresponding match. However, I am unable to match music using ShazamKit on Android. I am trying to make the same simple app, but I cannot match music, as I get MATCH_ATTEMPT_FAILED every time I try. I don't know what I am doing wrong; the Shazam part of the Kotlin Android code is in this method:
suspend fun processAudioFileInBackground(
    filePath: String,
    developerTokenProvider: DeveloperTokenProvider
) = withContext(Dispatchers.IO) {
    val bufferSize = 1024 * 1024
    val audioFile = FileInputStream(filePath)
    val byteBuffer = ByteBuffer.allocate(bufferSize)
    byteBuffer.order(ByteOrder.LITTLE_ENDIAN)
    var bytesRead: Int
    while (audioFile.read(byteBuffer.array()).also { bytesRead = it } != -1) {
        val signatureGenerator = (ShazamKit.createSignatureGenerator(AudioSampleRateInHz.SAMPLE_RATE_44100) as ShazamKitResult.Success).data
        signatureGenerator.append(byteBuffer.array(), bytesRead, System.currentTimeMillis())
        val signature = signatureGenerator.generateSignature()
        println("Signature: ${signature.durationInMs}")

        val catalog = ShazamKit.createShazamCatalog(developerTokenProvider, Locale.ENGLISH)
        val session = (ShazamKit.createSession(catalog) as ShazamKitResult.Success).data
        val matchResult = session.match(signature)
        println("MatchResult : $matchResult")
        setMatchResult(matchResult)
        byteBuffer.clear()
    }
    audioFile.close()
}
I noticed that changing the Locale in catalog creation yields a different result: I get NoMatch without an exception. Can you please help me with this? Do I need to create a custom catalog?