// Add an observer for routeChangeNotification here
func testAudioRoute() {
// My app is a VoIP app, so I need to set "playAndRecord" and "allowBluetooth"
try? AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: [.duckOthers, .allowBluetooth, .allowBluetoothA2DP])
NotificationCenter.default.addObserver(self, selector: #selector(currentRouteChanged(noti:)), name: AVAudioSession.routeChangeNotification, object: nil)
}
// Print the "availableInputs" once got a notification
@objc func currentRouteChanged(noti: Notification) {
let availableInputs = AVAudioSession.sharedInstance().availableInputs?.compactMap({ $0.portType }) ?? []
let currentRouteInputs = AVAudioSession.sharedInstance().currentRoute.inputs.compactMap({ $0.portType })
let currentRouteOutputs = AVAudioSession.sharedInstance().currentRoute.outputs.compactMap({ $0.portType })
print("willtest: \navailableInputs=\(availableInputs), \ncurrentRouteInputs=\(currentRouteInputs), \ncurrentRouteOutputs=\(currentRouteOutputs)")
/*
When BT (AirPods Pro 2) is CONNECTED, the handler prints the following when the notification arrives; this is correct.
----------------------------------------------------------
willtest:
availableInputs=[__C.AVAudioSessionPort(_rawValue: MicrophoneBuiltIn), __C.AVAudioSessionPort(_rawValue: BluetoothHFP)],
currentRouteInputs=[],
currentRouteOutputs=[__C.AVAudioSessionPort(_rawValue: BluetoothA2DPOutput)]
----------------------------------------------------------
When BT (AirPods Pro 2) is DISCONNECTED, the handler prints the following when the notification arrives; this is wrong.
----------------------------------------------------------
availableInputs=[__C.AVAudioSessionPort(_rawValue: MicrophoneBuiltIn), __C.AVAudioSessionPort(_rawValue: BluetoothHFP)],
currentRouteInputs=[],
currentRouteOutputs=[__C.AVAudioSessionPort(_rawValue: Speaker)]
*/
}
So my question here is:
Why does the "availableInputs" still contain the "C.AVAudioSessionPort(_rawValue: BluetoothHFP)" item even though I have already disconnected the BT device? (Put AirPods in the case.)
BTW, if I tap the "Manual" button once I disconnected the BT, it also prints the "wrong" value for "availableInputs", and it will become normal after about 3~4 seconds.
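For reference, a minimal sketch (a hypothetical, separate observer, not my production handler) that also reads the route-change reason from the notification's userInfo, which distinguishes .oldDeviceUnavailable (the Bluetooth device really went away) from other changes:
import AVFoundation
// Sketch only: log the route-change reason alongside the available inputs.
let routeObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil,
    queue: .main
) { note in
    if let raw = note.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt,
       let reason = AVAudioSession.RouteChangeReason(rawValue: raw) {
        print("route change reason:", reason.rawValue)
    }
    let inputs = AVAudioSession.sharedInstance().availableInputs?
        .map { $0.portType.rawValue } ?? []
    print("availableInputs:", inputs)
}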
Audio: Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.
I am having trouble accessing the lastPlayedDate for any given album or track using MusicKit. The value is always nil for the numerous albums and tracks I tested.
As far as I know, this is not a property that has to be fetched separately, like tracks for example.
I am running this on my physical iPhone 12 on iOS 18.1.1 with Xcode 16.1. The albums and tracks have definitely been played multiple times before. The app has permission to the library via MusicAuthorization.request().
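For illustration, a minimal sketch (assuming iOS 16+ and MusicLibraryRequest; not my exact code) of how the value is read:
import MusicKit
// Sketch only: fetch a few library albums and print their lastPlayedDate.
func printLastPlayedDates() async throws {
    let request = MusicLibraryRequest<Album>()
    let response = try await request.response()
    for album in response.items.prefix(5) {
        print(album.title, album.lastPlayedDate?.description ?? "nil")
    }
}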
This post mentions the same problem but offers no solution.
Thanks for any help
I'm trying to set up a listener for kAudioProcessPropertyIsRunningOutput, but it's never triggered. I get calls for kAudioProcessPropertyIsRunning and kAudioProcessPropertyDevices, but not for kAudioProcessPropertyIsRunningInput or kAudioProcessPropertyIsRunningOutput.
class MyDelegate: PropertyListenerDelegate {
    func propertiesChanged(properties: [AudioObjectPropertyAddress]) {
        print(properties)
    }
}

var myDelegate = MyDelegate()
var processes = try AudioHardwareSystem.shared.processes
for process in processes {
    process.delegates += [myDelegate]
    // Listen for every property via the wildcard selector, scope, and element.
    try process.addListener(forProperties: [AudioObjectPropertyAddress(mSelector: kAudioPropertyWildcardPropertyID,
                                                                       mScope: kAudioObjectPropertyScopeWildcard,
                                                                       mElement: kAudioObjectPropertyElementWildcard)])
}
Xcode 16.1
macOS 15.0.1
I’m experiencing an unusual audio issue with AirPods on macOS Sequoia while developing VoIP applications like Zoom and FaceTime.
When AirPods are connected, the other party’s voice sometimes sounds unnaturally stretched (approximately twice as long).
This problem can be temporarily fixed by switching the sound output settings from AirPods to speakers and then back to AirPods.
From our analysis, the issue appears to be related to the sample rate provided by AudioObjectGetPropertyData.
Here’s what we’ve observed:
When the issue occurs, the AudioStreamBasicDescription.sampleRate for AirPods is reported as 48000.
Under normal conditions, it’s reported as 24000.
It seems like the system is mistakenly returning a sample rate that doesn’t match the AirPods’ actual settings, perhaps defaulting to a system speaker value.
Once the output setting is toggled, the correct sampleRate (24000) is retrieved.
This discrepancy causes our application to transmit the audio stream at 48000, leading to the distorted playback.
Has anyone encountered a similar issue or knows how to resolve it?
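For reference, a minimal sketch (an assumption on my part, not our production code; it presumes the AirPods are the default output device) of how the nominal sample rate is queried through AudioObjectGetPropertyData:
import CoreAudio
// Sketch only: read the nominal sample rate Core Audio reports for the default output device.
func defaultOutputSampleRate() -> Double? {
    var deviceID = AudioObjectID(kAudioObjectUnknown)
    var size = UInt32(MemoryLayout<AudioObjectID>.size)
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultOutputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                     &address, 0, nil, &size, &deviceID) == noErr else { return nil }
    var sampleRate = Float64(0)
    size = UInt32(MemoryLayout<Float64>.size)
    address.mSelector = kAudioDevicePropertyNominalSampleRate
    guard AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &sampleRate) == noErr else { return nil }
    return sampleRate
}
Comparing this value before and after toggling the output device should show whether the 48000/24000 discrepancy is visible at this level.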
I'm using AVAudioEngine to play AVAudioPCMBuffers. I'd like to synchronize some events with the playback, for example: if the audio's frame position is >= some point and < another point, trigger some code.
So I'm looking at - (void)installTapOnBus:(AVAudioNodeBus)bus bufferSize:(AVAudioFrameCount)bufferSize format:(AVAudioFormat * __nullable)format block:(AVAudioNodeTapBlock)tapBlock;
I have the frame positions calculated ahead of time (they are predetermined before the audio is scheduled; I have already made all the necessary computations), so I just need to fire code at certain points during playback:
[playerNode installTapOnBus:bus
bufferSize:bufferSize
format:format
block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
//Inspect current audio here and fire...
}];
[playerNode scheduleBuffer:fullbuffer
atTime:startTime
options:0
completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType)
{
// some code is here, not important to this question.
}];
The problem I'm having is figuring out where in the full buffer I am within the tap block. The tap block passes chunks (not the full audio buffer). I tried using the when parameter of the block to calculate the frame position relative to the entire audio, but I have been unsuccessful so far. I'm assuming the when parameter is relative to the buffer passed into the tap block (not the entire audio buffer I scheduled).
Not installing a tap and just using a timer before scheduling my fullBuffer has given me good results but I'd rather avoid using a timer if possible and use sample time.
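For what it's worth, here is a minimal sketch (startFrame and endFrame are hypothetical placeholders, and it assumes the scheduled buffer starts at the beginning of the player's timeline) of converting the tap's when parameter into the player's own timeline:
import AVFoundation
// Sketch only: translate the tap callback time into frames since the player started.
let playerNode = AVAudioPlayerNode()               // attached to a running engine elsewhere
let startFrame: AVAudioFramePosition = 48_000      // hypothetical event window
let endFrame: AVAudioFramePosition = 96_000

playerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { _, when in
    guard let playerTime = playerNode.playerTime(forNodeTime: when) else { return }
    let framePosition = playerTime.sampleTime      // frames since playback began
    if framePosition >= startFrame && framePosition < endFrame {
        // fire the event associated with this window
    }
}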
Topic:
Media Technologies
SubTopic:
Audio
Tags:
AVAudioNode
AVAudioSession
AVAudioEngine
AVFoundation
I'd like to find out: Can backgrounded apps record audio?
In the past as I recall, I found that backgrounded apps were pretty restricted and couldn't do much of anything.
However I'm not familiar with the current state of affairs.
With iOS 15.8 and above, can backgrounded apps record audio if they've been given permission by the user to access the microphone?
Thanks.
Topic:
Media Technologies
SubTopic:
Audio
Description
As of iOS 18, AVAudioSession.setPreferredIOBufferDuration ignores the requested buffer size when Sound Recognition or Vocal Shortcuts is enabled. This results in 1) much larger buffer sizes and 2) mismatched buffer sizes between input and output buffers, which causes ‘glitchy’ audio and increased latency.
Additionally, when this issue occurs AVAudioSession.setPreferredIOBufferDuration continues to return ‘true’ and no error is produced.
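For context, a minimal sketch (not the example project's exact code; it assumes a 48 kHz route) of requesting a small IO buffer and logging what the session actually grants:
import AVFoundation
// Sketch only: request ~256 frames at 48 kHz and print the granted duration.
do {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, options: [.allowBluetooth])
    try session.setPreferredIOBufferDuration(256.0 / 48_000.0)
    try session.setActive(true)
    print("granted IO buffer duration:", session.ioBufferDuration)
    print("granted frames at current rate:", session.ioBufferDuration * session.sampleRate)
} catch {
    print("session configuration failed:", error)
}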
Steps to Reproduce:
Enable Vocal Shortcuts on a device running iOS 18. Enable at least one shortcut (e.g. Control Center).
Open or clone the example project (https://github.com/cwalo/SoundRecognitionBug)
Build and install the example project
Attach a headset and launch the application
Observe console logs showing
a requested buffer size of 0.005805 (256 samples @ 48k)
an actual buffer size of 0.023220 (1104 samples @48k - this is regularly the resulting buffer size in all of our tests)
Quit the app and detach the headset. Enable mutesOutput in AudioSystem.mm (to avoid feedback)
Launch the application
Observe
Same result from step 4
Mismatched hardware buffer size of 1104 and recorded frame count of 1024
Mismatched playbackCount and recordCount
Quit the app and disable vocal shortcuts
Launch the app
Observe IOBufferDuration matching the requested duration and matched buffer sizes (expected behavior)
Expected results:
Requested IOBufferDuration is respected or AVAudioSession returns false or error is produced
Input and output buffer sizes match
Device(s): iPhone 11 Pro, iPad Pro
OS: iOS 18.0.1
Environment: Xcode 16.1
FB: FB15715421
Related to: https://forums.developer.apple.com/forums/thread/765477
In our app we have implemented an AVAssetResourceLoaderDelegate to handle encrypted downloaded files. We have it working on all iOS versions, but we are seeing issues on iOS 15 (15.8.3) with large files (> 1 GB). We have so far seen two cases: either the load method on the AVURLAsset fails early and throws an unknown error code, or it starts requesting more data than the device has available RAM. The CPU usage is almost always over 100%, even after pausing playback. The memory issue can happen even though the player has successfully started playback.
When running on devices with iOS 16 and above, we set isEntireLengthAvailableOnDemand to true on the AVAssetResourceLoadingContentInformationRequest. This seems to be the key to solving the issue on those devices that support it. If we set the property to false, we see the same memory issue as on iOS 15.
So we have a solution for iOS 16 and upwards but are at a loss for how to handle iOS 15. Is there something we have overlooked or is it in fact an issue with that iOS version?
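For reference, a minimal sketch (not our production delegate; the MP4 content type and decryptedFileLength value are hypothetical) of where we set the property in the loader delegate:
import AVFoundation
// Sketch only: fill in the content information request and, on iOS 16+, mark the
// whole resource as available on demand.
final class EncryptedAssetLoaderDelegate: NSObject, AVAssetResourceLoaderDelegate {
    let decryptedFileLength: Int64 = 1_500_000_000   // hypothetical plaintext size

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        if let info = loadingRequest.contentInformationRequest {
            info.contentType = AVFileType.mp4.rawValue
            info.contentLength = decryptedFileLength
            info.isByteRangeAccessSupported = true
            if #available(iOS 16.0, *) {
                info.isEntireLengthAvailableOnDemand = true
            }
        }
        // ... respond to loadingRequest.dataRequest with decrypted bytes, then call finishLoading() ...
        return true
    }
}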
Hi!
I am creating an aumi AUv3 extension and I am trying to achieve simultaneous connections to multiple other AVAudioNodes. I would like to know whether it is possible to route the MIDI to different outputs inside the render process of the AUv3.
I am using connectMIDI(_:to:format:eventListBlock:) to connect the output of the AUv3 to multiple AVAudioNodes. However, when I send MIDI out of the AUv3, it gets sent to all the audio nodes connected to it. I can't seem to find any documentation on how to route the MIDI to only one of the connected nodes. Is this possible?
Hello,
I am trying to follow the getting started guide. I have produced a developer token via the MusicKit embedding approach and can confirm I'm successfully authorized.
When I try to play music, I'm unable to hear anything. I thought it could be an autoplay restriction in the browser, but that doesn't appear to be the cause, as triggering play from a button gives no further success.
const music = MusicKit.getInstance()

try {
  await music.authorize() // successful
  const result = await music.api.music(`/v1/catalog/gb/search`, {
    term: 'Sound Travels',
    types: 'albums',
  })
  await music.play()
} catch (error) {
  console.error('play error', error) // ! No error triggered
}
I have searched the forum, have found similar queries but apparently none using V3 of the API.
Other potentially helpful information:
OS: macOS 15.1 (24B83)
API version: V3
On localhost
Browser: Arc (Chromium-based); also tried Safari
The only difference between the two browsers is that Safari appears to exit the breakpoint, whereas Arc will continue (without throwing any errors)
authorizationStatus: 3
Side note, any reason this is still in beta so many years later?
The following is my playground code. Any of the Apple audio units show the plugin view; however, anything else (e.g. Kontakt, Spitfire, etc.) does not. There is no error, the area where the view is expected is simply blank.
import AppKit
import PlaygroundSupport
import AudioToolbox
import AVFoundation
import CoreAudioKit

let manager = AVAudioUnitComponentManager.shared()
let description = AudioComponentDescription(componentType: kAudioUnitType_MusicDevice,
                                            componentSubType: 0,
                                            componentManufacturer: 0,
                                            componentFlags: 0,
                                            componentFlagsMask: 0)
var deviceComponents = manager.components(matching: description)
var names = deviceComponents.map { $0.name }

let pluginName: String = "AUSampler" // This works
//let pluginName: String = "Kontakt" // This does not

var plugin = deviceComponents.filter { $0.name.contains(pluginName) }.first!
print("Plugin name: \(plugin.name)")

var customViewController: NSViewController?

AVAudioUnit.instantiate(with: plugin.audioComponentDescription, options: []) { avAudioUnit, error in
    let ilip = avAudioUnit!.auAudioUnit.isLoadedInProcess
    print("Loaded in process: \(ilip)")
    guard error == nil else {
        print("Error: \(error!.localizedDescription)")
        return
    }
    print("AudioUnit successfully created.")
    let audioUnit = avAudioUnit!.auAudioUnit
    audioUnit.requestViewController { vc in
        if let viewCtrl = vc {
            customViewController = viewCtrl
            let b = viewCtrl.view.bounds // (unused)
            PlaygroundPage.current.liveView = viewCtrl
            print("Successfully added view controller.")
        } else {
            print("Failed to load controller.")
        }
    }
}
Hello!
I have a problem getting extended album info from the user's library. Note that the app is authorized to use Apple Music, as described in the documentation.
I get albums from the user's library with this code:
func getLibraryAlbums() async throws -> MusicItemCollection<Album> {
    let request = MusicLibraryRequest<Album>()
    let response = try await request.response()
    return response.items
}
This is an example of the albums request response:
{
"data" : [
{
"meta" : {
"musicKit_identifierSet" : {
"isLibrary" : true,
"id" : "1945382328890400383",
"dataSources" : [
"localLibrary",
"legacyModel"
],
"type" : "Album",
"deviceLocalID" : {
"databaseID" : "37336CB19CF51727",
"value" : "1945382328890400383"
},
"catalogID" : {
"kind" : "adamID",
"value" : "1173535954"
}
}
},
"id" : "1945382328890400383",
"type" : "library-albums",
"attributes" : {
"artwork" : {
"url" : "musicKit:\/\/artwork\/transient\/{w}x{h}?id=4A2F444C%2D336D%2D49EA%2D90C8%2D13C547A5B95B",
"width" : 0,
"height" : 0
},
"genreNames" : [
"Pop"
],
"trackCount" : 1,
"artistName" : "Сара Окс",
"isAppleDigitalMaster" : false,
"audioVariants" : [
"lossless"
],
"playParams" : {
"catalogId" : "1173535954",
"id" : "1945382328890400383",
"musicKit_persistentID" : "1945382328890400383",
"kind" : "album",
"musicKit_databaseID" : "37336CB19CF51727",
"isLibrary" : true
},
"name" : "Нимфомания - Single",
"isCompilation" : false
}
},
{
"meta" : {
"musicKit_identifierSet" : {
"isLibrary" : true,
"id" : "-8570883332059662437",
"dataSources" : [
"localLibrary",
"legacyModel"
],
"type" : "Album",
"deviceLocalID" : {
"value" : "-8570883332059662437",
"databaseID" : "37336CB19CF51727"
},
"catalogID" : {
"kind" : "adamID",
"value" : "1618488499"
}
}
},
"id" : "-8570883332059662437",
"type" : "library-albums",
"attributes" : {
"isCompilation" : false,
"genreNames" : [
"Pop"
],
"trackCount" : 1,
"artistName" : "TIMOFEEW & KURYANOVA",
"isAppleDigitalMaster" : false,
"audioVariants" : [
"lossless"
],
"playParams" : {
"catalogId" : "1618488499",
"musicKit_persistentID" : "-8570883332059662437",
"kind" : "album",
"id" : "-8570883332059662437",
"musicKit_databaseID" : "37336CB19CF51727",
"isLibrary" : true
},
"artwork" : {
"url" : "musicKit:\/\/artwork\/transient\/{w}x{h}?id=BEA6DBD3%2D8E14%2D4A10%2D97BE%2D8908C7C5FC2C",
"width" : 0,
"height" : 0
},
"name" : "Не звони - Single"
}
},
...
]
}
In AlbumView, using the task view modifier, I request extended information about the album with this code:
func loadExtendedInfo(_ album: Album) async throws -> Album {
    let response = try await album.with([.tracks, .audioVariants, .recordLabels], preferredSource: .library)
    return response
}
But in the response, some of the fields are always nil, for example recordLabels, releaseDate, url, editorialNotes, and copyright.
Can you tell me what I'm doing wrong?
I'm getting the MatchError "MATCH_ATTEMPT_FAILED" every time I call matchStream in an Android Studio Java+Kotlin project. My project reads samples from the mic input using the AudioRecord class and sends them to ShazamKit via matchStream. I created a Kotlin class to handle ShazamKit. The AudioRecord is configured to be mono and 16-bit.
My Kotlin class:
class ShazamKitHelper {
val shazamScope = CoroutineScope(Dispatchers.IO + SupervisorJob())
lateinit var streaming_session: StreamingSession
lateinit var signature: Signature
lateinit var catalog: ShazamCatalog
fun createStreamingSessionAsync(developerTokenProvider: DeveloperTokenProvider, readBufferSize: Int, sampleRate: AudioSampleRateInHz
): CompletableFuture<Unit>{
return CompletableFuture.supplyAsync {
runBlocking {
runCatching {
shazamScope.launch {
createStreamingSession(developerTokenProvider,readBufferSize,sampleRate)
}.join()
}.onFailure { throwable ->
}.getOrThrow()
}
}
}
private suspend fun createStreamingSession(developerTokenProvider:DeveloperTokenProvider,readBufferSize: Int,sampleRateInHz: AudioSampleRateInHz) {
catalog = ShazamKit.createShazamCatalog(developerTokenProvider)
streaming_session = (ShazamKit.createStreamingSession(
catalog,
sampleRateInHz,
readBufferSize
) as ShazamKitResult.Success).data
}
fun startMatching() {
val audioData = sharedAudioData ?: return // Return if sharedAudioData is null
CoroutineScope(Dispatchers.IO).launch {
runCatching {
streaming_session.matchStream(audioData.data, audioData.meaningfulLengthInBytes, audioData.timestampInMs)
}.onFailure { throwable ->
Log.e("ShazamKitHelper", "Error during matchStream", throwable)
}
}
}
@JvmField
var sharedAudioData: AudioData? = null;
data class AudioData(val data: ByteArray, val meaningfulLengthInBytes: Int, val timestampInMs: Long)
fun startListeningForMatches() {
CoroutineScope(Dispatchers.IO).launch {
streaming_session.recognitionResults().collect { matchResult ->
when (matchResult) {
is MatchResult.Match -> {
val match = matchResult.matchedMediaItems
println("Match found: ${match.get(0).title} by ${match.get(0).artist}")
}
is MatchResult.NoMatch -> {
println("No match found")
}
is MatchResult.Error -> {
val error = matchResult.exception
println("Match error: ${error.message}")
}
}
}
}
}
}
My Java code reads the samples from a thread:
shazam_create_session();
while (audioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING){
if (shazam_session_created){
byte[] buffer = new byte[288000];//max_shazam_seconds * sampleRate * 2];
audioRecord.read(buffer,0,buffer.length,AudioRecord.READ_BLOCKING);
helper.sharedAudioData = new ShazamKitHelper.AudioData(buffer,buffer.length,System.currentTimeMillis());
helper.startMatching();
if (!listener_called){
listener_called = true;
helper.startListeningForMatches();
}
} else{
SystemClock.sleep(100);
}
}
private void shazam_create_session() {
MyDeveloperTokenProvider provider = new MyDeveloperTokenProvider();
AudioSampleRateInHz sample_rate = AudioSampleRateInHz.SAMPLE_RATE_48000;
if (sampleRate == 44100)
sample_rate = AudioSampleRateInHz.SAMPLE_RATE_44100;
CompletableFuture<Unit> future = helper.createStreamingSessionAsync(provider, 288000, sample_rate);
future.thenAccept(result -> {
shazam_session_created = true;
});
future.exceptionally(throwable -> {
Toast.makeText(mine, "Failure", Toast.LENGTH_SHORT).show();
return null;
});
}
I implemented the developer token provider in Java as follows:
public static class MyDeveloperTokenProvider implements DeveloperTokenProvider {
DeveloperToken the_token = null;
@NonNull
@Override
public DeveloperToken provideDeveloperToken() {
if (the_token == null){
try {
the_token = generateDeveloperToken();
return the_token;
} catch (NoSuchAlgorithmException | InvalidKeySpecException e) {
throw new RuntimeException(e);
}
} else{
return the_token;
}
}
@NonNull
private DeveloperToken generateDeveloperToken() throws NoSuchAlgorithmException, InvalidKeySpecException {
PKCS8EncodedKeySpec priPKCS8 = new PKCS8EncodedKeySpec(Decoders.BASE64.decode(p8));
PrivateKey appleKey = KeyFactory.getInstance("EC").generatePrivate(priPKCS8);
Instant now = Instant.now();
Instant expiration = now.plus(Duration.ofDays(90));
String jwt = Jwts.builder()
.header().add("alg", "ES256").add("kid", keyId).and()
.issuer(teamId)
.issuedAt(Date.from(now))
.expiration(Date.from(expiration))
.signWith(appleKey) // Specify algorithm explicitly
.compact();
return new DeveloperToken(jwt);
}
}
Does PHASE support creating new sound events at runtime? Is that implemented in the Unity plugin as well? Does PHASE support Unity's Addressables system; are they compatible?
I am unable to access the underlying Int32 error code from the errors that Core Audio throws as the Swift type AudioHardwareError. This is critical: there is no way to inspect these errors, or even to create an AudioHardwareError to test against.
do {
    _ = try AudioHardwareDevice(id: 0).streams // will throw
} catch {
    if let error = error as? AudioHardwareError { // cast to AudioHardwareError
        print(error) // prints the error code but not the errorDescription
    }
}
How can I reliably get the underlying Int32? Or create an AudioHardwareError from an error constant? There is no way for me to handle these errors in code or run tests without knowing what the error is.
On top of that, by default the error localizedDescription does not contain the errorDescription unless I extend AudioHardwareError with CustomStringConvertible.
extension AudioHardwareError: @retroactive CustomStringConvertible {
    public var description: String {
        return self.localizedDescription
    }
}
The problem I have at the moment is that if a phone call comes in during my recording, the recording is interrupted even if I don't answer.
When the interruption happens, the picture freezes; recording resumes automatically after the call ends, but the recorded video's audio and video end up out of sync.
By listening for AVCaptureSessionWasInterrupted, I can detect the interruption and its type.
As far as I can tell, a ringing or vibrating phone can take over the audio channel. I have seen the same scenario handled in other apps, where the ring tone or vibration is suppressed, but I don't know how to do it; I have tried a lot of approaches and none of them work.
In BlackmagicCam or the ProMovie app, when a call comes in during recording there is only a notification banner, with no ringtone or vibration, which avoids the recording interruption.
I don't know whether this requires some configuration or something that must be applied for; please let me know if it does.
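For reference, a minimal sketch (not my exact code; captureSession is a stand-in for the app's configured session) of observing the interruption and reading its reason:
import AVFoundation
// Sketch only: log why the capture session was interrupted, e.g.
// .audioDeviceInUseByAnotherClient when an incoming call takes the audio hardware.
let captureSession = AVCaptureSession()   // in practice, the app's configured session
let interruptionObserver = NotificationCenter.default.addObserver(
    forName: AVCaptureSession.wasInterruptedNotification,
    object: captureSession,
    queue: .main
) { note in
    if let value = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
       let reason = AVCaptureSession.InterruptionReason(rawValue: value) {
        print("capture interrupted, reason:", reason.rawValue)
    }
}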
Hello all! I've been having this issue for a while, on my iPhone 12 Pro.
When listening to music, watching YouTube, TikTok, etc., the volume will randomly lower, but the actual volume slider won't move; it will still be at max volume while the audio gets very quiet. I've followed other instructions, such as turning off audio awareness and other settings, but nothing seems to be working. It happens on phone calls too. Has anyone else had this issue and managed to fix it?
Topic:
Media Technologies
SubTopic:
Audio
I'm trying to load MusicKit on the server with SolidJS. I can confirm that my implementation has been sufficient to return authentication tokens and for MusicKit.isAuthorized to return true. My issue is that if I reload the page, it only succeeds intermittently (perhaps 25% of the time). My question is: what is wrong with my implementation? Removing the async keyword ensures it loads every time, but playing and queueing music no longer works. I'm currently assuming this is an SSR issue, but the docs haven't explicitly stated this isn't possible.
I have the following boilerplate:
export default createHandler(() => (
  <StartServer
    document={({ assets, children, scripts }) => {
      return (
        <html lang="en">
          <head>
            <meta name="apple-music-developer-token" content={authResult.token} />
            <meta name="apple-music-app-name" content="app name" />
            <meta name="apple-music-app-build" content="1978.4.1" />
            {assets}
            <script
              src="https://js-cdn.music.apple.com/musickit/v3/musickit.js"
              async
            />
          </head>
          <body>
            <div id="app">{children}</div>
            {scripts}
          </body>
        </html>
      )
    }}
  />
))
When I first load my app, I'll encounter:
musickit.js:13 Uncaught TypeError: Cannot read properties of undefined (reading 'node')
at musickit.js:13:10194
at musickit.js:13:140
at musickit.js:13:209
The intermittency suggests an issue related to the async keyword. An expansion on this issue can be found here.
We are using a VoiceProcessingIO audio unit in our VoIP application on Mac. In certain scenarios, the AudioComponentInstanceNew call blocks for up to five seconds (at least two). We are using the following code to initialize the audio unit:
OSStatus status;
AudioComponentDescription desc;
AudioComponent inputComponent;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
inputComponent = AudioComponentFindNext(NULL, &desc);
status = AudioComponentInstanceNew(inputComponent, &unit);
We are seeing the issue with current macOS versions on a host of different Macs (x86 and x64 alike). It takes two to three seconds until AudioComponentInstanceNew returns.
We also see the following errors in the log multiple times:
AUVPAggregate.cpp:2560 AggInpStreamsChanged wait failed
and those right after (which I don't know if they matter to this issue):
KeystrokeSuppressorCore.cpp:44 ERROR: KeystrokeSuppressor initialization was unsuccessful. Invalid or no plist was provided. AU will be bypassed.
vpStrategyManager.mm:486 Error code 2003332927 reported at GetPropertyInfo
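As a side note, here is a minimal Swift sketch (this does not explain the delay, it only moves the wait off the calling thread; an assumption on our side, not a confirmed fix) of instantiating the same unit asynchronously with AudioComponentInstantiate:
import AudioToolbox
// Sketch only: same component description as above, but the instantiation result
// is delivered in a completion handler instead of blocking the caller.
var asyncDesc = AudioComponentDescription(componentType: kAudioUnitType_Output,
                                          componentSubType: kAudioUnitSubType_VoiceProcessingIO,
                                          componentManufacturer: kAudioUnitManufacturer_Apple,
                                          componentFlags: 0,
                                          componentFlagsMask: 0)
if let component = AudioComponentFindNext(nil, &asyncDesc) {
    AudioComponentInstantiate(component, []) { instance, status in
        // Called once the voice-processing unit is ready (or with a non-zero status).
        print("instantiated:", instance as Any, "status:", status)
    }
}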
After updating to iPadOS 18.3, my iPad (7th generation) has no sound and will not play videos.