AVAudioSession


Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.

AVAudioSession Documentation

Posts under AVAudioSession tag

91 Posts
Post not yet marked as solved
0 Replies
46 Views
My app is trying to continuously record audio from the background. Due to user feedback, I'm setting the AVAudioSession to use the .multiRoute category and the .mixWithOthers option, because otherwise, if the device is connected to a car with CarPlay, output from the car's radio is muted. The only drawback seems to be that in this setup, controlling the phone's volume with the hardware volume buttons no longer works, which users also dislike. I've searched the docs, this forum, and others for any documentation of this behavior: is there a way to set up the session so it handles volume changes again, or, failing that, how am I expected to receive notifications of these button presses and forward them to the right place? Unfortunately, I didn't find anything. Can anyone offer any ideas?
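For reference, a minimal sketch of the session configuration described above (the function name is hypothetical; the category and option come from the post):

import AVFAudio

func configureBackgroundRecordingSession() throws {
    let session = AVAudioSession.sharedInstance()
    // .multiRoute keeps the car's own audio playing under CarPlay while recording,
    // and .mixWithOthers avoids silencing other apps' audio.
    try session.setCategory(.multiRoute, mode: .default, options: [.mixWithOthers])
    try session.setActive(true)
}

With this category the hardware volume buttons reportedly stop affecting the app; observing the session's key-value-observable outputVolume property at least surfaces volume changes, though it is not a substitute for hardware-button control.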
Posted
by mss.
Last updated
.
Post not yet marked as solved
0 Replies
89 Views
https://developer.apple.com/videos/play/wwdc2023/10235/ - In this WWDC session, at 3:19, Apple introduced the "Other audio ducking" feature. In iOS 17, we can control the amount of 'other audio' ducking through AVAudioEngine. Is this also possible with AVAudioSession? We are using an AVAudioSession for a VoIP call while concurrently attempting to play a video through an AVPlayer. However, the volume of the AVPlayer is considerably low. Does anyone have any ideas on how to achieve the level of control that AVAudioEngine offers?
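For context, the AVAudioEngine-side control referenced in that WWDC session looks roughly like the sketch below (the exact initializer is an assumption based on the session's sample code); whether the same degree of control can be reached through AVAudioSession alone is exactly the open question here.

import AVFAudio

@available(iOS 17.0, *)
func minimizeOtherAudioDucking(on engine: AVAudioEngine) throws {
    // Other-audio ducking control lives on the voice-processing I/O,
    // so voice processing has to be enabled on the input node first.
    try engine.inputNode.setVoiceProcessingEnabled(true)

    // Ask the voice-processing unit to duck other audio as little as possible.
    engine.inputNode.voiceProcessingOtherAudioDuckingConfiguration =
        AVAudioVoiceProcessingOtherAudioDuckingConfiguration(
            enableAdvancedDucking: false,
            duckingLevel: .min
        )
}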
Posted
by dhilipr.
Last updated
.
Post not yet marked as solved
0 Replies
98 Views
Hello, I'm developing a voice communication app using the LiveKit SDK. Everything works fine in the foreground: the AudioSession is activated and audio is transmitted. However, I would like to add a feature: I would like my app to receive audio even when it's in the background or terminated. I know I can run code in that state by sending a background push notification, but the one thing that doesn't work in that case is the AudioSession activation. It fails with the error "Session activation failed", with no further clues. I tried every combination of category and mode, but no success. Background modes in Xcode have been enabled:
- Audio, AirPlay, and Picture in Picture
- Background Processing
Is this a limitation of LiveKit? I would be grateful if someone could point me in the right direction.
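For reference, a minimal sketch of the kind of activation attempt described (category, mode, and options are assumptions, not a known fix for the background-push case):

import AVFAudio

func activateVoIPAudioSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        // Typical VoIP configuration; in the background-push scenario the
        // setActive call is what reportedly fails with "Session activation failed".
        try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.allowBluetooth])
        try session.setActive(true)
    } catch {
        print("AVAudioSession activation failed: \(error)")
    }
}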
Posted
by enrico-g.
Last updated
.
Post not yet marked as solved
0 Replies
68 Views
Sometimes when I'm putting on or taking off clothes, I accidentally bump the Digital Crown of my Apple Watch or AirPods Max, and the volume suddenly becomes very loud. This has been bothering me for a long time. I followed the instructions in https://support.apple.com/zh-sg/guide/iphone/iphb71f9b54d/ios, but I couldn't find the relevant settings; the system option is "Reduce Loud Audio", rather than a way to lower the volume (iOS 17.4). I searched, but I couldn't find any related apps on the App Store. I asked an AI and it provided a possible solution, so I want to learn Swift and create an app myself (I've only been learning for less than a week). The general idea of the suggested solution is to listen for the routeChange event of AVAudioSession through NotificationCenter, then use MPVolumeView to get the slider and set the slider's value to cap the volume. However, when I debugged it, I found that it didn't work even after setting the value. I would like to ask where the problem might be and how I should adjust it?

@objc func setMaximumVolume() {
    if !enableMaxvolume {
        return
    }
    let volumeView = MPVolumeView()
    if let slider = volumeView.subviews.first as? UISlider {
        slider.value = Float(self.maximumVolume / 100)
        print("setMaximumVolume: \(slider.value)")
    }
}
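For reference, the notification-observer half of that idea might look like the sketch below (class and property names are hypothetical; it does not explain why setting the MPVolumeView slider has no effect):

import Foundation
import UIKit
import AVFAudio
import MediaPlayer

final class VolumeLimiter {
    // Note: MPVolumeView generally needs to live in the app's view hierarchy
    // for its slider to control the system volume.
    private let volumeView = MPVolumeView()
    var maximumVolume: Double = 60

    func startObserving() {
        // Re-apply the cap whenever the audio route changes
        // (e.g. AirPods Max or a Watch route connects).
        NotificationCenter.default.addObserver(
            forName: AVAudioSession.routeChangeNotification,
            object: AVAudioSession.sharedInstance(),
            queue: .main
        ) { [weak self] _ in
            self?.applyCap()
        }
    }

    private func applyCap() {
        // Same approach as the post: find MPVolumeView's internal slider.
        if let slider = volumeView.subviews.compactMap({ $0 as? UISlider }).first {
            slider.value = Float(maximumVolume / 100)
        }
    }
}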
Posted Last updated
.
Post not yet marked as solved
0 Replies
124 Views
I'm using AVAudioEngine to play AVAudioPCMBuffers. I'd like to synchronize some events with the playback. For example, if the audio's frame position is >= some point && less than some other point, trigger some code. So I'm looking at:

- (void)installTapOnBus:(AVAudioNodeBus)bus
             bufferSize:(AVAudioFrameCount)bufferSize
                 format:(AVAudioFormat * __nullable)format
                  block:(AVAudioNodeTapBlock)tapBlock;

Now I have the frame positions calculated (predetermined before the audio is scheduled; I already made all the necessary computations). So I just need to fire code at certain points during playback:

[playerNode installTapOnBus:bus
                 bufferSize:bufferSize
                     format:format
                      block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    // Inspect current audio here and fire...
}];

[playerNode scheduleBuffer:fullbuffer
                    atTime:startTime
                   options:0
    completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
         completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType) {
    // some code is here, not important to this question.
}];

The problem I'm having is figuring out where in the full buffer I am within the tap block. The tap block passes chunks (not the full audio buffer). I tried using the when parameter of the block to calculate the frame position relative to the entire audio, but have been unsuccessful so far. I'm assuming the when parameter is relative to the buffer passed to the tap block (not the entire audio buffer I scheduled). Not installing a tap and just using a timer before scheduling my fullBuffer has given me good results, but I'd rather avoid using a timer if possible and use sample time.
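One way to reason about the when parameter (a sketch, not a confirmed answer): the tap's AVAudioTime is in the node's time base, and AVAudioPlayerNode can convert it to player time, i.e. samples relative to when the scheduled playback started. In Swift it would look roughly like this (triggerStartFrame and triggerEndFrame are placeholders); the Objective-C equivalent is playerTimeForNodeTime:.

playerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { _, when in
    // Convert node time to player time: sampleTime is then relative to the
    // start of playback rather than to this particular tap chunk.
    guard let playerTime = playerNode.playerTime(forNodeTime: when) else { return }
    let framePosition = playerTime.sampleTime   // frames since playback started

    if framePosition >= triggerStartFrame && framePosition < triggerEndFrame {
        // fire the event
    }
}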
Posted Last updated
.
Post not yet marked as solved
0 Replies
103 Views
The application is developed in SwiftUI. Our application is responsible for recording audio, transcribing the audio file, and uploading it to the backend. So the two main components in the iOS application are AVAudioRecorder and SFSpeechRecognizer. The UI comprises a visual design which showcases the recording of audio and lets the user know whether audio is being recorded, using a Text component. Lately the customer has been complaining that although the application says "Recording" in the UI, their audio is not being received at the backend. The customers tried restarting their device (iPad) and the application started working normally. We haven't been able to reproduce the issue, but we suspect an intermittent failure in audio transmission or a potential UI freeze. Note: I have tried using the Leaks instrument and did not encounter any memory leaks while using the application. Is there a way to determine whether the issue lies with the audio recorder, the speech recognizer, or elsewhere in the app? Are there any known issues or limitations with the audio recorder lately on iOS that could be causing this behaviour? Please let me know if you have any suggestions to diagnose this issue. Also, do let me know if more information is required. Thank you in advance.
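As one diagnostic angle (a sketch, not a known root cause): AVAudioRecorder reports failures through its delegate, and audio-session interruptions arrive via notification, so logging both can reveal whether recording silently stopped while the UI still said "Recording".

import Foundation
import AVFAudio

final class RecorderDiagnostics: NSObject, AVAudioRecorderDelegate {
    override init() {
        super.init()
        // Interruptions (calls, Siri, etc.) can stop recording without the UI noticing.
        NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance(),
            queue: .main
        ) { note in
            print("Audio session interruption: \(note.userInfo ?? [:])")
        }
    }

    func audioRecorderDidFinishRecording(_ recorder: AVAudioRecorder, successfully flag: Bool) {
        print("Recorder finished, success = \(flag)")   // false indicates a recording/encoding failure
    }

    func audioRecorderEncodeErrorDidOccur(_ recorder: AVAudioRecorder, error: Error?) {
        print("Recorder encode error: \(String(describing: error))")
    }
}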
Posted
by Tan123.
Last updated
.
Post not yet marked as solved
0 Replies
132 Views
Hey all! I'm building a Camera app using AVFoundation, and I am using the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput delegates. (I cannot use AVCaptureMovieFileOutput because I am doing some processing in between.) When recording the audio CMSampleBuffers to the AVAssetWriter, I noticed that compared to the stock iOS camera app, they are mono audio, not stereo audio. I wonder how recording in stereo audio works. Are there any guides or documentation available for that? Is a stereo audio frame still one CMSampleBuffer, or will it be multiple CMSampleBuffers? Do I need to synchronize them? Do I need to set up the AVAssetWriter/AVAssetWriterInput differently? This is my Audio Session code:

func configureAudioSession(configuration: CameraConfiguration) throws {
  ReactLogger.log(level: .info, message: "Configuring Audio Session...")

  // Prevent iOS from automatically configuring the Audio Session for us
  audioCaptureSession.automaticallyConfiguresApplicationAudioSession = false
  let enableAudio = configuration.audio != .disabled

  // Check microphone permission
  if enableAudio {
    let audioPermissionStatus = AVCaptureDevice.authorizationStatus(for: .audio)
    if audioPermissionStatus != .authorized {
      throw CameraError.permission(.microphone)
    }
  }

  // Remove all current inputs
  for input in audioCaptureSession.inputs {
    audioCaptureSession.removeInput(input)
  }
  audioDeviceInput = nil

  // Audio Input (Microphone)
  if enableAudio {
    ReactLogger.log(level: .info, message: "Adding Audio input...")
    guard let microphone = AVCaptureDevice.default(for: .audio) else {
      throw CameraError.device(.microphoneUnavailable)
    }
    let input = try AVCaptureDeviceInput(device: microphone)
    guard audioCaptureSession.canAddInput(input) else {
      throw CameraError.parameter(.unsupportedInput(inputDescriptor: "audio-input"))
    }
    audioCaptureSession.addInput(input)
    audioDeviceInput = input
  }

  // Remove all current outputs
  for output in audioCaptureSession.outputs {
    audioCaptureSession.removeOutput(output)
  }
  audioOutput = nil

  // Audio Output
  if enableAudio {
    ReactLogger.log(level: .info, message: "Adding Audio Data output...")
    let output = AVCaptureAudioDataOutput()
    guard audioCaptureSession.canAddOutput(output) else {
      throw CameraError.parameter(.unsupportedOutput(outputDescriptor: "audio-output"))
    }
    output.setSampleBufferDelegate(self, queue: CameraQueues.audioQueue)
    audioCaptureSession.addOutput(output)
    audioOutput = output
  }
}

This is how I activate the audio session just before I start recording:

let audioSession = AVAudioSession.sharedInstance()
try audioSession.updateCategory(AVAudioSession.Category.playAndRecord,
                                mode: .videoRecording,
                                options: [.mixWithOthers, .allowBluetoothA2DP, .defaultToSpeaker, .allowAirPlay])
if #available(iOS 14.5, *) {
  // prevents the audio session from being interrupted by a phone call
  try audioSession.setPrefersNoInterruptionsFromSystemAlerts(true)
}
if #available(iOS 13.0, *) {
  // allow system sounds (notifications, calls, music) to play while recording
  try audioSession.setAllowHapticsAndSystemSoundsDuringRecording(true)
}
audioCaptureSession.startRunning()

And this is how I set up the AVAssetWriter:

let audioSettings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: options.fileType)
let format = audioInput.device.activeFormat.formatDescription
audioWriter = AVAssetWriterInput(mediaType: .audio,
                                 outputSettings: audioSettings,
                                 sourceFormatHint: format)
audioWriter!.expectsMediaDataInRealTime = true
assetWriter.add(audioWriter!)
ReactLogger.log(level: .info, message: "Initialized Audio AssetWriter.")

The rest is trivial - I receive CMSampleBuffers of the audio in my delegate's callback, write them to the audioWriter, and it ends up in the .mov file - but it is not stereo, it's mono. Is there anything I'm missing here?
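For what it's worth, a sketch of how stereo capture is usually requested at the AVAudioSession level (iOS 14+), by selecting a built-in-mic data source that supports the stereo polar pattern; whether this is sufficient in an AVCaptureAudioDataOutput pipeline is an assumption, not a confirmed answer.

import AVFAudio

func requestStereoCapture() throws {
    let session = AVAudioSession.sharedInstance()

    // Find the built-in microphone port.
    guard let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }) else { return }
    try session.setPreferredInput(builtInMic)

    // Pick a data source that supports the stereo polar pattern (iOS 14+).
    if let stereoSource = builtInMic.dataSources?.first(where: {
        $0.supportedPolarPatterns?.contains(.stereo) == true
    }) {
        try stereoSource.setPreferredPolarPattern(.stereo)
        try builtInMic.setPreferredDataSource(stereoSource)
    }
}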
Posted
by mrousavy.
Last updated
.
Post not yet marked as solved
0 Replies
137 Views
Hi all, I'm working on an app that involves measuring the heading of one iPhone relative to another iPhone. I need to be able to record audio from at least two of the built-in data sources at once. Does anyone know how I can achieve this? I've found that, when using the .measurement mode for an AVAudioSession, the stereo polar pattern is not available. Also, it doesn't seem possible to select multiple data sources. Is there something I'm missing? If this is not possible, why not?
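A small diagnostic sketch for checking what the session actually exposes under a given mode (enumeration only; it does not work around the single-data-source limitation):

import AVFAudio

func dumpInputCapabilities() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.record, mode: .measurement)
    try session.setActive(true)

    for input in session.availableInputs ?? [] {
        print("Port: \(input.portName) [\(input.portType.rawValue)]")
        for source in input.dataSources ?? [] {
            // Under .measurement, .stereo typically does not appear here.
            let patterns = source.supportedPolarPatterns?.map(\.rawValue) ?? []
            print("  Data source: \(source.dataSourceName), polar patterns: \(patterns)")
        }
    }
}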
Posted
by nd-0r.
Last updated
.
Post not yet marked as solved
3 Replies
191 Views
I'm facing an issue where I can't play an audio file stored in my project after receiving a push-to-talk notification. Strangely, I'm able to play the audio file by tapping on a button before receiving the push notification, but it doesn't work afterward, without any error messages. I've ensured that I've set up everything correctly in my project's capabilities. Any insights on what might be causing this issue would be greatly appreciated.
- I set everything in Capabilities
- Set permission in the .plist
- Request permission in the app delegate
- I make the connection to the room when the app becomes active, and it succeeds
- Then I set up .halfDuplex for this channel
- In restoredChannelUUID I activate the AVAudioSession
- After sending the PTT push, I parse the speaker and make it the activeRemoteParticipant; I see that the delegate function channelManager didActivate is called correctly
- When I try to play audio from my player, I see prints in the console, but no sound plays
Posted
by dmitry225.
Last updated
.
Post not yet marked as solved
0 Replies
175 Views
In my application, I use CallKit and have supportsHolding = true set. During my phone call, another call comes in (e.g., GSM). I accept the incoming call and put the current call on hold. If I end the active call myself, everything is fine, and CallKit calls the method provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession). However, if the other party ends the call, the second call remains on hold. In the application, the user clicks on unhold, and I notify CallKit that the hold has ended. But in this case, the didActivate method is not called at all. If I try to activate the audio myself after unhold, I receive the error: Domain=NSOSStatusErrorDomain Code=561017449 "Session activation failed" UserInfo={NSLocalizedDescription=Session activation failed} AVAudioSessionErrorInsufficientPriority == NSOSStatusErrorDomain Code: 561017449 What needs to be done for CallKit to activate my audio?
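For reference, a sketch of the unhold request described above via CXCallController (the controller and call UUID are placeholders; this shows only the request, not why didActivate is never called afterwards):

import CallKit

let callController = CXCallController()

func endHold(for callUUID: UUID) {
    // Ask CallKit to take the held call off hold; audio activation is then
    // expected to arrive via provider(_:didActivate:) once the system allows it.
    let unholdAction = CXSetHeldCallAction(call: callUUID, onHold: false)
    callController.request(CXTransaction(action: unholdAction)) { error in
        if let error = error {
            print("Unhold request failed: \(error)")
        }
    }
}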
Posted Last updated
.
Post not yet marked as solved
0 Replies
175 Views
I am using the .playAndRecord category and .videoChat mode with the .duckOthers option. During an audio call, when I try to call setActive(true), an exception occurs, and when I try setActive(true) again after the audio call has ended, I don't get any exception, but no voice comes through. Below is what I am trying to do. So once the initial session activation attempt fails, the system is not activating the session. I have already used AVAudioSession.interruptionNotification, but it still doesn't set up the session as desired when the audio call ends.

try session.setCategory(.playAndRecord, mode: .videoChat, options: .duckOthers)
try session.setActive(true)
Posted Last updated
.
Post marked as solved
1 Replies
207 Views
I'm attempting to record from a device's microphone (under iOS) using AVAudioRecorder. The examples are all quite simple, and I'm following the same method. But I'm getting error messages on attempts to record, and the resulting M4A file (after several seconds of recording) is only 552 bytes long and won't load. Here's the recorder usage:

func startRecording() {
    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 22050,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    do {
        recorder = try AVAudioRecorder(url: tempFileURL(), settings: settings)
        recorder?.delegate = self
        recorder!.record()
        recording = true
    } catch {
        recording = false
        recordingFinished(success: false)
    }
}

The immediate sign of trouble appears to be the following, in the console. Note the 0 bits per channel and irrelevant 8K sample rate:

AudioQueueObject.cpp:1580  BuildConverter: AudioConverterNew returned -50
  from: 0 ch, 8000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame
  to: 1 ch, 8000 Hz, Int16

A subsequent attempt to load the file into AVAudioPlayer results in:

MP4_BoxParser.cpp:1089  DataSource read failed
MP4AudioFile.cpp:4365  MP4Parser_PacketProvider->GetASBD() failed
AudioFileObject.cpp:105  OpenFromDataSource failed
AudioFileObject.cpp:80  Open failed

But that's not surprising given that it's only 500+ bytes and we had the earlier error. Anybody have an idea here? Every example on the Web shows essentially this exact method. I've also tried constructing the recorder with

let audioFormat = AVAudioFormat.init(standardFormatWithSampleRate: 44100, channels: 1)
if audioFormat == nil {
    print("Audio format failed.")
} else {
    do {
        recorder = try AVAudioRecorder(url: tempFileURL(), format: audioFormat!)
        ...

with mostly the same result. In that case the instantiation error message was the following, which at least mentions the requested sample rate:

AudioQueueObject.cpp:1580  BuildConverter: AudioConverterNew returned -50
  from: 0 ch, 44100 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame
  to: 1 ch, 44100 Hz, Int32
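Not a confirmed diagnosis, but for comparison, a minimal recording sketch that also configures and activates a record-capable AVAudioSession before creating the recorder and uses an explicitly typed settings dictionary; both differences from the code above are assumptions about what might matter.

import AVFAudio

func startRecording(to url: URL) throws -> AVAudioRecorder {
    // A recorder needs a session that is allowed to capture input.
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)

    // Explicitly [String: Any] so the numeric values keep their intended types.
    let settings: [String: Any] = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 22050.0,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]

    let recorder = try AVAudioRecorder(url: url, settings: settings)
    recorder.record()
    return recorder
}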
Posted Last updated
.
Post not yet marked as solved
0 Replies
157 Views
I need to duck the audio coming from ApplicationMusicPlayer while playing a local file using AVAudioPlayer. I've tried using the duckOthers option as follows, but it doesn't work:

let appAudioSession = AVAudioSession.sharedInstance()
do {
    try appAudioSession.setCategory(.playAndRecord, mode: .default, options: .duckOthers)
} catch {
    // handle error
}

Maybe this is because there's one session for the entire app, and ApplicationMusicPlayer is using it? This is a fairly critical problem for my application, since Music content is always much louder than locally recorded content. Any insight appreciated.
Posted Last updated
.
Post not yet marked as solved
1 Replies
910 Views
I'd like to allow the speech synthesizer to play on the device speaker while simultaneously mixing with a phone call. I've worked with a number of different configurations but am unable to find one that achieves the functionality I'm after - or that allows mixing with a phone call at all. There is a flag, mixToTelephonyUplink, that seems to suggest at least some mixing with a phone call is possible using the speech synthesizer, but I'm currently unable to find almost any documentation about this flag besides the basic API docs. I've had some luck at least getting the synthesizer to always play to the speaker with the following audio session configuration - but the sound is never mixed with a phone call. Instead, it is ducked and muted while the phone call takes place. I've tried quite a few configuration combinations for the category and overrides, but nothing seems to work quite as I'd expect it to.

synthesizer.mixToTelephonyUplink = true
try? audioSession.setCategory(.playback, mode: .voicePrompt, options: [.mixWithOthers, .defaultToSpeaker])
try? audioSession.setActive(true, options: [])
try? audioSession.overrideOutputAudioPort(.speaker)

Is there some kind of documentation for this that's off the beaten path that I'm somehow missing? I'm going to continue with guess and check, but I'm starting to think this flag - and the functionality it implies - actually wasn't ever fully implemented.
Posted
by d-skinio.
Last updated
.
Post not yet marked as solved
0 Replies
159 Views
Background: When I receive the interruption-began notification (the interruption type is AVAudioSessionInterruptionTypeBegan), I pause playing music. When I receive the interruption-ended notification (the interruption type is AVAudioSessionInterruptionTypeEnded), I resume playing music. However, sometimes I get the error code AVAudioSessionErrorCodeCannotInterruptOthers (560557684).
Some solutions: I searched Stack Overflow; there are some similar questions, but the solutions there are not very satisfying, because I don't want my app to mix with others, and, once again, it all works most of the time. My app already uses remote control events, so this doesn't solve anything.
Questions:
1. Has anyone ever encountered this problem?
2. Can we solve this problem, and how?
3. In addition, I noticed that there's a property named otherAudioPlaying in AVAudioSession, so we can know another app is playing; the question is whether we can know which app is playing.
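For reference, a sketch of the interruption handling described above (pausePlayback and resumePlayback are hypothetical helpers); the resume path is where the reactivation error can surface.

import Foundation
import AVFAudio

// Registered for AVAudioSession.interruptionNotification.
func handleInterruption(_ note: Notification) {
    guard let rawType = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }

    switch type {
    case .began:
        pausePlayback()
    case .ended:
        let rawOptions = note.userInfo?[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
        let options = AVAudioSession.InterruptionOptions(rawValue: rawOptions)
        if options.contains(.shouldResume) {
            do {
                try AVAudioSession.sharedInstance().setActive(true)
                resumePlayback()
            } catch {
                // This is where AVAudioSessionErrorCodeCannotInterruptOthers can show up.
                print("Reactivation failed: \(error)")
            }
        }
    @unknown default:
        break
    }
}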
Posted
by haitao_.
Last updated
.
Post not yet marked as solved
1 Replies
369 Views
My project uses an AVAudioEngine with a very simple setup: a speech recognizer running on a tap on the engine's input, with separate AVAudioPlayerNodes handling playback.

try session.setCategory(.playAndRecord, mode: .default, options: [])
try session.setActive(true, options: .notifyOthersOnDeactivation)
try session.setAllowHapticsAndSystemSoundsDuringRecording(true)

filePlayerNode ---> engine.mainMixerNode
bufferPlayerNode --> engine.mainMixerNode
engine.mainMixerNode --> engine.outputNode

// bufferPlayer.scheduleBuffer() is called on its own queue

The input works fine, since the buffers can be collected into a file and play back correctly, and also because the recognizer works fine; but when I try to play the live audio by sending the buffer to the bufferPlayer on this or another device, the buffer audio plays at a very low volume, sometimes with severe distortion. If I lower the sample rate via AVAudioConverter, the distortion gets worse. I've tried experimenting with the AVAudioSession category options, having separate AVAudioEngines, and much, much more, yet I still haven't figured this out. It's gotten to the point where I've fixed almost all the arcane and minor issues in my audio system, yet I still can't play back my voice properly. The ability to both play and record simultaneously is a basic feature of phones - when on speaker mode, a phone doesn't need to behave like a walkie-talkie. In my mind, it's inconceivable that the relatively new AVAudioEngine doesn't have an implementation for this, since the main issue (feedback loops) can be dealt with via a simple primitive circuit. Live video chat apps like FaceTime wouldn't be possible without this, yet to my surprise I found no answers online (what I did find were articles explaining how to write a file while playback is occurring). Is there truly no way to do this on AVAudioEngine? Am I missing something fundamental? Any pointers would be greatly appreciated.
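The wiring described above, written out as an AVAudioEngine graph for clarity (node names come from the post; the tap parameters and the buffer-forwarding logic are assumptions):

import AVFAudio

let engine = AVAudioEngine()
let filePlayerNode = AVAudioPlayerNode()
let bufferPlayerNode = AVAudioPlayerNode()

engine.attach(filePlayerNode)
engine.attach(bufferPlayerNode)

// Both players feed the main mixer, which feeds the output.
engine.connect(filePlayerNode, to: engine.mainMixerNode, format: nil)
engine.connect(bufferPlayerNode, to: engine.mainMixerNode, format: nil)
engine.connect(engine.mainMixerNode, to: engine.outputNode, format: nil)

// Tap the microphone for the speech recognizer (and, in the post's case,
// to forward buffers to bufferPlayerNode for live playback).
let inputFormat = engine.inputNode.outputFormat(forBus: 0)
engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { buffer, _ in
    // e.g. hand the buffer to the recognizer and/or bufferPlayerNode.scheduleBuffer(buffer)
}

try engine.start()
filePlayerNode.play()
bufferPlayerNode.play()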
Posted
by wmk.
Last updated
.
Post not yet marked as solved
1 Replies
250 Views
I'm trying to integrate CallKit into a Flutter app that uses WebRTC for calls, and I have an issue with taking calls on the locked screen. CXAnswerCallAction requires action.fulfill() to be called after the connection is established. Here is a piece of code that does not wait for the connection to be established:

guard let call = self.callManager?.callWithUUID(uuid: action.callUUID) else {
    action.fail()
    return
}
call.data.isAccepted = true
self.answerCall = call
self.callManager?.updateCall(call)
sendEvent(SwiftCallKeepPlugin.ACTION_CALL_ACCEPT, call.data.toJSON())
DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(1200)) {
    self.configureAudioSession()
}
action.fulfill()
}

This causes the connection time counter to be immediately visible on the screen, but the user still has to wait for the connection to be established and can't hear anything. Here is the code that waits for the connection to be established before calling action.fulfill():

if(self.awaitedConnection.uuid != uuid) {
    action.fail()
} else if(self.awaitedConnection.isConnected) {
    DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(1200)) {
        self.configureAudioSession()
    }
    action.fulfill()
} else {
    DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(1000)) {
        self.waitForConnection(uuid: uuid, action: action)
    }
}
}

public func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
    guard let call = self.callManager?.callWithUUID(uuid: action.callUUID) else {
        action.fail()
        return
    }
    call.data.isAccepted = true
    self.answerCall = call
    self.callManager?.updateCall(call)
    self.awaitedConnection.uuid = action.callUUID
    self.awaitedConnection.isConnected = false
    sendEvent(SwiftCallKeepPlugin.ACTION_CALL_ACCEPT, call.data.toJSON())
    waitForConnection(uuid: action.callUUID, action: action)
}

Unfortunately, though it works great on iOS 15.7, on 17.3 it causes a lack of audio - no sound and no recording. I also can't enable audio later while the call is ongoing. For reference:

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(AVAudioSession.Category.playAndRecord, options: AVAudioSession.CategoryOptions.allowBluetooth)
    try session.setMode(self.getAudioSessionMode(data?.audioSessionMode ?? "voiceChat"))
    try session.setActive(data?.audioSessionActive ?? true)
    try session.setPreferredSampleRate(data?.audioSessionPreferredSampleRate ?? 44100.0)
    try session.setPreferredIOBufferDuration(data?.audioSessionPreferredIOBufferDuration ?? 0.005)
} catch {
    print(error)
}
}

I can see in the docs of action.fulfill() that "You should only call this method from the implementation of a CXProviderDelegate method". Is this the reason for the issue? But how can I do that if I need to wait for the connection asynchronously and the provider method is synchronous?
Posted
by cand123.
Last updated
.
Post not yet marked as solved
1 Replies
352 Views
Hello, I’ve been trying to play system sounds in my app, but this hasn’t really been working. I am frequently switching between speech recognition (Speech framework) and sounds, so perhaps that’s where the issue lies. However, despite my best efforts, I haven't been able to solve the issue. I've been resetting the AVAudioSession category before playing a sound or starting speech recognition (as depicted in the code snippet below), to no avail. Has this happened to anyone else? Does anybody know how to fix the issue?

recognizer = nil
try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [])
try? AVAudioSession.sharedInstance().setActive(true)
AudioServicesPlaySystemSound(1113)

try? AVAudioSession.sharedInstance().setCategory(.record, mode: .spokenAudio, options: [])
try? AVAudioSession.sharedInstance().setActive(true)
recognizer = SpeechRecognition(word: wordSheet)
recognizer!.startRecognition()

Thank you.
Posted Last updated
.
Post not yet marked as solved
1 Replies
306 Views
There is a CustomPlayer class, and inside it MTAudioProcessingTap is used to modify the audio buffer. Say there are instances A and B of the CustomPlayer class. While A and B are both running, when A finishes its operation and its instance is terminated, B's MTAudioProcessingTap process is stopped and the finalize callback is invoked, even though B still has work left to do. With the same code and the same project, this does not happen on iOS 17.0 or lower: there, when A is terminated, B can complete its task without any impact. What changes in iOS 17.1 are causing this behavior? I'd appreciate an answer on how to avoid these issues.

let audioMix = AVMutableAudioMix()
var audioMixParameters: [AVMutableAudioMixInputParameters] = []
try composition.tracks(withMediaType: .audio).forEach { track in
    let inputParameter = AVMutableAudioMixInputParameters(track: track)
    inputParameter.trackID = track.trackID
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: UnsafeMutableRawPointer(
            Unmanaged.passRetained(clientInfo).toOpaque()
        ),
        init: { tap, clientInfo, tapStorageOut in
            tapStorageOut.pointee = clientInfo
        },
        finalize: { tap in
            Unmanaged<ClientInfo>.fromOpaque(MTAudioProcessingTapGetStorage(tap)).release()
        },
        prepare: nil,
        unprepare: nil,
        process: { tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut in
            var timeRange = CMTimeRange.zero
            let status = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut, flagsOut, &timeRange, numberFramesOut)
            if noErr == status {
                ....
            }
        })
    var tap: Unmanaged<MTAudioProcessingTap>?
    let status = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks, kMTAudioProcessingTapCreationFlag_PostEffects, &tap)
    guard noErr == status else { return }
    inputParameter.audioTapProcessor = tap?.takeUnretainedValue()
    audioMixParameters.append(inputParameter)
    tap?.release()
}
audioMix.inputParameters = audioMixParameters
return audioMix
Posted
by koggo2.
Last updated
.