Good morning everyone,
I am developing a Flutter app for Android and iOS.
When I press a button, the app detects the location of the device (obviously with permissions already granted).
On Android everything works correctly.
On iOS, however, when I press the button for the first time after opening the app, the location is detected after about 30-50 seconds.
On the other hand, if I repeat the operation later, the response time is drastically reduced (only a few seconds).
I am using the location package (https://pub.dev/packages/location), and the code to get the location is as follows:
var currentLocation = await location.getLocation();
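For context, on iOS the location package ultimately goes through Core Location. Below is a rough, purely illustrative native-side sketch of a one-shot request with a reduced accuracy (which often shortens the time to the first fix); this is an assumption about what happens under the hood, not the plugin's actual code, and all names are hypothetical:

import CoreLocation

// Illustrative sketch only: roughly the kind of native call the plugin makes on iOS.
// Lowering desiredAccuracy usually shortens the time to the first fix, because
// Core Location can return a cell/Wi-Fi based position instead of waiting for a
// full GPS lock.
final class OneShotLocation: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private var completion: ((CLLocation?) -> Void)?

    func request(_ completion: @escaping (CLLocation?) -> Void) {
        self.completion = completion
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyHundredMeters // instead of "best"
        manager.requestLocation() // one-shot request; the delegate is called exactly once
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        completion?(locations.last)
        completion = nil
    }

    func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
        completion?(nil)
        completion = nil
    }
}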
Has anyone experienced this problem before or knows how to solve it?
Thank you very much!
Federico
I observed the following problem:
A scientific code for Reverse Monte Carlo simulations ('rmcxas'), compiled with the most up-to-date version of gcc/gfortran and Xcode (including the Command Line Tools) on macOS Sequoia 15.4 on a MacBook Air with an M4 processor, produces the following error when started in a terminal:
dyld[10154]: dyld cache '(null)' not loaded: syscall to map cache into shared region failed
dyld[10154]: Library not loaded: /usr/lib/libSystem.B.dylib
Referenced from: <0144F82E-003C-37A9-A544-9AE6336E549B> /Users/markuswinterer/bin/rmcxas
Reason: tried: '/usr/lib/libSystem.B.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/usr/lib/libSystem.B.dylib' (no such file), '/usr/lib/libSystem.B.dylib' (no such file, no dyld cache), '/usr/local/lib/libSystem.B.dylib' (no such file)
zsh: abort rmcxas
This occurs only about every 5th time the code is started.
Help would be highly appreciated.
Hello,
I recently enrolled in the Apple Developer Program and created an App ID with the bundle ID com.echo.eyes.voice.
I am trying to enable Speech Recognition in the App ID capabilities list, but the option does not appear — even after waiting over a week since my membership was activated.
I’ve already:
Confirmed my Apple Developer account is active
Checked the Identifiers section in the Developer portal
Tried editing the App ID, but Speech Recognition is not listed
Contacted both Developer Support and Developer Technical Support (Case #102594089120), but was told to post here for help
My app uses Capacitor + the @capacitor-community/speech-recognition plugin. I need the com.apple.developer.speech-recognition entitlement to appear so I can use native voice input in iOS.
I would really appreciate help from an Apple engineer or anyone who has faced this issue.
Thank you,
— Daniel Colyer
Hi,
I keep getting an error message when I try to publish my app, made with Rork, to Apple iOS.
Message reads
"Submission failed: Submission failed. Contact support. Don’t dump it on Rork — it won’t fix this. We still need a human."
Any help with this would be appreciated.
I am using Microsoft Edge and have also tried Google Chrome.
Hello Team!
We recently cleaned up the profiles and renewed the certificates under our developer account, and now the profile expiration date shows as invalid; it is supposed to show the certificate expiration date. Because of this I am not able to update or download profiles. Has anyone experienced this? What would be the solution?
Thanks,
Kumar.
Hi everyone,
I'm developing a Capacitor plugin to display an RTSP video stream using MobileVLCKit on iOS. The Android side works perfectly, but I can’t get the iOS plugin to work — it seems my Swift file is not being detected or recognized, even though I’ve followed the official steps.
What works:
I followed the Capacitor Plugin Development Guide.
I implemented the Android version of the plugin in Java inside the android/ folder. Everything works perfectly from Angular: the plugin is recognized and calls execute correctly.
The issue on iOS:
I implemented the iOS part in Swift, using the official MobileVLCKit documentation.
I initially placed my RtspVlcPlugin.swift file in the plugin’s iOS folder, as the docs suggest.
Then I moved it directly into the main app’s ios/App/App/ folder next to AppDelegate.swift and tried manual registration.
The problem:
Even though I manually register the plugin with:
if let bridge = self.window?.rootViewController as? CAPBridgeViewController {
    bridge.bridge?.registerPluginInstance(RtspVlcPlugin())
    print("✅ Plugin RtspVlcPlugin registered manually.")
}
It prints the registration message just fine.
BUT from Angular, the plugin is not recognized: Capacitor.Plugins.RtspVlcPlugin has no methods, and I get this error:
"code":"UNIMPLEMENTED"
I also tried declaring @objc(RtspVlcPlugin) and extending CAPPlugin.
I’ve verified RtspVlcPlugin.swift is added to the target and compiled.
The Swift file doesn’t seem to register or expose any methods to Angular.
I even tried adding the code without using a plugin at all — just creating a Swift class and using it via the AppDelegate, but it still doesn't expose any callable methods.
My Swift code (RtspVlcPlugin.swift):
import Capacitor
import MobileVLCKit

@objc(RtspVlcPlugin)
public class RtspVlcPlugin: CAPPlugin, VLCMediaPlayerDelegate {
    var mediaPlayer: VLCMediaPlayer?
    var containerView: UIView?
    var spinner: UIActivityIndicatorView?

    @objc func iniciar(_ call: CAPPluginCall) {
        guard
            let urlStr = call.getString("url"),
            let x = call.getDouble("x"),
            let y = call.getDouble("y"),
            let w = call.getDouble("width"),
            let h = call.getDouble("height"),
            let url = URL(string: urlStr)
        else {
            call.reject("Missing parameters")
            return
        }
        DispatchQueue.main.async {
            self.containerView?.removeFromSuperview()
            let cont = UIView(frame: CGRect(x: x, y: y, width: w, height: h))
            cont.backgroundColor = .black
            cont.layer.cornerRadius = 16
            cont.clipsToBounds = true
            let sp = UIActivityIndicatorView(style: .large)
            sp.center = CGPoint(x: w/2, y: h/2)
            sp.color = .white
            sp.startAnimating()
            cont.addSubview(sp)
            self.spinner = sp
            self.containerView = cont
            self.bridge?.viewController?.view.addSubview(cont)
            let player = VLCMediaPlayer()
            player.delegate = self
            player.drawable = cont
            player.media = VLCMedia(url: url)
            self.mediaPlayer = player
            player.play()
            call.resolve()
        }
    }

    @objc func cerrar(_ call: CAPPluginCall) {
        DispatchQueue.main.async {
            self.mediaPlayer?.stop()
            self.mediaPlayer = nil
            self.spinner?.stopAnimating()
            self.spinner?.removeFromSuperview()
            self.spinner = nil
            self.containerView?.removeFromSuperview()
            self.containerView = nil
            call.resolve()
        }
    }

    public func mediaPlayerStateChanged(_ aNotification: Notification!) {
        guard let player = mediaPlayer,
              player.state == .playing,
              let sp = spinner else { return }
        DispatchQueue.main.async {
            sp.stopAnimating()
            sp.removeFromSuperview()
            self.spinner = nil
        }
    }
}
In the Angular project, I’m using the plugin by manually registering it with registerPlugin from @capacitor/core. Specifically, in the service where I need it, I do the following:
import { registerPlugin } from '@capacitor/core';
const RtspVlcPlugin: any = registerPlugin('RtspVlcPlugin');
After this, I try to call the methods defined in the iOS plugin, like RtspVlcPlugin.iniciar({ ... }), but I get an UNIMPLEMENTED error, which suggests that the plugin is not exposing its methods properly to the Angular/Capacitor environment. That makes me believe the problem lies in how the Swift file is integrated or registered, rather than how it is used from Angular.
I’d appreciate any guidance on how to properly expose a Swift-based Capacitor plugin’s methods so that they are accessible from Angular. Is there any additional registration step or metadata I might be missing on the iOS side?
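For reference, in recent Capacitor versions (5+) a Swift plugin typically also has to declare its methods to the bridge; below is a minimal, hedged sketch of that declaration (assuming Capacitor 5+; older versions declare methods with the Objective-C CAP_PLUGIN macro in an .m file instead, and the exact protocol requirements may differ by version):

import Capacitor

// Minimal sketch, assuming Capacitor 5+. The bridge only exposes methods that the
// plugin declares; without such a declaration (or the CAP_PLUGIN macro in older
// versions), JavaScript calls can come back as UNIMPLEMENTED.
@objc(RtspVlcPlugin)
public class RtspVlcPlugin: CAPPlugin, CAPBridgedPlugin {
    public let identifier = "RtspVlcPlugin"   // native identifier
    public let jsName = "RtspVlcPlugin"       // name passed to registerPlugin() in JS
    public let pluginMethods: [CAPPluginMethod] = [
        CAPPluginMethod(name: "iniciar", returnType: CAPPluginReturnPromise),
        CAPPluginMethod(name: "cerrar", returnType: CAPPluginReturnPromise)
    ]

    @objc func iniciar(_ call: CAPPluginCall) { call.resolve() } // real implementation elided
    @objc func cerrar(_ call: CAPPluginCall) { call.resolve() }  // real implementation elided
}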
I can see that a macOS VM guest running on top of an Apple Silicon macOS host has GPU acceleration, indicating GPU sharing capabilities for the hardware.
Is there also a way to have GPU acceleration in Linux guests (with Vulkan/Mesa drivers)?
I'm trying to rewrite an old AppleScript mail rule that I used extensively as a Mail extension using the MailKit framework and I've run into an issue.
Previously, when developing the script, it was possible to debug it by selecting the message I wanted it applied to and choosing the Mail.app menu item "Message/Apply Rules".
This would re-execute my script and I could iterate over it as many times as I liked while developing.
I haven't found any great way of doing this for my extension with a MEMessageActionHandler. The closest I've found is to forward the message to myself and wait for it to come back in again over the internet, at which point the extension gets executed again. Needless to say, this makes debugging my MEMessageActionHandler much slower.
I've tried a number of things in Mail.app to try and get it to re-execute my extension with a particular message without any luck. Does anyone know of a good process for debugging a MEMessageActionHandler that doesn't involve forwarding the message to myself over and over and waiting for it to come in each time?
When the power button is pressed to turn off the alarm while the screen is locked, stopIntent will not be called.
Hi,
I have an installer package that runs a postinstall script. The script can take a long time to complete, as one thing it does is copy about 10-30 GB of files using the rsync tool.
We noticed on macOS 15 that the installer would fail almost exactly 10 minutes after it started. Looking in the /var/log/install.log, I see a message like this:
2025-07-01 12:54:32-07 Work-M1 package_script_service[21562]: PackageKit: Terminating PKInstallTask(pid:21573). Task has exceeded its 600 seconds of runtime.
This does not happen in my testing on macOS 12 (Monterey).
I have a few questions about this:
A) Is this documented, and which OS introduced this?
B) Is there a way a developer can extend or disable the time limit via a setting in the installer package? Or, if not, is there a way an end user can disable it temporarily on their system?
Thanks, Andrew
OS: macOS 15.5
CPU: Apple M1 Pro
In a zsh terminal, running the python or pip command reports "command not found". Running python3 or pip3 should also report "command not found", but instead an "Install Command Line Developer Tools" dialog pops up offering to install them. According to what I found online, deleting the /usr/bin/python3, /usr/bin/pip3, /usr/local/bin/python3 and /usr/local/bin/pip3 files should give the expected behavior, but I do not have permission to delete /usr/bin/python3 and /usr/bin/pip3. I have tried the root account and temporarily disabling SIP from recovery mode, and neither works. 🙏 Any pointers would be appreciated.
My experience with Swift 6 strict concurrency so far doesn't match my understanding of implicit MainActor isolation semantics.
This is a cross-post from StackOverflow. I will link answers between both forums.
TL;DR
Build succeeds when testing a struct declared in the test module, but fails when the struct is moved to the main module:
Main actor-isolated property … cannot be accessed from outside the actor.
Steps to reproduce
Open up Xcode 26 beta 2 on macOS 26 (probably also ok on current stables).
Create a new Swift app with Swift testing, no storage. Call it WhatTheSwift.
Set the Swift Language Version on all three targets to Swift 6.
Update the default test file to be this:
import Testing
@testable import WhatTheSwift

struct WhatTheSwiftTests {
    @Test func example() async throws {
        let thing = Thing(foo: "bar")
        #expect(thing.foo == "bar")
    }
}

struct Thing {
    let foo: String
}
That should build fine, and the tests should pass.
Now, move the Thing declaration into its own Thing.swift file in the WhatTheSwift module, and try running the test again. The build should now fail with the error quoted above.
Observations
Marking the test @MainActor allows the test to pass, suggesting the compiler actually wants to isolate Thing.foo to the main actor.
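In code, that observation is just one extra attribute on the test (assuming Thing now lives in the main module):

import Testing
@testable import WhatTheSwift

struct WhatTheSwiftTests {
    // Isolating the test to the main actor makes the access to Thing.foo compile.
    @Test @MainActor func example() async throws {
        let thing = Thing(foo: "bar")
        #expect(thing.foo == "bar")
    }
}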
My question
Why? And why only when Thing is in a different module?
I want to test different types of journaling suggestions, for example ones containing photos or videos, but as a developer I have no control over what's shown in the Journaling Suggestions sheet. This makes it really difficult to develop or test any suggestion type other than "reflection". Please let us have a fake suggestions sheet that includes all kinds of suggestion types so we can test; it could be toggled under Developer settings.
Looking for a dynamic table that displays the latest supported CLI version for each version of macOS.
Specifically, is CLI 15.3 supported on Ventura 13.7.8?
More generally, what is the latest version of the CLI supported on macOS <version_goes_here>?
Is this a valid thing to include in the Info.plist file?
If so, is a category of public.app-category.astronomy valid? I couldn't find that value, but the categories I did find seemed very limited.
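Assuming the key in question is LSApplicationCategoryType (the original snippet is not shown here), the entry usually takes roughly the following form; the value below is just a placeholder from the documented category list, not a recommendation, and whether an astronomy value exists is exactly the open question:

<!-- Illustrative Info.plist entry; placeholder value only. -->
<key>LSApplicationCategoryType</key>
<string>public.app-category.reference</string>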
Hi Guys,
I want to help my clients enable Developer Mode, but they will not accept connecting their devices to any other device (a Mac with Xcode) to enable it.
There are nearly 10 people who need to enable Developer Mode, and I believe that on some devices it cannot be enabled without a Mac. So I need clarification about iOS versions: I only need a list of which iOS versions do not show the Developer Mode option by default. Please list those iOS versions,
like below:
Developer Mode available by default: iOS 17.4.1
Developer Mode not available by default: iOS 17.5.1
Is anyone seeing flaky results when using parameterized test with pairs of (input, result) data?
I have several (5) tests for a given method. I create a zip sequence by zip([in1, in2, in3, in4, in5],[out1, out2, out3, out4, out5])
Sometimes only 4 of the 5 cases run, and no failure is reported even though I deliberately include data that should cause a failure.
Sometimes, even though I only select one test to run, the test explorer goes crazy into a loop and I have to clear test results to get it to stop. Following a suggestion, I disabled running tests in parallel.
Xcode 16.2 / OSX 14.7 (Sonoma) / Mac mini M2 Pro
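For reference, a minimal self-contained sketch of the shape of test I'm describing (the data and assertion here stand in for the real inputs, outputs and method under test):

import Testing

// Hedged sketch of a zip-based parameterized test: each (input, expected) pair
// should produce one independent test case.
@Test(arguments: zip(["a", "bb", "ccc", "dddd", "eeeee"], [1, 2, 3, 4, 5]))
func lengthMatchesExpectation(input: String, expected: Int) {
    #expect(input.count == expected)
}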
Hello,
currently I am having trouble releasing an app because it crashes/does not launch on iOS 26.0.1. We have uploaded apps in the past so I tried building one of them with our current toolchain. I use Xcode 16.4, Kotlin version 2.0.0, LibGDX 1.13.1 and robovm/MobiVM 2.3.23. I uploaded the build to TestFlight and tested with physical devices running iOS 18.5 and 26.0.1. It runs fine on 18.5 but refuses to launch on the 26.0.1 device. I cannot retrieve a crash log or .ips file because none is written. When I write a Console log while the app crashes/does not launch I get no hints as to why it does so.
Do you maybe have additional ideas as to why it keeps not launching on iOS 26.0.1?
My Xcode project has the following configuration:
1 iOS app target
1 Xcode framework target (Mach-O type "Dynamic Library")
5 static libraries
Dependencies:
All the static libraries are target dependencies of the framework.
The framework is the only target dependency of the iOS app.
For the iOS app target, within the General tab > Frameworks, Libraries & Embedded content, I've set the framework as "Do not embed"
So now I have a dynamic framework which won't be copied to the .app bundle in the build output.
As per my understanding, this should result in a runtime error: dyld should not be able to find the framework, as it was not embedded in the final .app bundle.
But regardless, my app runs without any errors, using all the methods exposed by the framework.
What is the correct understanding here?
What exactly does Embed/Do not embed mean (apart from excluding the files from the .app bundle)?
When both settings are specified, is there any priority or precedence of one setting over the other?
I am working on a Swift package which uses CoreAudio, and includes some tests in a testTarget which use the Testing framework, and a couple of executableTarget targets which exercise the same code. I'm using Xcode 16.2 on macOS 15.3.1.
One of the things I do in the test code is create a HAL plugin, then find that plugin using the kAudioHardwarePropertyTranslateUIDToDevice property.
Finding the plugin that I just created always fails from within a Swift Testing test, unless I run the test which creates the plugin individually first, then separately, run the test which finds the plugin, by clicking on the little arrows next to the function names.
If I put the tests in a serialized suite (so creation always happens first, then finding), running the suite always fails - it creates the plugin, but can't find it. If I run the 'find my plugin' test again manually, it is always found.
If I call the same functions from a regular executable (the thing created by an "executableTarget" in my Package.swift file), the just-created plugin is always found.
Is there a way to mimic the runtime environment of a regular executable in a Swift Testing target, or am I misunderstanding something?
This may be related to this issue: https://github.com/swiftlang/swift/issues/76882, but I don't understand it well enough to be sure.
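For context, the UID-to-device translation described above is done roughly like the sketch below (the helper name and error handling are illustrative, not the project's actual code):

import CoreAudio

// Hedged sketch: translate a device UID into an AudioObjectID by asking the
// system object for kAudioHardwarePropertyTranslateUIDToDevice, passing the
// UID as qualifier data.
func deviceID(forUID uid: String) -> AudioObjectID? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyTranslateUIDToDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain
    )
    var cfUID = uid as CFString
    var deviceID = kAudioObjectUnknown
    var size = UInt32(MemoryLayout<AudioObjectID>.size)
    let status = withUnsafeMutablePointer(to: &cfUID) { uidPtr in
        AudioObjectGetPropertyData(
            AudioObjectID(kAudioObjectSystemObject),
            &address,
            UInt32(MemoryLayout<CFString>.size), // qualifier size: one CFStringRef
            uidPtr,                              // qualifier data: the UID to translate
            &size,
            &deviceID
        )
    }
    return (status == noErr && deviceID != kAudioObjectUnknown) ? deviceID : nil
}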