Welcome to the Apple Developer Forums

Post your questions, exchange knowledge, and connect with fellow developers and Apple engineers on a variety of software development topics.

For questions about using Apple hardware and services, visit Apple Support Communities

Posts

Post not yet marked as solved
0 Replies
1 Views
Using the DeviceActivity framework, we are able to display data based on a user's Screen Time and device usage. With the DeviceActivityFilter property, you can specify the date interval to collect data between. In testing, it seems that data only becomes accessible once the extension has been installed (so the extension isn't reading the Screen Time data already collected on the device). Once installed, however, how far back can you query data within the date interval? Opal, which uses the Screen Time API, appears to have a lifetime Screen Time metric, so hypothetically it should be possible to query data as far back as collection starts, unless they are getting around the sandbox environment and storing the data somehow. Side note on Opal: they seem to have a community average of Screen Time among people in the same age group. Does anyone know how they are collecting the data for this average? Is it actually using live Screen Time data, or just aggregating data from other studies?
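For context, here is a minimal sketch of the kind of filter in question, assuming a SwiftUI host view, a hypothetical "Total Activity" report context, and an illustrative seven-day window; whether the framework actually returns data that far back is exactly the open question:

```swift
import SwiftUI
import DeviceActivity

struct ActivityHistoryView: View {
    // Illustrative seven-day window; how far back this can reach is what's being asked.
    var filter: DeviceActivityFilter {
        let end = Date()
        let start = Calendar.current.date(byAdding: .day, value: -7, to: end)!
        return DeviceActivityFilter(
            segment: .daily(during: DateInterval(start: start, end: end)),
            users: .all
        )
    }

    var body: some View {
        // The report extension renders the data for this context and filter.
        DeviceActivityReport(DeviceActivityReport.Context("Total Activity"), filter: filter)
    }
}
```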
Post not yet marked as solved
1 Replies
15 Views
Anyone else notice this Greek letter misspelled in the keycaps app? I thought I was spelling it wrong, but maybe not?
Post not yet marked as solved
0 Replies
12 Views
The options to control how files open in tabs are confusing. How do I change the settings so that when I click or double-click, files open in a new tab? And by tab (apparently I now have to distinguish which type of tab), I mean the tabs that are part of the editor area. I DON'T WANT THESE TABS; I WANT EVERY FILE TO OPEN IN A SEPARATE TAB. HERE ARE MY SETTINGS; I've tried several combinations here, no luck.
Post not yet marked as solved
0 Replies
11 Views
Hello everyone, this is my first post. I have a question: I understand that it is possible to generate NFC passes and add them to Apple Wallet after obtaining the Apple certificate. Apple asks which physical reader is compatible for reading the pass, but is it possible to use an Android or iOS mobile application to read the pass? Have a nice day. Kind regards,
Post not yet marked as solved
0 Replies
10 Views
Hi! I won the Swift Student Challenge this year, and I wonder when the awards will arrive (so hyped for that!), as I've seen some unboxing videos and posts on the internet. Any clues? Thanks!
Post not yet marked as solved
0 Replies
21 Views
We are in App Store review hell. The current iOS release is 11 versions behind. Every time it goes through App Store review, one of two things happens: the reviewers don't tap the 'dev' environment despite clear instructions and a video, or they exhaust the SMS login limit by trying to log in over 15 times in 2 minutes, which gets the device blocked, so we get a 'reset'. Not sure what to do now; this is extremely annoying.
Post not yet marked as solved
0 Replies
12 Views
I recently spent several days experimenting in Reality Composer with the intention of creating a course we could teach to students to get them started using Augmented Reality to tell stories and play with digital assets. We would use it in combination with other apps: TinkerCad to teach modeling, Voice Memos so they can record dialogue and interaction sounds, iMovie to edit a demo reel of their application, plus online asset libraries like Sketchfab, which has .usdz files and even some animations available for free. The focus would be on creating an interactive application that works in AR. Here are some notes I took while trying things out.

UI Frustrations:
- The behaviors tab doesn't go up far enough; I'd like to be able to drag it to take up more space on the screen. Several of the actions have sections that go just below the edge of the screen, and it's frustrating to have to constantly scroll to see all the information. I'll select an "Ease Type" on the Move, Rotate, Scale To action and buttons will appear at the very edge of my screen in such a way that I can't read them until I scroll down. This happens for so many different Actions that it feels like I don't have enough space to see all the necessary information.
- Audio importing from the content library is difficult to navigate. First, I wish there were a way to import a single sound instead of having to import an entire category of sounds. Second, it would be nice to see all the categories of sounds in some kind of sidebar, similar to the "Add Object" menu that already exists.
- I wish there were a way to copy and paste position and rotation vectors easily so we could make sure objects are in the same place, especially if we need to duplicate objects in order to get a second-tap implementation. Currently you have to keep flipping back and forth between objects to get the numbers right.
- Is there a way to see all of the behaviors a selected object is referenced in? Since the "Affected Objects" list is inside all sorts of behaviors, actions, triggers, etc., it can be hard to find exactly where a behavior is coming from, especially if your scene has a lot of behaviors. I come from a Unity background, so I'm used to behaviors being attached directly to the object itself; not knowing which behaviors reference a given object makes it possible for my physics response to be accidentally overwritten by an animation triggered from somewhere else, leaving me to search for it through all of the behaviors in my scene.
- Is there a way to see the result of my object scanning? Right now it's all behind the scenes, and it feels like object scanning doesn't work unless the object is in the same position relative to the background as it was before. It's a black box, and it's hard to understand what I'm doing wrong with the scanning, because when I move the object everything stops working.
- I could use a scene hierarchy or a list of all objects in a scene. Sometimes I don't know where an object is but I know what it's called, and I'd like to be able to select it to make changes to it. Sometimes objects start right on top of each other in the scene (like a reset button for a physics simulation), which makes it frustrating to select one over the other, especially since the only way to select "affected objects" seems to be tapping on them, instead of choosing from a list of those available in the scene.
Feature Requests:
- One thing other apps have that makes it easy to add personality to a scene is characters with a variety of animations they can play depending on context. It would be nice to have some kind of character creator that came with pre-made animations, or at least a library of characters with animations. For example, if we want a non-player character that waves to the player, then moves somewhere else, and talks again, we could switch the character's animation at the appropriate parts of the movement to make it feel more real. This is harder with .usdz files that only play one animation; although the movement is cool, it typically fits only one setting, so you have to juggle turning a bunch of objects off and on, even if you do find an importable character with a couple of animations (such as you might find on Mixamo in .fbx format). I believe it may be possible for a .usdz file to contain more than one animation, but I haven't seen any examples of this.
- Any chance we'll get a version of the Reality Converter app that works on iPads and iPhones? We don't want to assume our students have access to a MacBook, and being able to convert .fbx or .obj files would open up access to a wider variety of downloadable assets.
- Something that would really help with more complex scenes is the ability to add a second trigger to an object that relies on a condition being met first. The easiest example is being able to tap a tour guide a second time to move on to the next object. This gets a little deeper into code blocks, but perhaps there could be an if block or a condition statement that checks whether something is in proximity before allowing a tap, or checks how many times the object has been tapped by storing the count in an integer variable you could set and check. The way I first imagined it, maybe you'd be able to add a Trigger that enables AFTER the first action sequence has completed, so you can build longer chains. This also comes into play with physics interactions: say I want to tap a ball to launch it, but when it slows below some speed I want it to automatically reset.
- I'd like the ability to make one object follow another, or some kind of system similar to "parenting" objects together like you can in a 3D engine. This way you could separate an object's visuals from its physics, letting you play animations on launched objects, spin them, and emphasize them while still allowing the physics simulation to work.
- For physics simulations, would it be possible to have the direction of force point toward another object in the scene? Or better yet, away from it using negative values? Specifically, I'd like to launch a projectile in the direction the camera is facing, or give the user some control over the direction at play time.
- It would be nice to edit an object's material or color with an action: give the user a little pulse of color as tap feedback, or even let the user customize their environment with different textures.
- If you want this app to be used in education, there must be a way for teachers to share their built experiences with each other, some kind of online repository where you can try out what others have made.
Post not yet marked as solved
0 Replies
8 Views
I am developing an application with an active ARSession running ARWorldTrackingConfiguration, in which I set planeDetection to one of the modes, or both as [.horizontal, .vertical]. This is done via a button press and a dropdown menu: the dropdown chooses between horizontal, vertical, or everything, and the button toggles plane tracking entirely (sets planeTracking = []). When a mode change or toggle happens, I want to print a correct list of anchors to the console, because I will use that list later on. When a mode change occurs, I perform the steps below inside a switch:

```swift
switch planeTrackingMode {
case .horizontal:
    configuration.planeDetection = [.horizontal]
    for anchor in (planeAnchors ?? []) {
        if (anchor as? ARPlaneAnchor)?.alignment == .vertical {
            session?.remove(anchor: anchor)
        }
    }
case .vertical:
    configuration.planeDetection = [.vertical]
    for anchor in (planeAnchors ?? []) {
        if (anchor as? ARPlaneAnchor)?.alignment == .horizontal {
            session?.remove(anchor: anchor)
        }
    }
case .horizontalAndVertical:
    configuration.planeDetection = [.horizontal, .vertical]
default:
    configuration.planeDetection = []
    for anchor in (planeAnchors ?? []) {
        if anchor is ARPlaneAnchor {
            session?.remove(anchor: anchor)
        }
    }
}
session?.run(configuration, options: [])
```

The button toggle performs the same steps as the default case. The issue is that after disabling or removing vertical anchors, the previous anchors are still in memory with the same UUIDs. Even if I disable via the button, turn the phone so the camera can't see the vertical plane, and then turn tracking back on, the vertical plane is still there. When I log right after deletion, the list of anchors looks fine. But when I print from the session(_ session: ARSession, didUpdate anchors: [ARAnchor]) delegate function, I see that none of the previous vertical plane anchors were deleted, and some of them start spinning and drifting away from their original positions; eventually their positions turn into NaN values. I tried getting the list of existing anchors both with currentFrame.anchors and with the session.getCurrentWorldMap function. The results are the same. There is no ARSCNView or anything, just an ARSession, its camera, and a Metal view to render things. Even session?.run(configuration, options: [.removeExistingAnchors]), which should remove all anchors, shows the same behaviour. If ARSession is doing some cache magic, it seems to be doing it wrong. If anyone can replicate this, please let me know whether this is a bug or not. The behaviour is the same on iPhone 12 Pro Max and iPad Pro 6th Gen, with both iOS 15.* and 16.* versions.
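For anyone trying to reproduce this, here is a minimal sketch of the delegate logging described above, assuming the logging object is set as the session's ARSessionDelegate:

```swift
import ARKit

// Log every plane-anchor update so orphaned vertical anchors
// (and any drifting or NaN transforms) become visible in the console.
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    for plane in anchors.compactMap({ $0 as? ARPlaneAnchor }) {
        let position = plane.transform.columns.3
        print("plane \(plane.identifier) alignment=\(plane.alignment.rawValue) position=\(position)")
    }
}
```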
Post not yet marked as solved
0 Replies
9 Views
Hello Apple Developer Community, I am currently exploring the Screen Time API and its potential for creating a parental control-style application. However, I have a slightly different use case in mind that I need some guidance on. I am wondering if it is possible to use the Screen Time API to create a monitoring application for a consenting adult, instead of the traditional parent-child scenario provided by the Family Sharing setup. For example, if a friend of mine wants to apply an internet filter on their iPhone and have me monitor it, can I create an app to do so? To elaborate, I'm envisioning a setup similar to how the 'Find My Friends' app allows us to locate our friends (who are not necessarily a part of our family) after obtaining their consent. Is it possible to leverage the capabilities of the Screen Time API to create a 'monitoring' app on my device that can track and control aspects of my friend's device usage without having to engage the Family Sharing and Family Controls frameworks? I understand that privacy and consent are paramount in such a situation, but this is a scenario between two consenting adults. This is purely for the purpose of assisting my friend in managing their digital habits more effectively. I appreciate any insights or advice that the community can provide on this topic. Thank you in advance!
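Worth noting on the framework side: FamilyControls on iOS 16 and later does expose an individual (non-family) authorization mode, though it authorizes restrictions on the local device only; remote monitoring from a friend's device is a separate problem. A minimal sketch, assuming iOS 16:

```swift
import FamilyControls

// Request Screen Time authorization for the device's own (consenting adult) user,
// with no Family Sharing relationship involved. Requires iOS 16 or later.
func requestIndividualAuthorization() async {
    do {
        try await AuthorizationCenter.shared.requestAuthorization(for: .individual)
        print("Screen Time authorization granted for individual use")
    } catch {
        print("Screen Time authorization failed: \(error)")
    }
}
```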
Post not yet marked as solved
1 Replies
23 Views
This may be a bit high-level, but I wanted some clarification, as I can't be sure from reading the documentation. I am considering adding paid subscriptions to my app at some point in the future. Initially, though, I would like the app to simply be free to use. Even though I won't be charging subscriptions right away, would it make sense to implement the mechanics for subscriptions into the app right away for each user (and simply have the subscription be $0/month, for example)? Or will it be easy enough in the future to add the option of a paid subscription and integrate it seamlessly with existing users? To provide some additional context, I don't want to implement any sort of custom account creation/registration logic in my app (i.e., like a social network where each user creates a username and password); I would simply like to utilize Apple's existing Apple ID infrastructure to eventually handle payments and subscriptions. Is this something I should reconsider?
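For what it's worth, with StoreKit 2 an entitlement check like the one below is usually all that ties subscription state to a user; this is a minimal sketch, assuming a hypothetical product identifier that would be configured later in App Store Connect:

```swift
import StoreKit

// Hypothetical product identifier; the real one is defined in App Store Connect.
let subscriptionID = "com.example.app.monthly"

/// Returns true if the user currently has a verified, active subscription.
func hasActiveSubscription() async -> Bool {
    for await result in Transaction.currentEntitlements {
        if case .verified(let transaction) = result,
           transaction.productID == subscriptionID {
            return true
        }
    }
    return false
}
```

Because entitlements are tied to the user's Apple ID rather than to any custom account system, existing users would simply start returning false here until they subscribe, so no registration logic is required either way.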
Post not yet marked as solved
1 Replies
26 Views
Hello, I work with the Poetry Festival of Medellín, an annual one-week event with poets from all around the world doing readings throughout the city. We would like to create an app to serve as a guide to the reading dates and locations, with info about the poets and their poems. Is such an app eligible for distribution through the App Store? Thanks.
Post not yet marked as solved
0 Replies
22 Views
I'm currently developing an app that requires detecting Bluetooth connections and disconnections in cars. During testing, I've observed the following behavior: in certain vehicles, only a Bluetooth connection via the car's hands-free system is available. In these cases, the device initiates a call to itself, which is then displayed on the vehicle's infotainment system. In some of the tested vehicles, this self-call is brief and only occurs while the device is connecting or disconnecting. In other vehicles, however, the self-call remains visible for the entire duration of the device's pairing with the car's Bluetooth system. This self-call blocks the entire infotainment system and causes the connection/disconnection observers in my app to stop functioning as expected. I'm looking for a solution or preventative measures to address this issue. Any guidance would be greatly appreciated. Here is a snippet of my code:

```swift
func audioSessionSetup() {
    do {
        resetAudioSession()
        let audioOptions: AVAudioSession.CategoryOptions = [.duckOthers, .allowBluetooth, .defaultToSpeaker]
        try audioSession.setCategory(.playAndRecord, mode: .spokenAudio, options: audioOptions)
        registerNotifications()
        try audioSession.setActive(true)
        print("audioSession is active")
    } catch let error as NSError {
        print("Failed to set the audio session category and mode: \(error.localizedDescription)")
    }
}

/// Reset the audio session to deactivate it.
func resetAudioSession() {
    do {
        try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
    } catch let error as NSError {
        print("Failed to reset the audio session, error: \(error.localizedDescription)")
    }
}

@objc func handleRouteChange(_ notification: Notification) {
    guard let userInfo = notification.userInfo,
          let reasonValue = userInfo[AVAudioSessionRouteChangeReasonKey] as? UInt,
          let reason = AVAudioSession.RouteChangeReason(rawValue: reasonValue) else { return }
    switch reason {
    case .newDeviceAvailable:
        // Handle new device connection
        print("New device connected.")
        checkConnectionForSelectedOutput(notification)
    case .oldDeviceUnavailable:
        // Handle device disconnection
        print("Device disconnected.")
        handleLocationServices(state: false)
    default:
        print("break")
        handleCategoryChange(notification)
    }
}

private func handleCategoryChange(_ notification: Notification) {
    if let connectedDeviceName = getConnectedBluetoothDeviceName() {
        if connectedDeviceName != connectedDevice && connectedDeviceName == BluetoothUtils.getBluetoothInfo().portName {
            connectedDevice = connectedDeviceName
            checkConnectionForSelectedOutput(notification)
        }
    } else {
        audioSessionSetup()
        checkConnectionForSelectedOutput(notification)
        print("handleRouteChange audio session is active")
    }
}
```
Post marked as solved
2 Replies
22 Views
I would like to know how to make a button that copies text. Example: when I touch the button, the text/link apple.com is copied. The user can then paste the text/link in Safari.
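A minimal sketch of the usual approach in UIKit, where UIPasteboard is the system clipboard (the controller and action names are illustrative):

```swift
import UIKit

class LinkViewController: UIViewController {
    // Wire this action to the button's touch-up-inside event.
    @IBAction func copyLinkTapped(_ sender: UIButton) {
        UIPasteboard.general.string = "https://apple.com"
        // The user can now paste the link anywhere, e.g. Safari's address bar.
    }
}
```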
Post not yet marked as solved
0 Replies
31 Views
When trying to use 'modern' collection views (DiffableDataSource + CompositionalLayout + list sections), I've found a bug that seems unique to iOS 16. When reordering cells on a pagesheet or formsheet (essentially any non-fullscreen presentation mode), cells start to disappear and behave unpredictably. For example, if you download the sample code from https://developer.apple.com/documentation/uikit/views_and_controls/collection_views/implementing_modern_collection_views and just change line OutlineViewController:191 from navigationController?.pushViewController(viewController.init(), animated: true) to self.present(viewController.init(), animated: true), you will see that at some point during reordering, a cell disappears. When implementing drag & drop manually, the same thing happens (and it seems to happen even more often). I found an example of a custom reorder implementation on GitHub, and everything worked perfectly in fullscreen, but on a pagesheet the same bug occurs. Is there any workaround, or is drag & drop completely impossible on a modal screen as of iOS 16? The only fix I know of is to downgrade to UITableView, which has no such bug.
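For reference, a minimal sketch of the diffable reordering setup where this reproduces, assuming hypothetical Section and Item identifier types:

```swift
import UIKit

// Hypothetical identifier types for illustration.
enum Section: Hashable { case main }
struct Item: Hashable { let id: UUID }

func configureReordering(on dataSource: UICollectionViewDiffableDataSource<Section, Item>) {
    // Allow every item to be picked up and reordered interactively.
    dataSource.reorderingHandlers.canReorderItem = { _ in true }
    // Persist the new order once the interactive reorder completes.
    dataSource.reorderingHandlers.didReorder = { transaction in
        let newOrder = transaction.finalSnapshot.itemIdentifiers
        print("reordered, now \(newOrder.count) items")
    }
}
```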
Post not yet marked as solved
0 Replies
25 Views
Hello, is there an official and always up-to-date URL where I can get a list of all released OS versions (iOS, macOS, iPadOS, etc.) with release dates, as XML, JSON, or another machine-readable file format? I need this list to manage and monitor our corporate managed devices in different update rings. Thanks for any help.
Post not yet marked as solved
0 Replies
21 Views
I'm experimenting with getting my AUv3 plugins working correctly on iOS and macOS using Catalyst. I'm having trouble getting the plugin windows to look right in Logic Pro X on macOS. My plugin is designed to look right in GarageBand's minimal 'letterbox' layout (1024x335, roughly a 3:1 aspect ratio). I have implemented supportedViewConfigurations to help the host choose the best display dimensions. On iOS this works, although Logic Pro for iPad doesn't seem to call supportedViewConfigurations at all; only GarageBand does. On macOS, Logic Pro does call supportedViewConfigurations, but it only provides oversized screen sizes, making the plugin look awkward. I can also remove the supportedViewConfigurations method on macOS, but this introduces other issues. I guess my question boils down to this: how do I tell Logic Pro X on macOS what the optimal window size of my plugin is, using Mac Catalyst?
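For reference, a minimal sketch of the override in question, assuming an AUAudioUnit subclass; the aspect-ratio test is illustrative:

```swift
import AudioToolbox
import CoreAudioKit

class MyAudioUnit: AUAudioUnit {  // hypothetical AUAudioUnit subclass
    override func supportedViewConfigurations(
        _ availableViewConfigurations: [AUAudioUnitViewConfiguration]
    ) -> IndexSet {
        var supported = IndexSet()
        for (index, configuration) in availableViewConfigurations.enumerated() {
            // A width or height of 0 means the host lets the plugin use any size.
            if configuration.width == 0 || configuration.height == 0 {
                supported.insert(index)
                continue
            }
            // Otherwise accept only sizes near the designed ~3:1 letterbox layout.
            let aspect = configuration.width / configuration.height
            if (2.5...3.5).contains(aspect) {
                supported.insert(index)
            }
        }
        return supported
    }
}
```

Note that this only lets the plugin vote among configurations the host offers; if Logic Pro only offers oversized configurations, no override can shrink the window, which seems to be the heart of the question.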
Post not yet marked as solved
2 Replies
70 Views
Hi all, we've been waiting for this to work, but App Store Connect is still showing "Something went wrong. Try again later." It's been 12 hours since we noticed this message. Has anyone experienced this? Do you have any solution, or is the "Sandbox Testers" dashboard under maintenance? Does this affect [Sandbox] in-app purchases and subscriptions?
Post not yet marked as solved
0 Replies
14 Views
When I try to install pods after this command:

```
$ sudo gem install cocoapods
```

I get "zsh: command not found" (macOS 13.4). I tried some other commands, but no luck:

```
$ sudo gem install -n /usr/local/bin cocoapods
$ sudo gem install cocoapods -V
$ gem update --system
```
Post not yet marked as solved
0 Replies
21 Views
We have a requirement to keep the user in the browser until they complete onboarding. Is there any way to keep the user in the browser even if the app is installed? The issue: once the app comes to the foreground, we can manually check and open the same link in the browser, but in that case the app opening and then bouncing back to the browser gives a bad user experience.
