Post · Replies · Boosts · Views · Activity

tvOS Accessibility: How to enable accessibility focus on static text and custom views
Hi guys, I'm trying to add accessibility labels to static text and custom SwiftUI views. Example:

```swift
MyView { ... }
    //.accessibilityElement()
    .accessibilityElement(children: .combine)
    //.accessibilityRemoveTraits(.isStaticText)
    //.accessibilityAddTraits(.isButton)
    .accessibilityLabel("ACCESSIBILITY LABEL")
    .accessibilityHint("ACCESSIBILITY HINT")
```

When using the VoiceOver or Hover Text accessibility features, focus moves only between active elements, not static ones. When I add .focusable() it works, but I don't want those elements to be focusable when all accessibility features are off. I suppose I could do something like this:

```swift
// Pseudocode - I don't remember exactly how to detect the current accessibility settings.
.focusable(UIApplication.shared.accessibility.voiceOver.isOn || UIApplication.shared.accessibility.hoverText.isOn)
```

However, using focusable() with conditions on hundreds of static texts in an app seems like overkill. Accessibility focus is also needed on some control containers where we already have somewhat complex focus handling, with conditions in focusable(...) on parent and child elements, so extending that for accessibility seems too complicated. Is there a simple way to tell accessibility that an element is focusable specifically for Hover Text and for VoiceOver? Example of what I want to accomplish for TV content:

```swift
// Desired (hypothetical) API - accessibilityLabel(for:) does not exist today:
VStack {
    HStack {
        Text("Terminator")
        if parentalLock {
            Image(named: .lock)
        }
    }
    .accessibilityLabel(for: hover, "Terminator - parental lock")

    Text("Sci-Fi * 8pm - 10pm * Remaining 40 min. * Live")
        .accessibilityLabel(for: hover, "Sci-Fi, 8 to 10pm, Remaining 40 min. Broadcasting Live")
}
.accessibilityLabel(for: voiceover, "Terminator, Sci-Fi, 8 to 10pm, Remaining 40 min. Broadcasting Live, parental lock")
```

I watched all the Accessibility WWDC videos from 2016, 2022, and 2024 and googled for several hours, but I couldn't find any solution for static texts and custom views.

From those videos it appears .accessibilityLabel() should be enough, but it clearly works only on active elements and does not work for other SwiftUI views on tvOS without focusable(). Can this be done without using focusable() with conditions detecting which accessibility feature is on? The problem with focusable() would be that for accessibility I may need to read a text for the parent view, while focus needs to be placed on a child element. I remember problems when focusable() was set on a parent view - the child was not focusable, or something like that. Simply put: complications in focus logic. Thanks.
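One pattern that might at least reduce the per-view boilerplate (a sketch, not a confirmed tvOS solution): wrap the condition in a custom ViewModifier that reads SwiftUI's `accessibilityVoiceOverEnabled` environment value, so each static text needs only one modifier. The `accessibilityFocusable()` name is a hypothetical helper, and whether Hover Text can be detected the same way is an open assumption.

```swift
import SwiftUI

// Hypothetical helper: makes a view focusable only while VoiceOver is running.
// (\.accessibilityVoiceOverEnabled is a real SwiftUI environment value;
// an equivalent signal for Hover Text is an assumption.)
struct AccessibilityFocusable: ViewModifier {
    @Environment(\.accessibilityVoiceOverEnabled) private var voiceOverOn

    func body(content: Content) -> some View {
        content.focusable(voiceOverOn)
    }
}

extension View {
    func accessibilityFocusable() -> some View {
        modifier(AccessibilityFocusable())
    }
}

// Usage: one modifier per static text instead of a hand-written condition.
struct ChannelRow: View {
    var body: some View {
        Text("Sci-Fi * 8pm - 10pm * Remaining 40 min. * Live")
            .accessibilityFocusable()
            .accessibilityLabel("Sci-Fi, 8 to 10pm, Remaining 40 min. Broadcasting Live")
    }
}
```

This still uses focusable() underneath, so it does not solve the parent/child focus-logic conflicts, but it centralizes the condition in one place.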
1 reply · 0 boosts · 419 views · Nov ’24
AVPlayer and TLS 1.3 compliance for low latency HLS live stream
Hi guys, I'm investigating a failure to play a low-latency Live HLS stream and I'm getting the following error:

```
<AVPlayerItemErrorLog: 0x30367da10>
#Version: 1.0
#Software: AppleCoreMedia/1.0.0.21L227 (Apple TV; U; CPU OS 17_4 like Mac OS X; en_us)
#Date: 2024/05/17 13:11:46.046
#Fields: date time uri cs-guid s-ip status domain comment cs-iftype
2024/05/17 13:11:16.016 https://s2-h21-nlivell01.cdn.xxxxxx.***/..../xxxx.m3u8 -15410 "CoreMediaErrorDomain" "Low Latency: Server must support http2 ECN and SACK" -
2024/05/17 13:11:17.017 -15410 "CoreMediaErrorDomain" "Invalid server blocking reload behavior for low latency" -
2024/05/17 13:11:17.017
```

The stream works when loading from a dev server with TLS 1.3, but fails on CDN servers with TLS 1.2. Regular Live streams and VOD streams work normally on those CDN servers. I tried to configure TLSv1.2 in Info.plist, but that didn't help. When running `nscurl --ats-diagnostics --verbose` it passes for the server with TLS 1.3, but fails for the CDN servers with TLS 1.2 with error Code=-1005 "The network connection was lost."

Is TLS 1.3 required or just recommended? Referring to:
https://developer.apple.com/documentation/http-live-streaming/enabling-low-latency-http-live-streaming-hls
https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis

Is it possible to configure AVPlayer to skip ECN and SACK validation? Thanks.
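For anyone debugging the same failure, a minimal sketch (assuming AVFoundation on tvOS) of dumping the per-item error log where these CoreMediaErrorDomain entries surface, so they can be captured programmatically rather than read from the debugger:

```swift
import AVFoundation

// Dump AVPlayerItem's error log to inspect CoreMediaErrorDomain entries
// such as -15410 ("Low Latency: Server must support http2 ECN and SACK").
func logStreamErrors(for item: AVPlayerItem) {
    guard let errorLog = item.errorLog() else { return }
    for event in errorLog.events {
        print("status:", event.errorStatusCode,
              "domain:", event.errorDomain,
              "comment:", event.errorComment ?? "-",
              "uri:", event.uri ?? "-")
    }
}
```

Calling this from an observer of `AVPlayerItem.status` (or on `.failed`) makes the CDN-specific failures easy to compare between the TLS 1.2 and TLS 1.3 servers.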
1 reply · 1 boost · 711 views · May ’24
Vision Pro not working on railways
Hi guys, has anyone tried using Vision Pro on a train? I was getting a "Tracking lost" or "Tracking unavailable" message (I don't remember precisely). I could not quite get even the Home screen. The Home screen was kind of shaky, and as the train was moving, the Home screen went sideways. I could not make a video when looking out of the window; again, the same error message. I was trying to look inside, so that there was minimal movement detected by the device. There were no people in front of me, just empty seats, so I was expecting that Vision Pro should be able to lock onto the surrounding space, but without any success.

I managed to start one app I work on and started watching a movie, but the screen stayed in place for only 30 seconds or so, then started moving around a little bit, then moved sideways, flew out of the window, zipped past me, and stayed somewhere behind on the track.

Is it possible to switch Vision Pro into a regime where it could ignore its surroundings? Not sure if perhaps Airplane mode could help, but it was very difficult to even open the Home screen, Settings, or Control Center before I got the error message. It should be a relatively simple algorithm to detect that, say, 70% of the surroundings is roughly in place and ignore the moving scene (like a landscape passing in the window). Apple, please could you fix this, or provide a hint within "Tips" on how to make Vision Pro work inside moving vehicles, if this is already possible? It would greatly improve Vision Pro usability if I could watch movies when traveling and then at home do something meaningful, like taking a nap. Thanks.
2 replies · 0 boosts · 1.1k views · Mar ’24
Vision Pro - let's join forces to improve the visionOS platform
Hi guys, if you have started using Vision Pro, I'm sure you have already found some limitations. Let's join forces and make feature requests. When creating Feedback, a request from one guy may not get any attention from Apple, but if more of us make the same request, we might just push those ideas through. Feel free to add your ideas, and don't forget to create feedback:

App windows can only be moved forward to a distance of about 20ft/6m. I'm pretty sure some users would like to push a window as far as a few miles away and make the window large enough to still be legible. This would be very interesting especially when using Environments and the 360-degree view. I really want to put some apps up in the sky above the mountains and around me, even those iOS apps just made compatible with Vision Pro.

When capturing the screen, I always get the message "Video capture not possible due to insufficient lighting". Why? I have an Environment loaded and extended 360 degrees with some apps opened, so there should be no need for external lighting (at least I think it's not needed). I just want to capture what I see. Imagine creating tutorials, recording lessons for learning various subjects, etc. Actual Vision Pro users might prefer loading their own environments and setting up apps in the spatial domain, but for those who don't have the device yet, or when creating videos to be watched on antique 2D computer screens, it may be useful to create 2D videos this way.

3D video recording is not very good - kind of shaky. Not when Vision Pro is static, but when walking and especially when turning the head left/right/up/down (even relatively slowly). I think the hardware should be able to capture and create nice, smooth video. It's possible that Apple just designed a simple Camera app and wants to give developers a chance to create a better one, but it still would be nice to have something better out of the box.

I would like to be able to walk through Environments. I understand the safety of the see-through effect, so users don't hit any obstacles, but perhaps obstacles could be detected: when the user gets within 6ft/2m of an obstacle, it could first present a warning (there is already "You are close to an object") and then make the surroundings visible. But if there are no obstacles (the user can be located in a large space and can place a tape or a thread around the safe area), I should be able to walk around and take a look inside that crater on the Moon.

We need Environments, Environments, Environments, and yet more of them. I was hoping for hundreds, so we could even pick some of them and use them in our apps, like games where you want to set up a specific environment.

Well, that's just a beginning, and I could go on and on, but tell me what you guys think. Regards and enjoy the new virtual adventure! Robert
5 replies · 0 boosts · 1.2k views · Mar ’24
Handling tvOS Siri remote + iOS remote + Nimbus/PS
Hi guys, since SwiftUI does not completely support the tvOS remote and swipe/pan gestures (targeting tvOS 16 and higher, using Xcode 15.1 and its tvOS SDK), I have implemented custom remote control handling using the GameController framework and the GCController class to detect the Siri Remote (1st and 2nd generation), the iOS Remote (the tvOS remote controller on iPhone), and Nimbus+ and PS game controllers.

For a video streaming app I specifically needed to handle arrows on the old and new Siri remotes the same way - an arrow press (not just a touch) switches the TV channel. In the standard SwiftUI remote control handling, arrow presses on the old remote are triggered just by touching the edges of the touch pad. We use Up/Down arrow presses to switch the TV channel. Our testers reported that they too often accidentally changed the channel when picking up the remote, or when simply resting a finger on the pad in preparation to use the remote for some action. That's why we needed to override the default behavior on the old remote and detect an arrow press only when the user touches the edge and "clicks" the touch pad.

We simultaneously detect input from all controllers because, for example, when a Nimbus game controller is connected it becomes the current controller - it is set in GCController.current - while we still need to handle the Siri remote, so the user doesn't have to turn off the game controller and can just pick up the Siri remote and use it right away.

But there are still some problems. For example, if I open the tvOS remote control app on an iPhone, it has higher priority than the Siri remote. Although I'm able to hold the iPhone in one hand and the Siri remote in the other and use them simultaneously, pressing arrow buttons on the Siri remote triggers an event that is detected as if it came from the iOS remote. This problem occurs even if the user locks the iPhone while the remote app is active. The user has to unlock the iPhone and minimize the iOS remote to fix the problem and make the Siri remote the fully active controller with arrows working as expected.

Is there a way of detecting which remote a button press event came from? GCController.current reports that the iOS remote is the current controller; the notifications GCControllerDidConnect and GCControllerDidBecomeCurrent do not help either. When I use the iOS remote it becomes the current controller, but when I start using the Siri remote and press a button, it does not become the current controller. GCController.physicalInputProfile does not help either. It is as if the iOS remote, while connected to tvOS, has higher priority than the Siri remote. Why don't I receive a current-controller change when pressing a button on the Siri remote?

When pressing a button on the Siri remote and receiving the event, it comes from a handler with GCController.microGamepad.elements.count == 10 instead of 17, so although I'm pressing a button on the new Siri remote, I'm getting events as if I pressed a button on the old remote (or the iOS remote, which is handled as an old Siri remote). Even the button.controller instance is wrong and holds a reference to the iOS remote instead of the correct new Siri remote. I tried to look at button.aliases to see if I could find "Cardinal Direction", which would hint at an arrow press on the new controller, but it does not contain this type of alias.

I'm trying to find some hack for detecting the currently used Siri remote, but without any success. It looks like somewhere in the core, GCController converts button presses to more generic events, making the button presses anonymous (losing hardware info). But clearly the low-level code must see which controller the button press came from. Any idea if there is a way of detecting which remote controller a button press came from?
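For reference, here is a sketch of the per-controller approach I'd expect to attribute presses correctly: install handlers on each connected controller's profile so the event is keyed by the instance the handler was installed on, rather than by GCController.current. (As described above, even this misattributes presses when the iOS remote is active, so treat it as a baseline, not a fix; the class and names are illustrative.)

```swift
import GameController

// Observe connections and install handlers per controller instance, so a
// press can be attributed to the controller whose profile fired the handler.
final class RemoteRouter {
    private var observers: [NSObjectProtocol] = []

    func start() {
        observers.append(NotificationCenter.default.addObserver(
            forName: .GCControllerDidConnect, object: nil, queue: .main) { [weak self] note in
                guard let controller = note.object as? GCController else { return }
                self?.attach(to: controller)
        })
        GCController.controllers().forEach(attach(to:))
    }

    private func attach(to controller: GCController) {
        guard let pad = controller.microGamepad else { return }
        pad.reportsAbsoluteDpadValues = true
        pad.buttonA.pressedChangedHandler = { _, _, pressed in
            // `controller` here is the instance the handler was installed on,
            // independent of GCController.current.
            print("select pressed=\(pressed) on \(controller.vendorName ?? "unknown")")
        }
    }
}
```

Logging `vendorName`, `productCategory`, and the element count per instance at connect time may at least reveal how tvOS is aliasing the remotes.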
0 replies · 0 boosts · 1k views · Feb ’24
Vision Pro Dev Kit question
Hi guys, has any individual developer received a Vision Pro dev kit, or is it just aimed at big companies? Basically I would like to start with one or two of my apps that I have already removed from the store, just to get familiar with the visionOS platform and gain knowledge and skills on a small but real project. After that I would like to use the dev kit on another project. I work on a contract for a multinational communication company on a pilot project in a small country, and extending that project to visionOS might be a very interesting introduction of this new platform and could excite users utilizing their services. However, I cannot quite reveal the details to Apple for reasons of confidentiality. After completing that contract (or during it, if I manage) I would like to start working on a great idea I have for Vision Pro (as many of you do).

Is it worth applying for the dev kit as an individual dev? I have read some posts that guys were rejected. Is it better to start in the simulator and just wait for the actual hardware to show up in the App Store? I would prefer to just get the device, rather than start working with a device that I may need to return in the middle of an unfinished project. Any info on when pre-orders might be possible?

Any idea what Mac specs are needed for developing for visionOS - especially for 3D scenes? I just got a MacBook Pro M3 Max with 96GB RAM; I'm wondering whether I should have maxed out the config. Anybody using that config with the Vision Pro dev kit? Thanks.
0 replies · 0 boosts · 1.1k views · Dec ’23
macOS Sonoma 14 RC - Full Disk Access for app bundle is disabled after reboot (kTCCServiceSystemPolicyAllFiles)
Hi guys, has anyone seen this issue? When installing an application that requires Full Disk Access (kTCCServiceSystemPolicyAllFiles), the user enables this feature, but after a reboot the OS automatically turns it off. I filed feedback in case it's a new issue. Any idea how to fix it? Any workaround to keep Full Disk Access enabled? Thanks.
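Until there's a fix, apps can at least detect the regression at launch and prompt the user. A sketch, under the common assumption that readability of a TCC-protected path is a reasonable proxy for Full Disk Access (the probe path below is a conventional choice, not an official API):

```swift
import Foundation

// Heuristic probe: if the process can read a TCC-protected file, Full Disk
// Access is effectively granted. Returns nil when the path doesn't exist
// (e.g. on a test machine), true/false otherwise.
func canRead(_ path: String) -> Bool? {
    let fm = FileManager.default
    guard fm.fileExists(atPath: path) else { return nil }
    return fm.isReadableFile(atPath: path)
}

// Typical protected probe target (an assumption, not an official API):
// the per-user TCC database.
let tccDB = ("~/Library/Application Support/com.apple.TCC/TCC.db" as NSString)
    .expandingTildeInPath

if let granted = canRead(tccDB) {
    print(granted ? "Full Disk Access appears granted"
                  : "Full Disk Access appears revoked")
}
```

If the probe reports "revoked" after a reboot, the app can deep-link the user to the Full Disk Access pane rather than failing silently.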
16 replies · 0 boosts · 4.6k views · Sep ’23
tvOS: AVPlayerViewController.transportBarCustomMenuItems not working
Hi guys, setting AVPlayerViewController.transportBarCustomMenuItems is not working on tvOS. I still see the 2 default icons for Audio and Subtitles.

```swift
let menuItemAudioAndSubtitles = UIMenu(
    image: UIImage(systemName: "heart")
)
playerViewController.transportBarCustomMenuItems = [menuItemAudioAndSubtitles]
```

The WWDC 2021 video is insufficient to make this work: https://developer.apple.com/videos/play/wwdc2021/10191/ The video doesn't say what exactly I need to do. Do I need to disable subtitle options?

```swift
viewController.allowedSubtitleOptionLanguages = []
```

This didn't work, and I still see the default icon loaded by the player. Do I need to create a subclass of AVPlayerViewController? I just want to replace those 2 default icons with 1 icon as a test, but I was unsuccessful after many hours of work. Is it mandatory to define child menu items on the main item? Or do I perhaps need to define a UIAction? The documentation and video are insufficient in providing guidance on how to do that. I did something similar more than 3 years ago, but back then audio and subtitles were showing at the top of the player screen as tabs, if I remember correctly. Is transportBarCustomMenuItems perhaps deprecated? Is it possible that when AVPlayerItem is loaded and audio and subtitle tracks are detected in the stream, the player automatically resets the AVPlayerViewController menu? How do I suppress this behavior? I'm currently loading AVPlayerViewController into a SwiftUI interface. Is that perhaps the problem? Should I write a SwiftUI player overlay from scratch? Thanks, Robert
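For comparison, a sketch of the variant with explicit UIAction children, which is worth ruling out first (an assumption to verify: a UIMenu with no children may simply be dropped by the transport bar; the menu title and actions here are illustrative):

```swift
import AVKit
import UIKit

// Build a transport-bar menu with explicit UIAction children; a childless
// UIMenu may be ignored by the transport bar (assumption to verify on-device).
func installCustomMenu(on playerViewController: AVPlayerViewController) {
    let quality = UIMenu(
        title: "Quality",
        image: UIImage(systemName: "heart"),
        children: [
            UIAction(title: "Auto") { _ in /* select automatic quality */ },
            UIAction(title: "1080p") { _ in /* select 1080p */ },
        ]
    )
    playerViewController.transportBarCustomMenuItems = [quality]
}
```

Note this sets custom items alongside the built-in Audio/Subtitles controls; whether setting it before vs. after replacing the AVPlayerItem matters is exactly the open question above.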
1 reply · 0 boosts · 902 views · Sep ’23
TVML Basic and Technical Questions from TVML Rookie
Hi guys, I have done a lot of iOS, tvOS, and macOS development with UIKit and SwiftUI, but TVML appears to have a really steep learning curve. When searching online and in this forum, posts are 7 years old and most often there are no replies. Is TVML something already obsolete and abandoned by Apple? Does it make sense to try quickly building a streaming app using TVML? Or is it better to start in SwiftUI, build templates from scratch, and forget about TVML?

What's nice is that I can quickly build a page with collections of videos; however, I struggle for hours to do the simplest thing, like adding a simple element with a title, subtitle, and play button.

Why doesn't this display anything inside stackTemplate?

```xml
<banner>
    <stack>
        <title>My Title</title>
    </stack>
</banner>
```

Why doesn't this display anything inside stackTemplate?

```xml
<collectionList>
    <shelf id="row0">
        <section binding="items:{videos}">
            <prototypes>
                <lockup prototype="play" videoURL="https://storage.googleapis.com/media-session/sintel/trailer.mp4">
                    <title style="margin: 20; color: white; font-size: 60" binding="textContent:{title}"/>
                </lockup>
            </prototypes>
        </section>
    </shelf>
</collectionList>
```

If I add an image to the lockup element, it works and displays the image and title. But if I want only a title, subtitle, and play button, it doesn't work. Is img mandatory inside the lockup element? In the documentation I cannot find which subelements are mandatory and which are optional. Why can't I add a stack element as a prototype inside the section element?

Although I'm a native developer and don't really like HTML, I thought that TVML works in a similar way and that I could add almost any element into any other element, like building HTML, and the TVML platform would stack up UI elements based on that. However, it appears to be severely limited. The rules for even a basic element inside another layout element are so strict that I cannot tell ahead of time whether something is going to work. Why doesn't it at least display the elements that are defined correctly?

Usually when one of the elements is invalid, the whole section or shelf or container in general is hidden from view. It's impossible to build UI from the ground up. For example, I thought I could add a lockup element with a title, see how it looks, then add a subtitle and continue adding elements one by one. However, if a lockup with only a title simply doesn't display at all, I cannot figure out what's wrong, and there is no error message in the console. The documentation is insufficient; it's not possible to figure out from it how each TVML UI element works, what's mandatory, and what's optional. Sometimes I was actually able to add an element inside another element even though the docs didn't say it was possible - like adding a stack to some element, which I no longer remember.

Is it possible to debug TVML somehow? Is it possible to debug JS in Xcode? Is there any TVML preview like in the case of xib or SwiftUI? So far I always have to rebuild the app and look at the simulator screen. But often it caches the results, and I have to delete the app from the simulator, clean the build, and rebuild again. It's very time consuming. Is there any way to clear the cache (hot key)? Thanks.
0 replies · 1 boost · 858 views · Jul ’23
Is there any problem using NETransparentProxyNetworkSettings(tunnelRemoteAddress: "::1") ?
What if users don't have IPv6 enabled in the TCP settings of the network adapter? Does it have any effect on NETransparentProxyNetworkSettings initialized with tunnelRemoteAddress set to an IPv6 address? If yes, is it possible to use NETransparentProxyNetworkSettings(tunnelRemoteAddress: "localhost") and let the system figure out the address? Or is it better to just always use "127.0.0.1"? In some cases, when the user doesn't use IPv6 (for one reason or another), we see the NETransparentProxyNetworkSettings configuration in System Preferences -> Network switch repeatedly and indefinitely between the disconnected and connected states. Could it be caused by using "::1" as tunnelRemoteAddress? Thanks.
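A small sketch of the conservative variant discussed above - the IPv4 loopback literal, which does not depend on the adapter's IPv6 configuration (whether this actually avoids the reconnect loop is exactly the open question; the port-443 rule is purely illustrative):

```swift
import NetworkExtension

// Inside an NETransparentProxyProvider subclass: apply settings with the
// IPv4 loopback literal rather than "::1" or a hostname.
func applySettings(to provider: NETransparentProxyProvider,
                   completion: @escaping (Error?) -> Void) {
    let settings = NETransparentProxyNetworkSettings(tunnelRemoteAddress: "127.0.0.1")
    // Hypothetical rule set for illustration: claim outbound TCP port 443.
    settings.includedNetworkRules = [
        NENetworkRule(remoteNetwork: NWHostEndpoint(hostname: "0.0.0.0", port: "443"),
                      remotePrefix: 0,
                      localNetwork: nil,
                      localPrefix: 0,
                      protocol: .TCP,
                      direction: .outbound)
    ]
    provider.setTunnelNetworkSettings(settings, completionHandler: completion)
}
```

Since tunnelRemoteAddress for a transparent proxy is essentially a placeholder (there is no real remote tunnel endpoint), a literal that is always valid on the host seems like the least fragile choice.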
1 reply · 0 boosts · 704 views · Jun ’23
Network filter in state "Connecting" when System Extension init is delayed
I found a problem where the Network filter ends up in the state "Connecting" when the Network System Extension is slightly delayed during initialization.

```swift
func main() {
    autoreleasepool {
        NEProvider.startSystemExtensionMode()
    }
    DoSomething()
    Thread.sleep(forTimeInterval: 5) // Simulating a task that takes a little longer
    CFRunLoopRun()
}

main()
```

This usually happens during a System Extension upgrade and possibly also during reboot. When the client app registers the system extension, it gets the callback very quickly, because the extension has already been approved by the user. Then it calls loadFromPreferences() to load the configuration, which already exists, so that is also quite fast. The Network filter then switches into the state "Connecting", but the System Extension hasn't completed its init yet. The Network filter remains in the "Connecting" state and never recovers. I have to manually click "Disconnect", and then it instantly connects and the filter ends up in the proper "Connected" state.

Why doesn't the filter get into the Connected state when the extension is finally fully initialized? I understand that a system extension needs to initialize as quickly as possible, but it shouldn't be this unreliable when somebody adds an extra 100ms to the init process or some nondeterministic code sequence, whether it takes 0.1s or 2s.

Is there a way to fix this using standard Apple APIs? If not, how should the issue be resolved? I see the following options:

Run the Apple APIs, startSystemExtensionMode(), etc., as fast as possible and do the other setup on a background thread (updating the network filter, adding filtering ports, etc.). However, this still does not completely resolve the issue, because System Extension init might be delayed by high CPU load (say, other tasks are running or daemons are starting during reboot).

Implement a timer, and when the timer expires, check whether the Network filter is still in the state "Connecting"; if it is, disable and enable the filter to reconnect (in this case there is no proxy server/daemon on the other side - we are not really connecting anywhere, we just need to receive handleNewFlow() events).

The System Extension could notify the client that it has completed initialization; the client would then check the filter state, and if it is still "Connecting", it would disable and enable the filter so it could reconnect.

Any suggestions? Thanks.
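A sketch of the first option, restructuring the sample from this post so the slow work no longer sits between startSystemExtensionMode() and the run loop (assumption: DoSomething() - the placeholder from the sample above - is safe to run off the main thread after system-extension mode has started):

```swift
import Foundation
import NetworkExtension

// Start system-extension mode immediately, then run the slow setup off the
// main thread so the filter handshake is not blocked by it.
func main() {
    autoreleasepool {
        NEProvider.startSystemExtensionMode()
    }
    DispatchQueue.global(qos: .userInitiated).async {
        DoSomething() // the slow setup from the original sample
        // If handleNewFlow() must wait for this setup, gate it on a flag
        // set here rather than delaying startSystemExtensionMode().
    }
    CFRunLoopRun()
}
```

This only shrinks the window; as noted above, a watchdog timer on the client side is still needed to cover delays caused by system-wide CPU load.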
0 replies · 0 boosts · 714 views · Apr ’23
XPC listener initialized in System Extension invalidates incoming connection under certain conditions
I found a problem where a process tries to connect to the System Extension and the connection is invalidated. The XPC listener has to be disposed of and initialized again. This happens when the System Extension executes tasks in the following order:

1. NSXPCListener initialized
2. NSXPCListener.resume()
3. NEProvider.startSystemExtensionMode()

Result: the connection is invalidated, and not only does the client have to retry the connection, but the System Extension must also reinitialize the listener (execute steps 1 and 2 again). However, if I call:

1. NEProvider.startSystemExtensionMode()
2. NSXPCListener initialized
3. NSXPCListener.resume()

it works as expected, and even if the connection is invalidated/interrupted, the client process can always reconnect and no other action is necessary in the System Extension (no need to reinitialize the XPC listener).

The Apple docs about NEProvider.startSystemExtensionMode() say that this method starts handling requests, but in an online article by Scott Knight I found that startSystemExtensionMode() also starts the listener server. Is that right? Please could you add this info to the docs if so? https://knight.sc/reverse%20engineering/2019/08/24/system-extension-internals.html

I would like to use the following logic:

Call NEProvider.startSystemExtensionMode() only under certain circumstances - once I have received some configuration that I need to process and use for setup. If I don't receive it, there is no reason to call startSystemExtensionMode() yet; I don't need to handle handleNewFlow() yet.

Connect the XPC client to the System Extension under certain conditions. Ideally, communicate with the client even though the System Extension is not handling network requests yet, that is, without receiving handleNewFlow().

Basically, I consider XPC and the System Extension's network request handling to be separate things. Is that correct - are they separate and independent? Does XPC communication really depend on calling startSystemExtensionMode()?

Another potential issue: is it possible that the XPC listener fails to validate a connection when the client tries to connect before the System Extension manages to complete init and park the main thread in CFRunLoop?

Note: these questions arose mostly from handling upgrades of the System Extension (the extension is already running, the network filter is created and connected, and a new version of the app upgrades the System Extension). Thanks.
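For completeness, a minimal sketch of the ordering that worked reliably for me - system-extension mode first, listener second (the Mach service name and delegate class are hypothetical placeholders):

```swift
import Foundation
import NetworkExtension

// Hypothetical listener delegate for illustration.
final class ExtensionXPCDelegate: NSObject, NSXPCListenerDelegate {
    func listener(_ listener: NSXPCListener,
                  shouldAcceptNewConnection connection: NSXPCConnection) -> Bool {
        // Validate the peer and configure exported interfaces here.
        connection.resume()
        return true
    }
}

// Ordering that proved reliable: start system-extension mode first,
// then create and resume the Mach service listener.
let delegate = ExtensionXPCDelegate()
NEProvider.startSystemExtensionMode()
let listener = NSXPCListener(machServiceName: "com.example.extension.xpc") // hypothetical name
listener.delegate = delegate
listener.resume()
CFRunLoopRun()
```

If startSystemExtensionMode() really does set up the listener environment (as the linked reverse-engineering article suggests), this ordering would explain why the reversed sequence invalidates the first connection.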
3 replies · 0 boosts · 1k views · Apr ’23
Endpoint Security System Extensions and detecting access to sandboxed files
Hi guys, is there a way to detect whether es_message_t reports an event where a process is trying to access a file in a sandbox container? Is there a way to mute this type of event? I know it is possible to mute a process, but that process might still access locations other than the sandbox. For example, Apple's installd needs access to:

/Library/InstallerSandboxes/.PKInstallSandboxManager/DE149785-B407-4C06-9571-5A2AA81D061E.activeSandbox

Extracting file:///var/folders/rw/spcsf6q91wvdp7306_4fw9mh0000gn/C/com.apple.appstoreagent/com.apple.appstore/F566BEB3-D9FA-4C14-8ABF-1C2ED22FC90A/mzps15464275679007221414.pkg#OneNote.pkg (destination=/Library/InstallerSandboxes/.PKInstallSandboxManager/DE149785-B407-4C06-9571-5A2AA81D061E.activeSandbox/Root/Applications, uid=0)

I would like to ignore this event. For example, es_process_t has the is_platform_binary flag. Is there a similar flag for sandboxed files in es_message_t or es_file_t, e.g. isSandboxed? Thanks, Robert
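In case it helps others hitting the same noise: rather than a per-file flag, path-based muting may cover this case. A sketch assuming the EndpointSecurity C API called from Swift (es_mute_path with target-path types requires macOS 12+; on older systems only per-process muting is available):

```swift
import EndpointSecurity

// Mute all events whose *target* file lives under the installer sandbox
// prefix, without muting the whole process. Requires macOS 12+.
func muteInstallerSandbox(_ client: OpaquePointer) {
    let result = es_mute_path(client,
                              "/Library/InstallerSandboxes/",
                              ES_MUTE_PATH_TYPE_TARGET_PREFIX)
    if result != ES_RETURN_SUCCESS {
        // Fallback: filter in the handler by checking the event's target
        // path prefix before doing any expensive work.
    }
}
```

This doesn't answer whether an isSandboxed-style flag exists, but it removes the installd traffic from the event stream with one call.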
3 replies · 0 boosts · 963 views · Feb ’23
Too many ES_EVENT_TYPE_NOTIFY_CLOSE events without corresponding OPEN event
Hi guys, I'm debugging an issue where an application modifies several files and the Endpoint Security System Extension receives many ES_EVENT_TYPE_NOTIFY_CLOSE events with the modified flag set to true for each file. There are other ES_EVENT_TYPE_NOTIFY_CLOSE events with modified=false, but those are caused by system processes (md, mdworker, etc.), which is expected. However, I see very few ES_EVENT_TYPE_AUTH_OPEN events. For one file I see, for example, 69 ES_EVENT_TYPE_NOTIFY_CLOSE events with modified=true and only 2 ES_EVENT_TYPE_AUTH_OPEN events with flags=2 (which I think is FWRITE).

I thought that for each ES_EVENT_TYPE_NOTIFY_CLOSE event there should be exactly one OPEN, CREATE, CLONE, or other relevant event. The documentation isn't very helpful, because it's probably only generated from the code. For ES_EVENT_TYPE_NOTIFY_CLOSE it says "An identifier for a process that notifies endpoint security that it is closing a file." https://developer.apple.com/documentation/endpointsecurity/es_event_type_t/es_event_type_notify_close

When does ES_EVENT_TYPE_NOTIFY_CLOSE get triggered exactly? It doesn't look like it is fired only when closing a file. Is it possible that the application keeps the file open, performs a seek operation, writes a few bytes, and after every write I get a CLOSE event? Is it only an indication of the end of a modification, or is it an indication of a closed file? If the file is truly closed, why don't I see more OPEN events - or, to be exact, why don't I see a sequence of events OPEN, CLOSE, OPEN, CLOSE...?

Let's say that I need to scan a file for viruses. When receiving ES_EVENT_TYPE_AUTH_OPEN I can block access to the file, scan it, and then allow access. But when receiving a CLOSE notification with an indication that the file was modified, do I need to scan it again? If I receive a CLOSE event with modified=true 50 times, do I need to scan the file every time to be sure that I capture a modification with potentially dangerous content?

Or is the modified flag set to true even if the content does not change and only file attributes are changed? Is there any documentation describing the relations between ES_EVENT_* events and the dependencies between them? Thanks.
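Whatever the exact trigger semantics turn out to be, rescanning on every modified close can be avoided by coalescing. Sketched below with plain Foundation types so the idea is testable in isolation; the debounce interval and keying by path string are assumptions - in a real extension the key should probably include the device/inode pair from es_file_t:

```swift
import Foundation

// Coalesce repeated "modified close" notifications per file: a scan is only
// requested if none has been requested for that path within `interval`.
final class ScanCoalescer {
    private let interval: TimeInterval
    private var lastScheduled: [String: Date] = [:]
    private let lock = NSLock()

    init(interval: TimeInterval = 2.0) {
        self.interval = interval
    }

    /// Returns true when the caller should actually scan the file now.
    func shouldScan(path: String, at now: Date = Date()) -> Bool {
        lock.lock()
        defer { lock.unlock() }
        if let last = lastScheduled[path], now.timeIntervalSince(last) < interval {
            return false // a recent scan already covers this burst of closes
        }
        lastScheduled[path] = now
        return true
    }
}
```

With a burst of 69 modified-close events for one file, this reduces the work to one scan per quiet window instead of 69 scans, at the cost of scanning slightly after the last write rather than after each one.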
0 replies · 1 boost · 1.1k views · Nov ’22
Check if App is notarized from app audit token or SecCode/SecStaticCode
Hi guys, is there a way of checking whether an app is notarized from the app's audit token or from SecCode/SecStaticCode? In a firewall app I need to check if an app is notarized. I'm able to detect whether an app comes from Apple or another developer, and the source of the app - macOS System, App Store, Developer - but I need to quickly check whether an app that comes from a Developer is notarized. It shouldn't take more than a few milliseconds, ideally less than 1ms. From the terminal I tried spctl to see if an app is notarized, but that takes 5-7 seconds to get the result. Is there any interface in the Security framework (or another framework) to get this info? Thanks.
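A sketch of the direction I'd try first, under the assumption that evaluating the code-signing requirement keyword `notarized` against a running process consults the local ticket store without a network round trip (so it should be far faster than spctl's full assessment - worth measuring before relying on the sub-millisecond budget):

```swift
import Security

// Check notarization of a running process identified by its audit token,
// using the requirement-language keyword "notarized".
func isNotarized(auditToken: audit_token_t) -> Bool {
    var token = auditToken
    let tokenData = Data(bytes: &token, count: MemoryLayout<audit_token_t>.size)
    let attrs = [kSecGuestAttributeAudit: tokenData] as CFDictionary

    var code: SecCode?
    guard SecCodeCopyGuestWithAttributes(nil, attrs, [], &code) == errSecSuccess,
          let code = code else { return false }

    var requirement: SecRequirement?
    guard SecRequirementCreateWithString("anchor apple generic and notarized" as CFString,
                                         [], &requirement) == errSecSuccess,
          let requirement = requirement else { return false }

    return SecCodeCheckValidity(code, [], requirement) == errSecSuccess
}
```

Caching the verdict per code-signing identity would amortize the cost across flows from the same app, which matters in a per-flow firewall path.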
1 reply · 0 boosts · 2.2k views · Jul ’20