Hello, I’m the developer of the “StepSquad” app. Our app uses the Game Center achievement feature, but we’ve been encountering a problem: the “Global Players” metric always shows 0%, even though friends of mine have already earned these achievements. Initially, I thought it might be because the app was newly launched. However, it’s now been over two months since release, and it’s still showing 0%. If anyone has any insight into this issue, please leave a comment.
Hi,
I wanted to do something quite simple: Put a box on a wall or on the floor.
My box:
let myBox = ModelEntity(
    mesh: .generateBox(size: SIMD3<Float>(0.1, 0.1, 0.01)),
    materials: [SimpleMaterial(color: .systemRed, isMetallic: false)],
    collisionShape: .generateBox(size: SIMD3<Float>(0.1, 0.1, 0.01)),
    mass: 0.0)
For that I used Plane Detection to identify the walls and floor in the room. Then, with a SpatialTapGesture, I was able to retrieve the position where the user looks and taps.
let position = value.convert(value.location3D, from: .local, to: .scene)
And then positioned my box:
myBox.setPosition(position, relativeTo: nil)
When I then tested it, I realized that the box was not parallel to the wall but sat at a slightly inclined angle.
I also realized that if I tried to put my box on the wall to my left, the box was placed perpendicular to that wall rather than flat against it.
After various searches and several attempts, I ended up playing with transform.matrix to identify whether the plane is a wall or the floor and whether it is in front of me or to the side, and then set up a rotation on the box to "place" it on the wall or floor.
let surfaceTransform = surface.transform.matrix
let surfaceNormal = normalize(surfaceTransform.columns.2.xyz)

let baseRotation = simd_quatf(angle: .pi, axis: SIMD3<Float>(0, 1, 0))
var finalRotation: simd_quatf

if acos(abs(dot(surfaceNormal, SIMD3<Float>(0, 1, 0)))) < 0.3 {
    logger.info("Surface: ceiling/floor")
    finalRotation = simd_quatf(angle: surfaceNormal.y > 0 ? 0 : .pi, axis: SIMD3<Float>(1, 0, 0))
} else if abs(surfaceNormal.x) > abs(surfaceNormal.z) {
    logger.info("Surface: left/right")
    finalRotation = simd_quatf(angle: surfaceNormal.x > 0 ? .pi/2 : -.pi/2, axis: SIMD3<Float>(0, 1, 0))
} else {
    logger.info("Surface: front/back")
    finalRotation = baseRotation
}
Playing with matrices is not really my thing so I don't know if I'm doing it right.
Could you tell me if my tests for the orientation of the walls are correct? During my tests I don't always correctly identify whether the wall is in front of me or to the side.
Is this generally the right way to do it?
Is there an easier way to do this?
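For reference, here is the simpler alternative I was considering (a rough sketch only; it assumes the plane's normal really is columns.2 of the surface transform, which is exactly the part I'm unsure about): build the rotation directly from two vectors with simd_quatf(from:to:) instead of classifying wall vs. floor by hand.
// Sketch: rotate the box's thin axis (+Z, the 0.01 m dimension) onto the surface normal.
let surfaceNormal = normalize(surfaceTransform.columns.2.xyz)
let alignRotation = simd_quatf(from: SIMD3<Float>(0, 0, 1), to: surfaceNormal)

myBox.setPosition(position, relativeTo: nil)
myBox.setOrientation(alignRotation, relativeTo: nil)
If that assumption about the normal is wrong, the same idea should still apply with whichever column actually holds the plane's normal.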
Regards
Tof
Can visionOS take screenshots other than by simultaneously pressing the buttons?
Topic: Graphics & Games
SubTopic: RealityKit
I'm trying to use MTLBinaryArchive. I collected a BinaryArchive from one device and used metal-tt to translate it for all supported iPhone devices, ranging from iPhone 7 Plus to iPhone 16.
However, this BinaryArchive is quite large: around 1.5 GB uncompressed, and about 500 MB compressed in the IPA. I'm wondering how to address the size issue.
I watched the WWDC 2022 video, which mentioned that the operating system or app installation process would handle compatibility. Does this compatibility support different GPU chips? I tried installing an IPA with a BinaryArchive collected only from an iPhone 12 on an iPhone 13, but the BinaryArchive didn't take effect.
I also saw that Apple supports App Thinning. However, it seems that resources in the Asset Catalog cannot be accessed via URL, and creating an MTLBinaryArchive requires a URL. Is it possible for MTLBinaryArchive to be distributed through App Thinning?
The WWDC 2022 video also mentioned using the -Os optimization flag to reduce size. Is there an estimate of how much size reduction it achieves? Are there any methods to solve the BinaryArchive size issue without impacting performance?
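For context, this is roughly how I load the archive today (a minimal sketch; the file name and bundle location are placeholders), which is why resources that only exist in an Asset Catalog, without a file URL, don't seem usable for this:
import Metal

// Sketch: create the binary archive from a file URL shipped in the app bundle.
func loadShaderArchive(device: MTLDevice) throws -> MTLBinaryArchive? {
    guard let archiveURL = Bundle.main.url(forResource: "Shaders", withExtension: "metallib-archive") else {
        return nil // Archive not present in this build/slice.
    }
    let descriptor = MTLBinaryArchiveDescriptor()
    descriptor.url = archiveURL // MTLBinaryArchive is created from a URL, not an asset catalog entry.
    return try device.makeBinaryArchive(descriptor: descriptor)
}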
Topic: Graphics & Games
SubTopic: Metal
Once GKAccessPoint is active, entering an ARView page causes the ARView to lose its camera feed.
OSVersion: iOS 18.0.1, iOS 18.1
I'd like to display the refresh rate on the device's screen on an iPhone (14 Pro Max). Does anyone know how to do this?
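To clarify what I'm after, here is a rough sketch of the kind of overlay I mean (my own approach using CADisplayLink; the view controller and label names are placeholders, and this only measures the effective rate, not a system-reported value):
import UIKit

// Sketch: count CADisplayLink callbacks for one second and show the result in a label.
final class RefreshRateViewController: UIViewController {
    private let label = UILabel()
    private var frameCount = 0
    private var lastTimestamp: CFTimeInterval = 0

    override func viewDidLoad() {
        super.viewDidLoad()
        label.frame = CGRect(x: 20, y: 60, width: 200, height: 40)
        label.textColor = .systemGreen
        view.addSubview(label)

        let link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link.add(to: .main, forMode: .common)
    }

    @objc private func tick(_ link: CADisplayLink) {
        if lastTimestamp == 0 { lastTimestamp = link.timestamp }
        frameCount += 1
        let elapsed = link.timestamp - lastTimestamp
        if elapsed >= 1 {
            label.text = String(format: "%.0f Hz", Double(frameCount) / elapsed)
            frameCount = 0
            lastTimestamp = link.timestamp
        }
    }
}
On ProMotion devices the system may coalesce frames to a lower rate unless CADisableMinimumFrameDurationOnPhone is set in Info.plist, so this shows the effective rate rather than the panel's maximum.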
Topic: Graphics & Games
SubTopic: General
Hi,
I can capture a frame on the Apple TV, but when I try to profile the capture for GPU timing information, I get an "Abort Trap 6" error with the following in the report:
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Triggered by Thread: 7
Application Specific Information:
abort() called
Last Exception Backtrace:
0 CoreFoundation 0x18c0a99d0 __exceptionPreprocess + 160
1 libobjc.A.dylib 0x18b596d24 objc_exception_throw + 71
2 CoreFoundation 0x18bfa7308 -[__NSArrayM insertObject:atIndex:] + 1239
3 MTLReplayController 0x101f5d148 DYMTLReplayFrameProfiler_loadAnalysis + 1140
4 MTLReplayController 0x101e97f90 GTMTLReplayClient_collectGPUShaderTimelineData + 224
5 MTLReplayController 0x101e81794 __30-[GTMTLReplayService profile:]_block_invoke_4 + 288
6 Foundation 0x18eb6072c __NSOPERATION_IS_INVOKING_MAIN__ + 11
7 Foundation 0x18eb5cc1c -[NSOperation start] + 623
8 Foundation 0x18eb60edc __NSOPERATIONQUEUE_IS_STARTING_AN_OPERATION__ + 11
9 Foundation 0x18eb60bc4 __NSOQSchedule_f + 167
10 libdispatch.dylib 0x18b8d6a84 _dispatch_block_async_invoke2 + 103
11 libdispatch.dylib 0x18b8c9420 _dispatch_client_callout + 15
12 libdispatch.dylib 0x18b8cc5d0 _dispatch_continuation_pop + 531
13 libdispatch.dylib 0x18b8cbcd4 _dispatch_async_redirect_invoke + 635
14 libdispatch.dylib 0x18b8d9224 _dispatch_root_queue_drain + 335
15 libdispatch.dylib 0x18b8d9a08 _dispatch_worker_thread2 + 163
16 libsystem_pthread.dylib 0x18b6e652c _pthread_wqthread + 223
17 libsystem_pthread.dylib 0x18b6ed8d0 start_wqthread + 7
This is Xcode 16.0 + Apple TV 4K (4th Gen) running tvOS 18. Does anyone know the cause of this error, and is there any solution for it?
Thank you very much,
Kai
Topic: Graphics & Games
SubTopic: Metal
I'm using a club to hit a ball and let it roll on the turf in RealityKit, but at the moment I can only make it slide, not roll.
I add collision to the turf (static), the club (kinematic), and the ball (dynamic), and set some parameters: radius and mass.
Using these parameters I calculate linear damping and inertia; besides that, I use the time between frames and the club position to calculate speed. The code looks like this:
let radius: Float = 0.025
let mass: Float = 0.04593 // mass, in kg

var inertia = 2/5 * mass * pow(radius, 2)

let currentPosition = entity.position(relativeTo: nil)
let distance = distance(currentPosition, rgfc.lastPosition)
let deltaTime = Float(context.deltaTime)
let speed = distance / deltaTime

let C_d: Float = 0.47 // drag coefficient
let linearDamping = 0.5 * 1.2 * pow(speed, 2) * .pi * pow(radius, 2) * C_d // linear damping (1.2 is the air density)

entity.components[PhysicsBodyComponent.self]?.massProperties.inertia = SIMD3<Float>(inertia, inertia, inertia)
entity.components[PhysicsBodyComponent.self]?.linearDamping = linearDamping

// force
let acceleration = speed / deltaTime
let forceDirection = normalize(currentPosition - rgfc.lastPosition)
let forceMultiplier: Float = 1.0
let appliedForce = forceDirection * mass * acceleration * forceMultiplier

entityCollidedWith.addForce(appliedForce, at: rgfc.hitPosition, relativeTo: nil)
I also tried applyImpulse instead of addForce, like:
let linearImpulse = forceDirection * speed * forceMultiplier * mass
No matter how I adjust the friction (static, dynamic) and restitution, and whether I use addForce or applyImpulse, the ball only slides. How can I solve this problem?
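One thing I have been experimenting with (a rough sketch, not a confirmed fix, and it assumes the angular-impulse API is available on the dynamic body) is giving the ball an explicit spin in addition to the linear impulse, using the rolling-without-slipping relation ω = v / r for the target angular speed:
// Sketch: besides the linear impulse, apply an angular impulse that matches
// rolling without slipping. Reuses radius/mass/inertia/speed/forceDirection from above.
let spinAxis = normalize(cross(SIMD3<Float>(0, 1, 0), forceDirection)) // horizontal axis perpendicular to travel
let angularSpeed = speed / radius                                      // rad/s for rolling without slipping
let angularImpulse = spinAxis * inertia * angularSpeed                 // kg·m²/s

entityCollidedWith.applyLinearImpulse(forceDirection * mass * speed, relativeTo: nil)
entityCollidedWith.applyAngularImpulse(angularImpulse, relativeTo: nil)
I also make sure the turf and ball use a physics material with non-zero static and dynamic friction, since with zero friction even a spinning ball would just slide.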
I'm trying to make a magnifying glass that shows up when the user presses a button and follows the user's finger as it's dragged across the screen.
I came across a UIKit-based solution (https://github.com/niczyja/MagnifyingGlass-Swift), but when implemented in my SKScene, only the crosshairs are shown. Through experimentation I've found that magnifiedView?.layer.render(in: context) in:
public override func draw(_ rect: CGRect) {
    guard let context = UIGraphicsGetCurrentContext() else { return }
    context.translateBy(x: radius, y: radius)
    context.scaleBy(x: scale, y: scale)
    context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
    removeFromSuperview()
    magnifiedView?.layer.render(in: context)
    magnifiedView?.addSubview(self)
}
can be removed without altering the situation, suggesting that line is not working as it should. But this is where I hit a brick wall. The view below is shown but not offset or magnified, and any attempt to add something to the context results in a black magnifying glass.
Does anyone know why this is? I don't think it's an issue with the code, so I suspect it's something specific to SpriteKit or SKScene, likely related to how CALayers work.
Any pointers would be greatly appreciated.
Full code below:
import UIKit

public class MagnifyingGlassView: UIView {
    public weak var magnifiedView: UIView? = nil {
        didSet {
            removeFromSuperview()
            magnifiedView?.addSubview(self)
        }
    }

    public var magnifiedPoint: CGPoint = .zero {
        didSet {
            center = .init(x: magnifiedPoint.x + offset.x, y: magnifiedPoint.y + offset.y)
        }
    }

    public var offset: CGPoint = .zero

    public var radius: CGFloat = 50 {
        didSet {
            frame = .init(origin: frame.origin, size: .init(width: radius * 2, height: radius * 2))
            layer.cornerRadius = radius
            crosshair.path = crosshairPath(for: radius)
        }
    }

    public var scale: CGFloat = 2

    public var borderColor: UIColor = .lightGray {
        didSet {
            layer.borderColor = borderColor.cgColor
        }
    }

    public var borderWidth: CGFloat = 3 {
        didSet {
            layer.borderWidth = borderWidth
        }
    }

    public var showsCrosshair = true {
        didSet {
            crosshair.isHidden = !showsCrosshair
        }
    }

    public var crosshairColor: UIColor = .lightGray {
        didSet {
            crosshair.strokeColor = crosshairColor.cgColor
        }
    }

    public var crosshairWidth: CGFloat = 5 {
        didSet {
            crosshair.lineWidth = crosshairWidth
        }
    }

    private let crosshair: CAShapeLayer = CAShapeLayer()

    public convenience init(offset: CGPoint = .zero, radius: CGFloat = 50, scale: CGFloat = 2, borderColor: UIColor = .lightGray, borderWidth: CGFloat = 3, showsCrosshair: Bool = true, crosshairColor: UIColor = .lightGray, crosshairWidth: CGFloat = 0.5) {
        self.init(frame: .zero)
        layer.masksToBounds = true
        layer.addSublayer(crosshair)
        defer {
            self.offset = offset
            self.radius = radius
            self.scale = scale
            self.borderColor = borderColor
            self.borderWidth = borderWidth
            self.showsCrosshair = showsCrosshair
            self.crosshairColor = crosshairColor
            self.crosshairWidth = crosshairWidth
        }
    }

    public func magnify(at point: CGPoint) {
        guard magnifiedView != nil else { return }
        magnifiedPoint = point
        layer.setNeedsDisplay()
    }

    private func crosshairPath(for radius: CGFloat) -> CGPath {
        let path = CGMutablePath()
        path.move(to: .init(x: radius, y: 0))
        path.addLine(to: .init(x: radius, y: bounds.height))
        path.move(to: .init(x: 0, y: radius))
        path.addLine(to: .init(x: bounds.width, y: radius))
        return path
    }

    public override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.translateBy(x: radius, y: radius)
        context.scaleBy(x: scale, y: scale)
        context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
        removeFromSuperview()
        magnifiedView?.layer.render(in: context)
        // If above disabled, no change
        // Possible that nothing's being rendered into context
        // Could it be that SKScene view has no layer?
        magnifiedView?.addSubview(self)
    }
}
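One workaround I'm considering (a sketch based on my assumption that a Metal-backed SKView simply doesn't draw anything through layer.render(in:)) is to snapshot the view with drawHierarchy(in:afterScreenUpdates:) instead and draw the scaled, offset image:
// Sketch of an alternative draw(_:), reusing the same properties as the class above.
public override func draw(_ rect: CGRect) {
    guard let context = UIGraphicsGetCurrentContext(),
          let magnifiedView = magnifiedView else { return }

    // Snapshot the underlying view; temporarily remove the magnifier so it
    // isn't captured in its own image (same trick as the original code).
    removeFromSuperview()
    let renderer = UIGraphicsImageRenderer(bounds: magnifiedView.bounds)
    let snapshot = renderer.image { _ in
        _ = magnifiedView.drawHierarchy(in: magnifiedView.bounds, afterScreenUpdates: true)
    }
    magnifiedView.addSubview(self)

    // Apply the same translate/scale transform as before, then draw the snapshot.
    context.translateBy(x: radius, y: radius)
    context.scaleBy(x: scale, y: scale)
    context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
    snapshot.draw(in: magnifiedView.bounds)
}
I haven't verified whether drawHierarchy captures SKView content reliably, so this is only a direction, not a confirmed fix.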
I have a visionOS app that I'm adding iOS support to, and I'd like to keep using RealityView.
I know there are the following modifiers to add some camera navigation:
.realityViewCameraControls(.orbit)
.realityViewCameraControls(.dolly)
.realityViewCameraControls(.pan)
But how can I add more than one? For example, I would like to orbit with one finger, pan with two fingers, and dolly by pinching. Is this possible, and if so, can someone share some sample code on how to achieve that?
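To illustrate the direction I've been exploring (a rough sketch of my own assumption about how to combine gestures, not a documented way to mix the built-in camera controls): drive a PerspectiveCamera manually, orbiting with a one-finger drag and dollying with a pinch. The view name, gesture scaling factors, and layout are all made up.
import SwiftUI
import RealityKit

struct ManualCameraView: View {
    @State private var orbitAngle: Float = 0        // radians around the Y axis
    @State private var cameraDistance: Float = 2.0  // meters from the origin
    @State private var camera = PerspectiveCamera()

    var body: some View {
        RealityView { content in
            // Something to look at.
            let box = ModelEntity(mesh: .generateBox(size: 0.2),
                                  materials: [SimpleMaterial(color: .red, isMetallic: false)])
            content.add(box)
            // Our own camera entity (I'm assuming the non-AR, virtual camera mode is in effect).
            content.add(camera)
        } update: { _ in
            // Recompute the camera pose whenever the gesture state changes.
            let position = SIMD3<Float>(cameraDistance * sin(orbitAngle),
                                        0.3,
                                        cameraDistance * cos(orbitAngle))
            camera.look(at: .zero, from: position, relativeTo: nil)
        }
        .gesture(DragGesture().onChanged { value in
            orbitAngle = Float(value.translation.width) * 0.01
        })
        .simultaneousGesture(MagnificationGesture().onChanged { value in
            cameraDistance = max(0.5, 2.0 / Float(value))
        })
    }
}
If the built-in camera controls can actually be combined, that would obviously be simpler, which is really what I'm asking.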
Thanks,
Guillermo
I have run into an issue where I am trying to use atomic_float in a Swift package, but I cannot get things to compile because it appears that the Swift Package Manager doesn't support Metal 3 (atomic_float is Metal 3 functionality). Is there any way around this? I am using
// swift-tools-version: 6.1
and my Metal code includes:
#include <metal_stdlib>
#include <metal_geometric>
#include <metal_math>
#include <metal_atomic>
using namespace metal;
kernel void test(device atomic_float* imageBuffer [[buffer(1)]],
                 uint id [[thread_position_in_grid]]) {
}
But I get an error on the definition of atomic_float.
Any help, and more importantly, a pointer to where I could have found information about this limitation, would be appreciated.
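The only workaround I've found so far (a sketch, and it sidesteps SPM's build-time shader compilation entirely rather than fixing it; the file name and error handling are placeholders) is to ship the shader source as a package resource and compile it at runtime, where MTLCompileOptions lets me request Metal 3 explicitly:
import Metal

// Sketch: compile the .metal source at runtime with the Metal 3 language version.
func makeAtomicFloatLibrary(device: MTLDevice) throws -> MTLLibrary {
    let url = Bundle.module.url(forResource: "Kernels", withExtension: "metal")!
    let source = try String(contentsOf: url, encoding: .utf8)

    let options = MTLCompileOptions()
    options.languageVersion = .version3_0   // atomic_float requires Metal 3
    return try device.makeLibrary(source: source, options: options)
}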
-RadBobby
Topic: Graphics & Games
SubTopic: Metal
Hello! I'm currently porting a video game console emulator to iOS and I'm trying to make the renderer (tested on macOS) work on iOS as well.
The emulator core is written in C++ and uses metal-cpp for rendering, whereas the iOS frontend is written in Swift with SwiftUI. I have an Objective-C++ bridging header for bridging the Swift and C++ sides.
On the Swift side, I create an MTKView. Inside the MTKView delegate, I run the emulator for one video frame and pass it the view's backing layer so it can render the final output image. The emulator runs and returns, but when it returns I get a crash in Swift land (call stack attached below), inside objc_release, which indicates I'm doing something wrong with memory management.
My bridging interface (ios_driver.h):
#pragma once

#include <Foundation/Foundation.h>
#include <QuartzCore/QuartzCore.h>

void iosCreateEmulator();
void iosRunFrame(CAMetalLayer* layer);
Bridge implementation (ios_driver.mm):
#import <Foundation/Foundation.h>

extern "C" {
#include "ios_driver.h"
}

<...>

#define IOS_EXPORT extern "C" __attribute__((visibility("default")))

std::unique_ptr<Emulator> emulator = nullptr;

IOS_EXPORT void iosCreateEmulator() { ... }

// Runs 1 video frame of the emulator and
IOS_EXPORT void iosRunFrame(CAMetalLayer* layer) {
    void* layerBridged = (__bridge void*)layer;

    // Pass the CAMetalLayer to the emulator
    emulator->getRenderer()->setMTKLayer(layerBridged);

    // Runs the emulator for 1 frame and renders the output image using our layer
    emulator->runFrame();
}
My MTKView delegate:
class Renderer: NSObject, MTKViewDelegate {
    var parent: ContentView
    var device: MTLDevice!

    init(_ parent: ContentView) {
        self.parent = parent
        if let device = MTLCreateSystemDefaultDevice() {
            self.device = device
        }
        super.init()
    }

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

    func draw(in view: MTKView) {
        let metalLayer = view.layer as! CAMetalLayer

        // Run the emulator for 1 frame & display the output image
        iosRunFrame(metalLayer)
    }
}
Finally, the emulator's render function that interacts with the layer:
void RendererMTL::setMTKLayer(void* layer) {
    metalLayer = (CA::MetalLayer*)layer;
}

void RendererMTL::display() {
    CA::MetalDrawable* drawable = metalLayer->nextDrawable();
    if (!drawable) {
        return;
    }

    MTL::Texture* texture = drawable->texture();
    <rest of rendering follows here using the drawable & its texture>
}
This is the Swift callstack at the time of the crash:
To my understanding, I shouldn't be violating ARC rules, as my bridging header uses CAMetalLayer* instead of void* and Swift automatically accounts for ARC when passing Core Foundation objects to Objective-C. However, I don't have any other idea as to what might be causing this. I've been trying to debug this code for a couple of days without much success.
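One thing I want to rule out (just a guess on my part, not a diagnosis) is the layer being released while the C++ side is still using it, so I'm testing a draw call that pins its lifetime explicitly:
func draw(in view: MTKView) {
    guard let metalLayer = view.layer as? CAMetalLayer else { return }

    // Keep the layer alive for the full duration of the C++ call.
    // This is only a guess about the objc_release crash, not a confirmed fix.
    withExtendedLifetime(metalLayer) {
        iosRunFrame(metalLayer)
    }
}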
If you need more info, the emulator code is also on GitHub:
Metal renderer: https://github.com/wheremyfoodat/Panda3DS/blob/ios/src/core/renderer_mtl/renderer_mtl.cpp#L58-L68
Bridge implementation: https://github.com/wheremyfoodat/Panda3DS/blob/ios/src/ios_driver.mm
Bridging header: https://github.com/wheremyfoodat/Panda3DS/blob/ios/include/ios_driver.h
Any help is more than appreciated. Thank you for your time in advance.
Hi everyone,
I'm using the Vision framework’s ImageAestheticsScoresObservation class (https://developer.apple.com/documentation/vision/imageaestheticsscoresobservation).
I noticed that the overallScore returned sometimes gives negative values. Could someone confirm whether the expected range of the score is from -1.0 to 1.0?
The documentation doesn’t explicitly state the possible score range, so I’d appreciate any clarification or insights.
Thanks in advance!
I am trying to create an empty metadata object and set the HDRGainMapHeadroom to xxx. However, the final returned mutableMetadata doesn't contain HDRGainMap:HDRGainMapVersion or HDRGainMap:HDRGainMapHeadroom, yet iio:hasXMP does exist.
Why? Is the reason that the HDRGainMap namespace is private?
func createHDRGainMapMetadata(version: Int, headroom: Double) -> CGImageMetadata? {
    // Create a mutable metadata object
    let mutableMetadata = CGImageMetadataCreateMutable()

    // Define the namespace for HDRGainMap
    let namespace = "HDRGainMap"

    let xmpKeyPath = "iio:hasXMP"
    let xmpValue = String(true)

    // Set the HDRGainMapVersion item
    let versionKeyPath = "\(namespace):HDRGainMapVersion"
    let versionValue = String(version)

    // Set the version value
    let xmpSetResult = CGImageMetadataSetValueWithPath(mutableMetadata, nil, xmpKeyPath as CFString, xmpValue as CFString)
    if xmpSetResult == false {
        print("Failed to set xmp")
    }

    // Set the version value
    let versionSetResult = CGImageMetadataSetValueWithPath(mutableMetadata, nil, versionKeyPath as CFString, versionValue as CFString)
    if versionSetResult == false {
        print("Failed to set HDRGainMapVersion")
    }

    // Set the HDRGainMapHeadroom item
    let headroomKeyPath = "\(namespace):HDRGainMapHeadroom"
    let headroomValue = String(headroom)

    // Set the headroom value
    let headroomSetResult = CGImageMetadataSetValueWithPath(mutableMetadata, nil, headroomKeyPath as CFString, headroomValue as CFString)
    if headroomSetResult == false {
        print("Failed to set HDRGainMapHeadroom")
    }

    return mutableMetadata
}
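One thing I have since tried (a sketch only; the namespace URI below is something I made up, since I don't know the real one Apple associates with HDRGainMap) is registering the prefix with CGImageMetadataRegisterNamespaceForPrefix before setting the values, because paths with an unregistered custom prefix seem to be dropped silently:
import ImageIO

// Sketch: register a namespace for the "HDRGainMap" prefix before using prefixed key paths.
func createMetadataWithRegisteredNamespace(version: Int) -> CGImageMetadata? {
    let metadata = CGImageMetadataCreateMutable()
    let prefix = "HDRGainMap" as CFString
    let namespaceURI = "http://example.com/HDRGainMap/1.0/" as CFString   // placeholder URI

    var error: Unmanaged<CFError>?
    guard CGImageMetadataRegisterNamespaceForPrefix(metadata, namespaceURI, prefix, &error) else {
        print("Failed to register namespace: \(String(describing: error?.takeRetainedValue()))")
        return nil
    }

    let ok = CGImageMetadataSetValueWithPath(metadata, nil,
                                             "HDRGainMap:HDRGainMapVersion" as CFString,
                                             String(version) as CFString)
    if !ok { print("Failed to set HDRGainMapVersion") }
    return metadata
}
Whether this works for Apple's actual HDR gain map keys still comes down to the original question of whether that namespace is private.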
Hi,
I'm trying to add Game Center challenges and activities to an already-live game, but they are not appearing in-game for testing, in Game Center, or in the Games app.
I know the game is set up with GameKit entitlements, since this is a live game and it has working leaderboards and achievements.
I've updated to Tahoe beta 8, added a challenge and an activity in App Store Connect, added them to a new distribution, and added that distribution to 'Add for Review'.
I'm using Unity and the Apple Unity plug-in.
Not sure what other steps I'm missing.
Thanks
Using Swift, how do I resize individual SKSpriteNodes for various iPad device sizes?
Currently, I use theScene.scaleMode = .resizeFill for scaling the entire SKScene and it works as advertised. Please note that .aspectFill does not solve the challenge described below.
My challenge is to resize individual SKSpriteNodes (that are components of the overall SKScene) based on the size of the iOS device; for example, iPad mini <--> iPad Pro.
Right now, I hard-code the sizes of these nodes, but I realize this is not the optimum approach.
Hard coding is frowned upon, so I am looking for a more dynamic method that works based on device size.
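To make the question concrete, this is the direction I'm thinking of (a rough sketch; the 20% proportion and node names are made up): size each node as a fraction of the scene's current size and recompute it in didChangeSize(_:), so the same code works on an iPad mini and an iPad Pro.
import SpriteKit

class GameScene: SKScene {
    let player = SKSpriteNode(color: .blue, size: .zero)

    override func didMove(to view: SKView) {
        addChild(player)
        layoutNodes()
    }

    override func didChangeSize(_ oldSize: CGSize) {
        super.didChangeSize(oldSize)
        layoutNodes()   // re-run whenever .resizeFill changes the scene size
    }

    private func layoutNodes() {
        guard size.width > 0, size.height > 0 else { return }
        // Make the player 20% of the scene's smaller dimension, keeping it square.
        let side = min(size.width, size.height) * 0.2
        player.size = CGSize(width: side, height: side)
        player.position = CGPoint(x: size.width / 2, y: size.height / 2)
    }
}
Is something along these lines the recommended approach, or is there a better pattern?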
Topic: Graphics & Games
SubTopic: SpriteKit
Hi
I've noticed an issue in the Metal HUD, but I'm not sure whether it is a bug or whether this behavior is intentional.
The Metal HUD has an option to send its data to the system log in a raw format, where the numbers look like
metal-HUD: ,,,,,...,
https://developer.apple.com/documentation/xcode/monitoring-your-metal-apps-graphics-performance/
If the HUD is displayed, this works just fine, but when the HUD is hidden (with shift-F9), it still sends data to the system log, except the numbers are always the same and never update.
I would expect it to log the data whether or not the HUD is displayed; as it is, this leads to incorrect FPS calculations.
Here is an example of the system log entries when the HUD is not visible:
Topic: Graphics & Games
SubTopic: General
For some reason I can't disable the Graphics HUD.
It's not really a problem for development, but it's also showing in TestFlight builds,
for example when swiping down on the keyboard, but also in some other places.
Of course I tried disabling the toggle, but even when it's off the HUD is still showing. Even completely disabling Developer Mode does not work.
Is this a known issue?
I've already scrolled through possibly every Google search result, but I can't figure out how to solve this.
Topic: Graphics & Games
SubTopic: General
I recently needed to develop an application that obtains the window list, which requires Screen Recording permission. Apple's official documentation mentions using the two functions CGPreflightScreenCaptureAccess and CGRequestScreenCaptureAccess to request the permission, and these functions are stated to be available since version 10.15. However, when I used them on a device running macOS 10.15.7, I encountered the errors shown in the attached screenshot. I used the nm tool to inspect the symbols in CoreGraphics.framework and found that these two functions were not present. Could you help me understand why this is happening?
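For what it's worth, the workaround I'm testing (a sketch based on my assumption that the symbols simply aren't exported on that 10.15.7 build) is to resolve the function at runtime with dlsym and fall back when it's missing:
import Darwin
import CoreGraphics

// Sketch: look the function up at runtime so the app still launches on systems
// where CoreGraphics doesn't export CGPreflightScreenCaptureAccess.
typealias PreflightScreenCaptureFn = @convention(c) () -> Bool

func preflightScreenCaptureAccessIfAvailable() -> Bool? {
    guard let handle = dlopen(nil, RTLD_NOW),
          let symbol = dlsym(handle, "CGPreflightScreenCaptureAccess") else {
        return nil  // Symbol not present on this OS build.
    }
    let preflight = unsafeBitCast(symbol, to: PreflightScreenCaptureFn.self)
    return preflight()
}

// Usage: treat nil as "permission API not available here".
if let granted = preflightScreenCaptureAccessIfAvailable() {
    print("Screen capture access granted: \(granted)")
} else {
    print("CGPreflightScreenCaptureAccess is not exported on this system.")
}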
Topic: Graphics & Games
SubTopic: General
Hi,
I'm working on a very simple app that tries to read a coordinates card and paste the data into different fields. The card's layout is columns 1-10, rows A-J, and a two-digit number in each cell. In my app, I have a field for each of those cells (A1, A2, ...). I want the OCR to read the card and paste the info, but I just can't get it to work. I have two problems. First, the camera won't close: it stays open until I press the SAVE button (this is not good, because a user could take 3, 4, 5... pictures of the same card with, maybe, different results, and then which one is the good one?). Second, after I press save, I can see the OCR kind of works (the console prints all the data it read), but the info is not pasted at all.
Any idea? I know it's hard to tell what's wrong, but I've tried ChatGPT and what it suggests just doesn't work.
This is the code from the scan view:
import SwiftUI
import Vision
import VisionKit
struct ScanCardView: UIViewControllerRepresentable {
    @Binding var scannedCoordinates: [String: String]
    var useLettersForColumns: Bool
    var numberOfColumns: Int
    var numberOfRows: Int

    @Environment(\.presentationMode) var presentationMode

    func makeUIViewController(context: Context) -> VNDocumentCameraViewController {
        let scannerVC = VNDocumentCameraViewController()
        scannerVC.delegate = context.coordinator
        return scannerVC
    }

    func updateUIViewController(_ uiViewController: VNDocumentCameraViewController, context: Context) {}

    func makeCoordinator() -> Coordinator {
        return Coordinator(self)
    }

    class Coordinator: NSObject, VNDocumentCameraViewControllerDelegate {
        let parent: ScanCardView

        init(_ parent: ScanCardView) {
            self.parent = parent
        }

        func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFinishWith scan: VNDocumentCameraScan) {
            print("Scan completed, processing image...")
            guard scan.pageCount > 0, let image = scan.imageOfPage(at: 0).cgImage else {
                print("Could not get the image from the scan.")
                controller.dismiss(animated: true, completion: nil)
                return
            }
            recognizeText(from: image)
            DispatchQueue.main.async {
                print("Finishing the OCR process and closing the camera.")
                controller.dismiss(animated: true, completion: nil)
            }
        }

        func documentCameraViewControllerDidCancel(_ controller: VNDocumentCameraViewController) {
            print("Scan cancelled by the user.")
            controller.dismiss(animated: true, completion: nil)
        }

        func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFailWithError error: Error) {
            print("Scan error: \(error.localizedDescription)")
            controller.dismiss(animated: true, completion: nil)
        }

        private func recognizeText(from image: CGImage) {
            let request = VNRecognizeTextRequest { (request, error) in
                guard let observations = request.results as? [VNRecognizedTextObservation], error == nil else {
                    print("Text recognition error: \(String(describing: error?.localizedDescription))")
                    DispatchQueue.main.async {
                        self.parent.presentationMode.wrappedValue.dismiss()
                    }
                    return
                }
                let recognizedStrings = observations.compactMap { observation in
                    observation.topCandidates(1).first?.string
                }
                print("Recognized text: \(recognizedStrings)")
                let filteredCoordinates = self.filterValidCoordinates(from: recognizedStrings)
                DispatchQueue.main.async {
                    print("Coordinates detected after filtering: \(filteredCoordinates)")
                    self.parent.scannedCoordinates = filteredCoordinates
                }
            }
            request.recognitionLevel = .accurate

            let handler = VNImageRequestHandler(cgImage: image, options: [:])
            DispatchQueue.global(qos: .userInitiated).async {
                do {
                    try handler.perform([request])
                    print("OCR completed and data processed.")
                } catch {
                    print("Error performing the OCR request: \(error.localizedDescription)")
                }
            }
        }

        private func filterValidCoordinates(from strings: [String]) -> [String: String] {
            var result: [String: String] = [:]
            print("Text before filtering: \(strings)")
            for string in strings {
                let trimmedString = string.replacingOccurrences(of: " ", with: "")
                if parent.useLettersForColumns {
                    let pattern = "^[A-J]\\d{1,2}$" // Letters A-J followed by 1 or 2 digits
                    if trimmedString.range(of: pattern, options: .regularExpression) != nil {
                        print("Valid coordinate detected (letters): \(trimmedString)")
                        result[trimmedString] = "Valor" // Test assignment
                    }
                } else {
                    let pattern = "^[1-9]\\d{0,1}$" // Numbers only, from 1 to 99
                    if trimmedString.range(of: pattern, options: .regularExpression) != nil {
                        print("Valid coordinate detected (numbers): \(trimmedString)")
                        result[trimmedString] = "Valor"
                    }
                }
            }
            print("Final coordinates after filtering: \(result)")
            return result
        }
    }
}
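For completeness, this is roughly how the parent view is supposed to consume the binding (a sketch; the view name and per-cell storage are made up), in case the problem is on this side rather than in the scanner:
import SwiftUI

// Sketch: present the scanner in a sheet and merge whatever it writes into the
// binding into the per-cell fields.
struct CoordinateCardView: View {
    @State private var scannedCoordinates: [String: String] = [:]
    @State private var cells: [String: String] = [:]   // "A1" -> "27", etc.
    @State private var showScanner = false

    var body: some View {
        VStack {
            Button("Scan card") { showScanner = true }
            // One text field per cell would go here, bound to `cells`.
        }
        .sheet(isPresented: $showScanner) {
            ScanCardView(scannedCoordinates: $scannedCoordinates,
                         useLettersForColumns: true,
                         numberOfColumns: 10,
                         numberOfRows: 10)
        }
        .onChange(of: scannedCoordinates) { newValue in
            // Copy the OCR results into the editable fields.
            for (key, value) in newValue {
                cells[key] = value
            }
        }
    }
}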
}