Post not yet marked as solved
I'm using VNImageRequestHandler to recognize text using the camera. In my handler I'm using the topLeft, topRight, bottomLeft, and bottomRight properties, which I scale to the size of the canvas to draw an outline around each text object. When I do this, the Y position and height are correct, but the width is slightly smaller, and the X position centers the outline around the text. Any idea why this would be a different size?
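For reference when debugging coordinate mismatches like this: Vision's corner points are normalized to the unit square with the origin at the bottom-left, so each point has to be scaled and Y-flipped before drawing in a top-left-origin view. A minimal sketch of that conversion (the canvas size here is a hypothetical stand-in, not from the post):

```swift
import Foundation

/// Map a Vision normalized point (unit square, origin bottom-left)
/// to a point in a top-left-origin canvas of the given size.
func convertFromVision(_ normalized: CGPoint, canvasSize: CGSize) -> CGPoint {
    CGPoint(x: normalized.x * canvasSize.width,
            y: (1 - normalized.y) * canvasSize.height)
}

let canvas = CGSize(width: 400, height: 300)
let p = convertFromVision(CGPoint(x: 0.25, y: 0.75), canvasSize: canvas)
print(p.x, p.y) // 100.0 75.0
```

If the outline width still comes out too small after a conversion like this, comparing against the observation's boundingBox may show whether the corner points themselves are tighter than expected.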
Question
What may cause this crash on iOS 14 or 15?
Here are images of the crash stack trace below:
Code to draw a graph of data. Each abscissa has an ordinate range to be displayed as a line segment.
All data, i.e., scaled points, are verified to be within the declared analysisView.bounds. strokeColors are verified to be within the range 0...1.
BTW, no, I don't need animation for this static data, but CALayer seemed to require more coding, and I found fewer code examples for it.
The code below has two problems:
1) it doesn't draw into the window
2) the weird behavior of min/max
The first is why I am posting. What am I missing?
import AppKit

class AnalysisViewController: NSViewController {
    @IBOutlet var analysisView: NSView!
    var ranges = [ClosedRange<Double>]()
    var ordinateMinimum = CGFloat()
    var ordinateMaximum = CGFloat()
    var ordinateScale = CGFloat()
    let abscissaMinimum: CGFloat = 1
    let abscissaMaximum: CGFloat = 92
    let abscissaScale: CGFloat = 800/92
    let shapeLayer = CAShapeLayer()
    var points = [CGPoint]() // created just to verify (in the debugger) that points are within analysisView.bounds

    func generateGraph() {
        // ranges.append(0...0) // inexplicably FAILS! @ ordinateMinimum/ordinateMaximum if it replaces "if N == 1" below
        // ranges.append(0.1...0.1) // a non-zero range does not fail but becomes the min or max, therefore not useful
        for N in 1...92 {
            if let element = loadFromJSON(N) {
                if N == 1 { ranges.append(element.someFunction()) } // ranges[0] is an unused placeholder
                // if N == 1 { ranges.append(0...0) } // inexplicably FAILS! @ ordinateMinimum/ordinateMaximum if replacing the line above
                ranges.append(element.someFunction())
            } else {
                ranges.append(0...0) // some elements have no range data
            }
        }
        ordinateMinimum = CGFloat(ranges.min(by: { $0 != 0...0 && $1 != 0...0 && $0.lowerBound < $1.lowerBound })!.lowerBound)
        ordinateMaximum = CGFloat(ranges.max(by: { $0 != 0...0 && $1 != 0...0 && $0.upperBound < $1.upperBound })!.upperBound)
        ordinateScale = analysisView.frame.height / (ordinateMaximum - ordinateMinimum)
        for range in 1..<ranges.count {
            shapeLayer.addSublayer(CALayer()) // a sublayer for each abscissa range so that .strokeColor can be assigned to each
            // shapeLayer.frame = CGRect(x: 0, y: 0, width: analysisView.frame.width, height: analysisView.frame.height) // might be unnecessary
            let path = CGMutablePath() // a new path for every sublayer, i.e., range that is displayed as a line segment
            points.append(CGPoint(x: CGFloat(range) * abscissaScale, y: CGFloat(ranges[range].lowerBound) * ordinateScale))
            path.move(to: points.last!)
            points.append(CGPoint(x: CGFloat(range) * abscissaScale, y: CGFloat(ranges[range].upperBound) * ordinateScale))
            path.addLine(to: points.last!)
            path.closeSubpath()
            shapeLayer.path = path
            // shapeLayer.strokeColor = CGColor.white
            let r: CGFloat = 1.0 / CGFloat(range)
            let g: CGFloat = 0.3 / CGFloat(range)
            let b: CGFloat = 0.7 / CGFloat(range)
            // print("range: \(range)\tr: \(r)\tg: \(g)\tb: \(b)") // just to verify 0...1 values
            shapeLayer.strokeColor = CGColor(srgbRed: r, green: g, blue: b, alpha: 1.0)
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        view.wantsLayer = true // one of these (view or analysisView) must be unnecessary
        view.frame = CGRect(x: 0, y: 0, width: 840, height: 640)
        analysisView.wantsLayer = true
        analysisView.frame = CGRect(x: 0, y: 0, width: 840, height: 640)
        generateGraph()
    }
}
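A side note on the min/max oddity flagged above: min(by:)/max(by:) expect a strict ordering predicate, so folding the "skip 0...0 placeholders" test into the comparator gives unpredictable results. A sketch (hypothetical data, following the post's 0...0 placeholder convention) that filters first and then orders:

```swift
import Foundation

// Hypothetical ranges; 0...0 marks "no range data" and must not affect min/max.
let ranges: [ClosedRange<Double>] = [0...0, 2.5...7.0, 0...0, 1.0...3.0, 4.0...9.5]

// Drop the placeholders once, then take plain min/max of the bounds.
let realRanges = ranges.filter { $0 != 0...0 }
let ordinateMinimum = realRanges.map(\.lowerBound).min() ?? 0
let ordinateMaximum = realRanges.map(\.upperBound).max() ?? 0

print(ordinateMinimum, ordinateMaximum) // 1.0 9.5
```

Separating filtering from ordering also makes the all-placeholder case explicit (the `?? 0` fallback) instead of force-unwrapping a comparator whose behavior is undefined.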
I have a CALayer with many sublayers. Those sublayers have multiple CABasicAnimations added to them.
Now I'd like to render the whole layer subtree into a UIImage at a specific point of the animation time. How could I achieve that?
The only thing I found is the CALayer.render(in:) method, but the docs say that this method ignores Core Animation :<
I'm trying to add an animated CALayer over my video and export it with AVAssetExportSession.
I'm animating the layer using CABasicAnimation set to my custom property.
However, it seems that func draw(in ctx: CGContext) is never called during an export for my custom layer, and no animation is played.
I found out that animating standard properties like borderWidth works fine, but custom properties are ignored.
Can someone help with that?
func export(standard: Bool) {
    print("Exporting...")
    let composition = AVMutableComposition()
    //composition.naturalSize = CGSize(width: 300, height: 300)

    // Video track
    let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                                 preferredTrackID: CMPersistentTrackID(1))!
    let _videoAssetURL = Bundle.main.url(forResource: "emptyVideo", withExtension: "mov")!
    let _emptyVideoAsset = AVURLAsset(url: _videoAssetURL)
    let _emptyVideoTrack = _emptyVideoAsset.tracks(withMediaType: .video)[0]
    try! videoTrack.insertTimeRange(CMTimeRange(start: .zero, duration: _emptyVideoAsset.duration),
                                    of: _emptyVideoTrack, at: .zero)

    // Root layer
    let rootLayer = CALayer()
    rootLayer.frame = CGRect(origin: .zero, size: composition.naturalSize)

    // Video layer
    let video = CALayer()
    video.frame = CGRect(origin: .zero, size: composition.naturalSize)
    rootLayer.addSublayer(video)

    // Animated layer
    let animLayer = CustomLayer()
    animLayer.progress = 0.0
    animLayer.frame = CGRect(origin: .zero, size: composition.naturalSize)
    rootLayer.addSublayer(animLayer)
    animLayer.borderColor = UIColor.green.cgColor
    animLayer.borderWidth = 0.0

    let key = standard ? "borderWidth" : "progress"
    let anim = CABasicAnimation(keyPath: key)
    anim.fromValue = 0.0
    anim.toValue = 50.0
    anim.duration = 6.0
    anim.beginTime = AVCoreAnimationBeginTimeAtZero
    anim.isRemovedOnCompletion = false
    animLayer.add(anim, forKey: nil)

    // Video composition
    let videoComposition = AVMutableVideoComposition(propertiesOf: composition)
    videoComposition.renderSize = composition.naturalSize
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)

    // Animation tool
    let animTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: video,
                                                       in: rootLayer)
    videoComposition.animationTool = animTool

    // Video instruction > Basic
    let videoInstruction = AVMutableVideoCompositionInstruction()
    videoInstruction.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
    videoComposition.instructions = [videoInstruction]

    // Video instruction > Layer instructions
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
    videoInstruction.layerInstructions = [layerInstruction]

    // Session
    let exportSession = AVAssetExportSession(asset: composition,
                                             presetName: AVAssetExportPresetHighestQuality)!
    exportSession.videoComposition = videoComposition
    exportSession.shouldOptimizeForNetworkUse = true
    var url = FileManager.default.temporaryDirectory.appendingPathComponent("\(arc4random()).mov")
    url = URL(fileURLWithPath: url.path)
    exportSession.outputURL = url
    exportSession.outputFileType = .mov
    _session = exportSession
    exportSession.exportAsynchronously {
        if let error = exportSession.error {
            print("Fail. \(error)")
        } else {
            print("Ok")
            print(url)
            DispatchQueue.main.async {
                let vc = AVPlayerViewController()
                vc.player = AVPlayer(url: url)
                self.present(vc, animated: true) {
                    vc.player?.play()
                }
            }
        }
    }
}
CustomLayer:
class CustomLayer: CALayer {
    @NSManaged var progress: CGFloat

    override init() {
        super.init()
    }

    override init(layer: Any) {
        let l = layer as! CustomLayer
        super.init(layer: layer)
        print("Copy. \(progress) \(l.progress)")
        self.progress = l.progress
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
    }

    override class func needsDisplay(forKey key: String) -> Bool {
        let needsDisplayKeys = ["progress"]
        if needsDisplayKeys.contains(key) {
            return true
        }
        return super.needsDisplay(forKey: key)
    }

    override func display() {
        print("Display. \(progress) | \(presentation()?.progress)")
        super.display()
    }

    override func draw(in ctx: CGContext) {
        // Save / restore ctx
        ctx.saveGState()
        defer { ctx.restoreGState() }
        print("Draw. \(progress)")
        ctx.move(to: .zero)
        ctx.addLine(to: CGPoint(x: bounds.size.width * progress,
                                y: bounds.size.height * progress))
        ctx.setStrokeColor(UIColor.red.cgColor)
        ctx.setLineWidth(40)
        ctx.strokePath()
    }
}
Here's a full sample project if someone is interested:
https://www.dropbox.com/s/evkm60wkeb2xrzh/BrokenAnimation.zip?dl=0
After updating to Beta 9, 12.0 Beta (21A5543b), you can no longer change the resolution!
Before, in Beta 8, it worked without problems; I restarted the MacBook Pro without having changed the resolution.
Does anyone have an idea, or is it a bug??
I have already reported it as such.
Since updating to iOS 15, our app is crashing in the first controller, right after the splash screen (log attached).
Note that if the app goes to the background right after becoming active, it only crashes after I switch it back to the foreground.
The crash drops me right into int main() without any relevant info to help me debug, except (maybe) this line in the console: "[Unknown process name] CGBitmapContextCreateWithCallbacks: failed to create CGAutomaticBitmapContextInfo."
I've tried using an all-exceptions breakpoint, but it didn't help debugging further. It seems to be an issue in the storyboard and not a specific line of code.
At start, the app has a tabBarController which contains some controllers. The first one goes through viewDidLoad, viewWillAppear, viewWillLayoutSubviews, viewDidLayoutSubviews and then dies before reaching viewDidAppear.
Would appreciate some help.
2021-09-27_10-34-54.4864_+0300-9047901996b77362f454f5d9fb324f4ab72e4b5d.crash
Hi There,
I just bought an IIYAMA G-MASTER GB3461WQSU-B1, which has a native resolution of 3440x1440, but my MacBook Pro (Retina, 15-inch, Mid 2014) doesn't recognise the monitor and I can't run it at its full resolution. It is currently recognised as a PL3461WQ 34.5-inch (2560 x 1440).
Is there anything I can do to get it sorted, or will I have to wait until this monitor's driver is added to the Big Sur list?
Thanks
I have doubts about the Core Image coordinate system, the way transforms are applied, and the way the image extent is determined. I couldn't find much in the documentation or on the internet, so I tried the following code to rotate a CIImage and display it in a UIImageView. As I understand it, there is no absolute coordinate system in Core Image. The bottom-left corner of an image is supposed to be (0,0). But my experiments show something else.
I created a prototype to rotate a CIImage by pi/10 radians on each button click. Here is the code I wrote.
override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view.
    imageView.contentMode = .scaleAspectFit
    let uiImage = UIImage(contentsOfFile: imagePath)
    ciImage = CIImage(cgImage: (uiImage?.cgImage)!)
    imageView.image = uiImage
}

private var currentAngle = CGFloat(0)
private var ciImage: CIImage!
private var ciContext = CIContext()

@IBAction func rotateImage() {
    let extent = ciImage.extent
    let translate = CGAffineTransform(translationX: extent.midX, y: extent.midY)
    let uiImage = UIImage(contentsOfFile: imagePath)
    currentAngle = currentAngle + CGFloat.pi/10
    let rotate = CGAffineTransform(rotationAngle: currentAngle)
    let translateBack = CGAffineTransform(translationX: -extent.midX, y: -extent.midY)
    let transform = translateBack.concatenating(rotate.concatenating(translate))
    ciImage = CIImage(cgImage: (uiImage?.cgImage)!)
    ciImage = ciImage.transformed(by: transform)
    NSLog("Extent \(ciImage.extent), Angle \(currentAngle)")
    let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent)
    imageView.image = UIImage(cgImage: cgImage!)
}
But in the logs, I see that the extents of the images have negative origin.x and origin.y. What does that mean? Relative to what is it negative, and where exactly is (0,0) then? What exactly is the image extent, and how does the Core Image coordinate system work?
2021-09-24 14:43:29.280393+0400 CoreImagePrototypes[65817:5175194] Metal API Validation Enabled
2021-09-24 14:43:31.094877+0400 CoreImagePrototypes[65817:5175194] Extent (-105.0, -105.0, 1010.0, 1010.0), Angle 0.3141592653589793
2021-09-24 14:43:41.426371+0400 CoreImagePrototypes[65817:5175194] Extent (-159.0, -159.0, 1118.0, 1118.0), Angle 0.6283185307179586
2021-09-24 14:43:42.244703+0400 CoreImagePrototypes[65817:5175194] Extent (-159.0, -159.0, 1118.0, 1118.0), Angle 0.9424777960769379
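For what it's worth, the extent stays in the image's own coordinate space: (0,0) is the lower-left corner of the original image, and after a transform the extent is just the axis-aligned bounding box of the transformed rectangle, which is why its origin goes negative once you rotate about the center. The arithmetic can be reproduced without Core Image at all; this sketch assumes an 800x800 source image, which lines up with the logged extents above (Core Image additionally pads the extent outward to integral pixels):

```swift
import Foundation

let width = 800.0, height = 800.0
let angle = Double.pi / 10   // same step as in the post
let cx = width / 2, cy = height / 2

// Rotate each corner of the image rect about its center and take the
// axis-aligned bounding box -- that box is what the new extent reports.
let corners = [(0.0, 0.0), (width, 0.0), (width, height), (0.0, height)]
let rotated = corners.map { corner -> (x: Double, y: Double) in
    let dx = corner.0 - cx, dy = corner.1 - cy
    return (x: cx + dx * cos(angle) - dy * sin(angle),
            y: cy + dx * sin(angle) + dy * cos(angle))
}
let minX = rotated.map { $0.x }.min()!
let maxX = rotated.map { $0.x }.max()!
print(minX, maxX - minX) // about -104.03 and 1008.06
```

So the negative origin is relative to the original image's lower-left corner, not to any view or screen; views impose their own coordinate system only when you render the CIImage into them.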
I have some uncontroversial code that used to work perfectly in iOS 12 (and I think 13, from memory):
let cropRect = mapVC.view.frame.inset(by: mapVC.view.safeAreaInsets).inset(by: mapVC.mapEdgeInsets)
let mapRenderer = UIGraphicsImageRenderer(bounds: cropRect)
let img = mapRenderer.image(actions: { _ in
    mapVC.view.drawHierarchy(in: mapVC.view.bounds, afterScreenUpdates: true)
})
Now I'm getting a partially rendered image (only the top 8-10 px or so; the rest is white/blank), and this message in the debugger:
[Unknown process name] vImageConvert_AnyToAny - failed width = 0 height = 1 dst component = 16bit float dstLayout = ByteOrder16Little dstBytesPerRow = 0 src component = 16bit integer srcLayout = ByteOrder16Little srcBytesPerRow = 0
There seems to be very little online about this error, and I have no clue where to start with it.
The issue arises when calling the .jpegData or .pngData methods on mapRenderer.
No other changes have been made to the view hierarchy or the rendered view since this was working just fine.
Any suggestions?
Xcode 12.4, iOS 14.3 running on an iPhone XS Max.
I really love Quartz Composer from Apple, which is quite an old app, not updated for years. It works well on my mid-2015 MacBook Pro, but not on my new M1 iMac. Does anyone know how to run this great app on my new machine? Thank you!
I'm trying to rotate a page 180° in a PDF file.
I nearly get it, but the page is also mirrored horizontally.
Some images to illustrate:
Initial page:
Result after rotation (see code): it is rotated 180° BUT mirrored horizontally as well:
The expected result:
It is just as if it was rotated 180° around the x axis of the page, whereas I need to rotate it 180° around the z axis (perpendicular to the page). It is probably the result of
writeContext!.scaleBy(x: 1, y: -1)
I have tried a lot of changes to the transform, translate, and scale parameters, including removing calls to some of them, to no avail.
@IBAction func createNewPDF(_ sender: UIButton) {
    var originalPdfDocument: CGPDFDocument!
    let urls = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
    let documentsDirectory = urls[0]

    // read some pdf from bundle for test
    if let path = Bundle.main.path(forResource: "Test", ofType: "pdf"),
       let pdf = CGPDFDocument(URL(fileURLWithPath: path) as CFURL) {
        originalPdfDocument = pdf
    } else { return }

    // create new pdf
    let modifiedPdfURL = documentsDirectory.appendingPathComponent("Modified.pdf")
    guard let page = originalPdfDocument.page(at: 1) else { return } // Starts at page 1
    var mediaBox: CGRect = page.getBoxRect(CGPDFBox.mediaBox) // mediaBox which will set the height and width of page
    let writeContext = CGContext(modifiedPdfURL as CFURL, mediaBox: &mediaBox, nil) // get the context
    var pageRect: CGRect = page.getBoxRect(CGPDFBox.mediaBox) // get the page rect
    writeContext!.beginPage(mediaBox: &pageRect)
    let m = page.getDrawingTransform(.mediaBox, rect: mediaBox, rotate: 0, preserveAspectRatio: true) // Because of rotate 0, no effect; changed rotate to 180, then get an empty page
    writeContext!.translateBy(x: 0, y: pageRect.size.height)
    writeContext!.scaleBy(x: 1, y: -1)
    writeContext!.concatenate(m)
    writeContext!.clip(to: pageRect)
    writeContext!.drawPDFPage(page) // draw content into page
    writeContext!.endPage() // end the current page
    writeContext!.closePDF()
}
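One possible way around the mirroring: a 180° rotation about the page center has determinant +1, so it cannot flip anything, unlike scaleBy(x: 1, y: -1), whose determinant is -1. The point mapping is easy to sanity-check with plain arithmetic before touching CGContext; the 612x792 page size below is hypothetical:

```swift
import Foundation

let pageWidth = 612.0, pageHeight = 792.0

// A 180° rotation about the page center maps p to (w - x, h - y).
// It is equivalent to translateBy(x: w, y: h) followed by rotate(by: .pi),
// and, unlike a y-flip, it preserves orientation (no mirroring).
func rotate180(_ x: Double, _ y: Double) -> (x: Double, y: Double) {
    (pageWidth - x, pageHeight - y)
}

let corner = rotate180(0, 0)                           // bottom-left -> top-right
let center = rotate180(pageWidth / 2, pageHeight / 2)  // center is a fixed point
print(corner.x, corner.y) // 612.0 792.0
print(center.x, center.y) // 306.0 396.0
```

In the posted code this would correspond to replacing the translateBy/scaleBy pair with writeContext!.translateBy(x: pageRect.width, y: pageRect.height) followed by writeContext!.rotate(by: .pi) (an untested sketch; the drawing transform m may still need adjusting).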
Note: This is a follow up of a previous thread,
https://developer.apple.com/forums/thread/688436
I want to get graphics card details using Objective-C.
I used the IOServiceGetMatchingServices API for it; it works fine on an Intel processor machine, but it doesn't return model info on an M1 machine. Here is the code I was using:
CFMutableDictionaryRef matchDict = IOServiceMatching("IOPCIDevice");
io_iterator_t iterator;
if (IOServiceGetMatchingServices(kIOMasterPortDefault, matchDict,
                                 &iterator) == kIOReturnSuccess)
{
    io_registry_entry_t regEntry;
    while ((regEntry = IOIteratorNext(iterator))) {
        CFMutableDictionaryRef serviceDictionary;
        if (IORegistryEntryCreateCFProperties(regEntry,
                                              &serviceDictionary,
                                              kCFAllocatorDefault,
                                              kNilOptions) != kIOReturnSuccess)
        {
            IOObjectRelease(regEntry);
            continue;
        }
        const void *GPUModel = CFDictionaryGetValue(serviceDictionary, @"model");
        if (GPUModel != nil) {
            if (CFGetTypeID(GPUModel) == CFDataGetTypeID()) {
                NSString *modelName = [[NSString alloc] initWithData:
                                       (NSData *)GPUModel encoding:NSASCIIStringEncoding];
                NSLog(@"GPU Model: %@", modelName);
                [modelName release];
            }
        }
        CFRelease(serviceDictionary);
        IOObjectRelease(regEntry);
    }
    IOObjectRelease(iterator);
}
I have a macOS app that captures screen images. The first time I run this application, a dialog is shown directing the user to give my app Screen Recording permission. Is there a way I can trigger this dialog earlier and detect whether the permission was granted?
Objective and steps
Use the device's front TrueDepth camera (iPhone 12 Pro Max) to capture image data, Live Photo data, and metadata (e.g. depth data and a portrait effects matte) using AVFoundation capture principles into an AVCapturePhoto object. Save this captured object with its metadata to the PHPhotoLibrary using a PHAssetCreationRequest object API.
Result
Image data, Live Photo data, disparity depth data (640x480 px), and some metadata are stored with the image through the PHPhotoLibrary API, but the high-quality portrait effects matte is lost.
Notes
Upon receiving the AVCapturePhoto object from the AVFoundation capture delegate API, I can verify that the AVCapturePhoto object contains a high-quality portrait effects matte member object. Using the object's fileDataRepresentation() to obtain a Data blob, writing that to a test file URL, and reading it back, I can see that the flattened-data API writes and restores the portrait effects matte.
However, it gets stripped from the data when writing through the PHPhotoLibrary asset creation request. When later picking the image, e.g. with PHPickerViewController + PHPickerResult, and peeking into the object's data with CGImageSourceCopyAuxiliaryDataInfoAtIndex(), I can see that there is a data dictionary only for the key kCGImageAuxiliaryDataTypeDisparity; kCGImageAuxiliaryDataTypeDepth and kCGImageAuxiliaryDataTypePortraitEffectsMatte are both missing.
Please, does anyone have more detailed information on whether this is possible at all? Thanks!
I am using AVCapturePhoto to capture an image. In didFinishProcessingPhoto I am getting the image data using fileDataRepresentation. But when I convert this data to a UIImage, it loses most of its metadata.
I need to draw a Bézier path on the UIImage and still maintain the metadata.
Is there any way to do this?
I have an array of CGPoint containing various coordinates. I need to apply the filter to the x coordinates and y coordinates separately. I am not sure how to do this the Swift way, so currently I unpack the coordinates like this:
var xvalues: [CGFloat] = []
var yvalues: [CGFloat] = []
if observation1.count == 5 {
    for n in observation1 {
        xvalues.append(n.x)
        yvalues.append(n.y)
    }
    filter1 = convolve(xvalues, sgfilterwindow5_order2)
    filter2 = convolve(yvalues, sgfilterwindow5_order2)
}
I am sure there is a more elegant way to do this. How to do this without unpacking the array?
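For the question above: map with a key path does the unpacking in one expression, without a manual loop. A minimal sketch with stand-in points (the convolve function and filter kernel from the post are assumed to exist and are not shown):

```swift
import Foundation

// Stand-in for the CGPoint observations from the post.
let observation1 = [CGPoint(x: 1, y: 10), CGPoint(x: 2, y: 20),
                    CGPoint(x: 3, y: 30), CGPoint(x: 4, y: 40),
                    CGPoint(x: 5, y: 50)]

// A key-path map splits the coordinates without an explicit loop.
let xvalues = observation1.map(\.x)
let yvalues = observation1.map(\.y)

print(xvalues) // [1.0, 2.0, 3.0, 4.0, 5.0]
print(yvalues) // [10.0, 20.0, 30.0, 40.0, 50.0]
```

The two map results can then feed filter1 = convolve(xvalues, sgfilterwindow5_order2) exactly as in the original code.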
Steps to reproduce:
Download and open the attached "ARKitTest" project
Build and deploy the project to iOS
Reproduced with: 2018.4.31f1, 2019.4.20f1, 2020.2.4f1, 2021.1.0b5, 2021.2.0a4
Reproducible with iPhone 12 Pro (iOS 14.2.1)
[Bug] iOS app crashes after some time (EXC_BAD_ACCESS) · Issue #716 · Unity-Technologies/arfoundation-samples (github.com)
https://github.com/Unity-Technologies/arfoundation-samples/issues/716
Unity Issue Tracker - [iOS 14] EXC_BAD_ACCESS crash from com.apple.arkit.ardisplaylink (unity3d.com)
https://issuetracker.unity3d.com/issues/ios-14-exc-bad-access-crash-from-com-dot-apple-dot-arkit-dot-ardisplaylink
Hi. The following code creates and initialises a simple progress bar. How would I add a callback to update the progress bar?
float prog = 0.5;
CFNumberRef cfNum = CFNumberCreate(kCFAllocatorDefault, kCFNumberFloatType, &prog);
const void *keys[] = { kCFUserNotificationAlertHeaderKey, kCFUserNotificationProgressIndicatorValueKey };
const void *vals[] = { CFSTR("Progress Bar"), cfNum };
CFDictionaryRef dict = CFDictionaryCreate(0, keys, vals,
                                          sizeof(keys)/sizeof(*keys),
                                          &kCFTypeDictionaryKeyCallBacks,
                                          &kCFTypeDictionaryValueCallBacks);
SInt32 nRes = 0; // receives the creation status (was undeclared in the original snippet)
CFUserNotificationRef pDlg = NULL;
pDlg = CFUserNotificationCreate(kCFAllocatorDefault, 0,
                                kCFUserNotificationPlainAlertLevel,
                                &nRes, dict);