I'm using a PDFKitView: NSViewRepresentable to present a PDF page in SwiftUI. It seems PDFView already provides some useful built-in functions in its context menu. However, the highlight manipulation functions are not functional: I can neither delete a highlight annotation nor change the color/type of the highlight annotation under the pointer.
"Add Note" and the other page-display functions work well.
Selecting any of the highlight options just reloads the page.
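For reference, a minimal sketch of the wrapper I'm using (simplified; the names and setup here are illustrative, the real view has more configuration):
import SwiftUI
import PDFKit

struct PDFKitView: NSViewRepresentable {
    let document: PDFDocument

    func makeNSView(context: Context) -> PDFView {
        // PDFView supplies the built-in context menu mentioned above.
        let view = PDFView()
        view.document = document
        view.autoScales = true
        return view
    }

    func updateNSView(_ nsView: PDFView, context: Context) {
        nsView.document = document
    }
}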
The demo code below presents a view with a number that increments once per second.
The problem: each time the data in the VM is updated, the menu bar (the top line of macOS) is also refreshed, which interrupts the user's operation.
Here is the code:
App.swift
struct MemoryTestApp: App {
    // The App struct observes the VM, so every count update re-evaluates
    // this body, including the .commands menu.
    @ObservedObject var appVM = VM()

    var body: some Scene {
        WindowGroup {
            ContentView(vm: appVM)
                .environmentObject(appVM)
        }
        .commands {
            CommandMenu("Menu1") {
                Button("Test Btn") {
                    print("Test Btn pressed!")
                }
            }
        }
    }
}
VM
class VM: ObservableObject {
    @Published var count = 0

    func startCount() async {
        while true {
            // Publish the update on the main thread.
            DispatchQueue.main.async {
                self.count += 1
                if self.count > 1000 {
                    self.count = 0
                }
            }
            // Wait one second between updates.
            do {
                try await Task.sleep(nanoseconds: 1_000_000_000)
            } catch {}
        }
    }
}
View
struct ContentView: View {
    @ObservedObject var vm: VM

    var body: some View {
        Text("Hello, world! \(vm.count)")
            .frame(width: 100, height: 100)
            .onAppear {
                Task { await vm.startCount() }
            }
    }
}
macOS: Monterey
Xcode: 13.2.1
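A workaround that seems to help (my sketch, not an official fix): stop the App struct itself from observing the VM, so count changes no longer re-evaluate the scene body and its .commands. Here the property wrapper is simply dropped; ContentView still observes the VM and keeps updating.
struct MemoryTestApp: App {
    // Plain stored property: the App no longer subscribes to
    // objectWillChange, so count updates don't rebuild the menu.
    private let appVM = VM()

    var body: some Scene {
        WindowGroup {
            ContentView(vm: appVM)
                .environmentObject(appVM)
        }
        .commands {
            CommandMenu("Menu1") {
                Button("Test Btn") {
                    print("Test Btn pressed!")
                }
            }
        }
    }
}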
Hi,
Is it possible to achieve the same functionality as OpenCV's "template matching" using Swift and the Vision framework?
I'd rather not use ML-based object recognition because of accuracy concerns.
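As far as I can tell, Vision doesn't expose template matching directly. For comparison, here is a brute-force sum-of-squared-differences sketch in plain Swift over grayscale buffers (all names are mine; OpenCV's matchTemplate is far faster, but this shows the idea):
import CoreGraphics

// Render a CGImage into an 8-bit grayscale buffer.
func grayscalePixels(of image: CGImage) -> (pixels: [UInt8], width: Int, height: Int)? {
    let width = image.width, height = image.height
    var pixels = [UInt8](repeating: 0, count: width * height)
    let ok = pixels.withUnsafeMutableBytes { buffer -> Bool in
        guard let ctx = CGContext(data: buffer.baseAddress,
                                  width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: width,
                                  space: CGColorSpaceCreateDeviceGray(),
                                  bitmapInfo: CGImageAlphaInfo.none.rawValue) else { return false }
        ctx.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    return ok ? (pixels, width, height) : nil
}

// Slide the template over the source and return the top-left offset
// with the smallest sum of squared differences.
func matchTemplate(source: CGImage, template: CGImage) -> CGPoint? {
    guard let s = grayscalePixels(of: source),
          let t = grayscalePixels(of: template),
          t.width <= s.width, t.height <= s.height else { return nil }
    var best = (score: Int.max, x: 0, y: 0)
    for y in 0...(s.height - t.height) {
        for x in 0...(s.width - t.width) {
            var score = 0
            for ty in 0..<t.height {
                for tx in 0..<t.width {
                    let d = Int(s.pixels[(y + ty) * s.width + (x + tx)])
                          - Int(t.pixels[ty * t.width + tx])
                    score += d * d
                }
                if score >= best.score { break }   // early exit: already worse
            }
            if score < best.score { best = (score, x, y) }
        }
    }
    return CGPoint(x: best.x, y: best.y)
}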
Hi, I'm using the CIAreaMinMax filter to get the brightest and darkest color information from an image.
Normally the filter should output an image with two pixels (minimum and maximum). However, when I apply this filter to an image that contains two similar colors, the result is incorrect. The symptom is that the two pixels' red channels are swapped, while the G and B values are fine.
The test image I am using is a PNG containing only two colors:
RGB(37, 62, 88) and RGB(10, 132, 255).
After processing, the code outputs an image containing two pixels:
RGB(10, 62, 88) and RGB(37, 132, 255).
Below is the test code for a Swift playground:
import Cocoa
import CoreImage
import CoreGraphics

func saveImage(_ image: NSImage, atUrl url: URL) {
    let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil)!
    let newRep = NSBitmapImageRep(cgImage: cgImage)
    newRep.size = image.size
    let pngData = newRep.representation(using: .png, properties: [:])!
    try! pngData.write(to: url)
}

let sourceImage = CIImage(contentsOf: URL(fileURLWithPath: "/Users/ABC/Downloads/test.png"))!

let filter = CIFilter(name: "CIAreaMinMax")!
filter.setValue(sourceImage, forKey: kCIInputImageKey)
let civ = CIVector(x: sourceImage.extent.minX, y: sourceImage.extent.minY,
                   z: sourceImage.extent.width, w: sourceImage.extent.height)
filter.setValue(civ, forKey: kCIInputExtentKey)
let filteredImage = filter.outputImage!

// Disable color management so pixel values pass through unchanged.
let context = CIContext(options: [.workingColorSpace: kCFNull!])
let filteredCGImageRef = context.createCGImage(filteredImage, from: filteredImage.extent)
let output = NSImage(cgImage: filteredCGImageRef!,
                     size: NSSize(width: filteredImage.extent.width,
                                  height: filteredImage.extent.height))
saveImage(output, atUrl: URL(fileURLWithPath: "/Users/ABC/Downloads/output.png"))
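For debugging, the two output pixels can also be read back numerically instead of going through PNG encoding (a sketch; I'm assuming CIAreaMinMax's usual 2x1 output with the minimum pixel first):
var bitmap = [UInt8](repeating: 0, count: 8)   // 2 pixels x RGBA8
context.render(filteredImage,
               toBitmap: &bitmap,
               rowBytes: 8,
               bounds: filteredImage.extent,
               format: .RGBA8,
               colorSpace: nil)   // nil: no color matching on output
print("min pixel:", Array(bitmap[0...3]), "max pixel:", Array(bitmap[4...7]))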
Hi,
Let's say we have a fixed-size window that shows a scrollable image.
Now I want a feature that lets the user scale the image, then scroll around and view the scaled image inside the window.
The problem is, once I scale up the image, the scrollable area doesn't seem large enough to cover the whole image.
struct ContentView: View {
    var body: some View {
        ScrollView([.horizontal, .vertical], showsIndicators: true) {
            Image("test")
                .scaleEffect(2)
        }
        .frame(width: 500, height: 500)
    }
}
With scaleEffect(1) everything is fine. With scaleEffect(2) the image is drawn twice as large, but I cannot reach every corner of the scaled image.
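From what I understand, scaleEffect only transforms the rendering; the layout size the ScrollView sees is unchanged. A sketch of the usual workaround, giving the content a frame that matches the scaled size (the 300x200 asset size here is an assumption):
struct ScaledImageView: View {
    let scale: CGFloat = 2
    let baseSize = CGSize(width: 300, height: 200)   // assumed size of "test"

    var body: some View {
        ScrollView([.horizontal, .vertical], showsIndicators: true) {
            Image("test")
                .scaleEffect(scale)
                // Give the scroll content a frame matching the scaled size,
                // so the ScrollView knows how far it can scroll.
                .frame(width: baseSize.width * scale,
                       height: baseSize.height * scale)
        }
        .frame(width: 500, height: 500)
    }
}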
Thanks in advance!
Hi,
I am testing the CIAreaMinimum filter in Swift on macOS.
The problem I'm hitting: when run from Xcode, the code works fine. But when I open the built app's path and run it independently, the filter does not work.
As the demo code shows, when running from Xcode, pressing the button shows a color block in the interface. If I build the app and execute it independently, pressing the button triggers nothing: no color block, no crash, no error popup.
Appreciate your help!
macOS: 10.15.6
Xcode: Version 12.2 (12B45b)
struct ContentView: View {
    @ObservedObject var testvm = testVM()

    var body: some View {
        VStack {
            Button(
                action: { testvm.ShowMinimum() },
                label: { Text("Button") }
            )
            // ToNSImage() is a custom CIImage extension (not shown here).
            Image(nsImage: testvm.origin.ToNSImage())
            // The filter output is a single pixel, so scale it up to make it visible.
            Image(nsImage: testvm.result.ToNSImage()).scaleEffect(100)
        }
    }
}
class testVM: ObservableObject {
    @Published var origin: CIImage = CIImage()
    @Published var result: CIImage = CIImage()

    init() {
        // If the resource can't be found, origin silently stays empty.
        guard let url = Bundle.main.url(forResource: "eye", withExtension: "png") else {
            return
        }
        origin = CIImage(contentsOf: url)!
    }

    func ShowMinimum() {
        let filter = CIFilter(name: "CIAreaMinimum")!
        filter.setValue(origin, forKey: kCIInputImageKey)
        // inputExtent expects a CIVector rather than a raw CGRect.
        filter.setValue(CIVector(cgRect: origin.extent), forKey: kCIInputExtentKey)
        result = filter.outputImage!
    }
}
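One thing worth ruling out (a sketch under my assumptions; ToNSImage() isn't shown above): render the filter output through an explicit CIContext into a CGImage-backed NSImage. That forces the filter chain to actually execute and removes any implicit CIImage-to-NSImage conversion from the picture.
import AppKit
import CoreImage

func renderToNSImage(_ image: CIImage,
                     context: CIContext = CIContext()) -> NSImage? {
    // createCGImage runs the filter chain; a nil result here would point
    // at the filter/render step rather than the SwiftUI layer.
    guard let cg = context.createCGImage(image, from: image.extent) else { return nil }
    return NSImage(cgImage: cg,
                   size: NSSize(width: image.extent.width,
                                height: image.extent.height))
}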