I have recently been working on a custom editable text view, and I would like to adopt the new iOS 13 selection gestures through the newly added UITextInteraction API.
I started by subclassing UIView and making it conform to the UITextInput protocol. So far I have implemented all the code for converting between selection ranges and screen points (hit-testing and rectangles for ranges), rendered the selection and caret manually, and used a tap gesture recognizer to update the caret position whenever a tap happens.
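For context, here is a minimal sketch of the geometry side of that conformance, the part UITextInteraction relies on for hit-testing. The helpers `offset(forPoint:)` and `rect(forCharacterAt:)` and the `MyTextPosition` wrapper are hypothetical names I am using for illustration, not anything from a real API:

```swift
import UIKit

// Illustrative sketch (not the full UITextInput conformance): the two
// geometry methods that map between screen points and text positions.
// `offset(forPoint:)` and `rect(forCharacterAt:)` are assumed layout
// helpers on the view; `MyTextPosition` wraps a character offset.
extension MyTextView {
    // Hit-testing: map a touch point to the closest text position.
    func closestPosition(to point: CGPoint) -> UITextPosition? {
        let offset = self.offset(forPoint: point)
        return MyTextPosition(offset: offset)
    }

    // A thin rectangle at the position's leading edge; the system uses
    // this to place the caret once UITextInteraction is driving input.
    func caretRect(for position: UITextPosition) -> CGRect {
        guard let position = position as? MyTextPosition else { return .zero }
        var rect = self.rect(forCharacterAt: position.offset)
        rect.size.width = 2  // typical caret thickness
        return rect
    }
}
```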
Then I followed the WWDC session on UITextInteraction, adding the three lines of code suggested there:
let interaction = UITextInteraction()
interaction.textInput = myTextView
myTextView.addInteraction(interaction)
Strangely, there seems to be no effect when I try it out in the iOS simulator. Where in my code should I look to troubleshoot this?
A separate but closely related question: which visual features are provided automatically when the UITextInput protocol is implemented correctly? Is it true that I have to use my own tap gesture recognizer to implement the tap-to-move-caret behavior? That feels like something the system could handle. Also, is there an Apple-provided API for drawing the caret and selection, including the selection handles (the two vertical bars with round knobs at the selection boundaries)? I could draw them myself to mimic the system's default look, but that feels like an anti-pattern.
It turns out that my mistake was using my own gesture recognizer: it interferes with the recognizers that UITextInteraction installs. I am not very familiar with how multiple gesture recognizers interact, so I clearly have much to learn there.
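The fix can be sketched as follows: remove the hand-rolled tap recognizer so it no longer competes with the interaction's own recognizers, and create the interaction with an explicit editing mode. The `caretTapRecognizer` property is a hypothetical name for the custom recognizer described above:

```swift
import UIKit

// Sketch of the corrected setup: let UITextInteraction install and
// manage its own gesture recognizers instead of adding a competing one.
func configureTextInteraction(on textView: MyTextView) {
    // Remove the hand-rolled tap recognizer (hypothetical property),
    // which was stealing taps from the interaction's recognizers.
    if let tap = textView.caretTapRecognizer {
        textView.removeGestureRecognizer(tap)
    }

    // .editable installs the full editing gesture set (taps to move the
    // caret, drags on selection handles, and so on).
    let interaction = UITextInteraction(for: .editable)
    interaction.textInput = textView
    textView.addInteraction(interaction)
}
```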
Furthermore, adopting UITextInteraction relieves me from drawing my own caret, selection highlight, and so on, which is really great.