I'm trying to apply a CIBumpDistortion Core Image filter to a view that contains a UILabel (my storyLabel). The goal is to create a visual bump/magnifying glass effect over the text.
However, despite my attempts, the filter doesn't seem to render at all. The view and the label appear as normal, with no distortion effect. I've tried adjusting the filter parameters and reviewing the view hierarchy, but without success. I also haven't been able to find clear documentation or examples for applying this filter to a UIView's layer.
//
//  TVView.swift
//  Mistery
//
//  Created by Joje on 31/07/25.
//

import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit
import AVFoundation

final class TVView: UIView {

    // Text animation properties
    private var textAnimationTimer: Timer?
    private var fullTextToAnimate: String = ""
    private var currentCharIndex: Int = 0

    // Static-video properties
    private var player: AVQueuePlayer?
    private var playerLayer: AVPlayerLayer?
    private var playerLooper: AVPlayerLooper?

    var onNextButtonTap: () -> Void = {}

    // MARK: - Subviews

    // TV image
    private(set) lazy var tvImageView: UIImageView = {
        let imageView = UIImageView()
        imageView.translatesAutoresizingMaskIntoConstraints = false
        imageView.image = UIImage(named: "tvFinal")
        imageView.contentMode = .scaleAspectFit
        return imageView
    }()

    // Text that scrolls inside the TV
    private(set) lazy var storyLabel: UILabel = {
        let label = UILabel()
        label.translatesAutoresizingMaskIntoConstraints = false
        //label.backgroundColor = .gray
        label.textColor = .red
        label.font = UIFont(name: "MeltedMonster", size: 30)
        label.textAlignment = .left
        label.numberOfLines = 0
        label.text = ""
        return label
    }()

    private(set) lazy var nextButton: UIButton = {
        let button = UIButton(type: .system)
        button.translatesAutoresizingMaskIntoConstraints = false
        //button.backgroundColor = .darkGray
        button.addTarget(self, action: #selector(didPressNextButton), for: .touchUpInside)
        return button
    }()

    // MARK: - Lifecycle

    override init(frame: CGRect) {
        super.init(frame: frame)
        backgroundColor = .black
        setupVideoPlayer()
        addSubviews()
        setupConstraints()
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        playerLayer?.frame = tvImageView.frame.insetBy(dx: tvImageView.frame.width * 0.05, dy: tvImageView.frame.height * 0.18)
        setupFisheyeEffect()
    }

    private func setupFisheyeEffect() {
        // Create the filter
        guard let filter = CIFilter(name: "CIBumpDistortion") else {
            print("Error: could not create CIBumpDistortion filter")
            return
        }
        storyLabel.layer.shouldRasterize = true
        storyLabel.layer.rasterizationScale = UIScreen.main.scale

        // Set the parameters
        filter.setDefaults()
        // Center of the effect
        let center = CIVector(x: storyLabel.bounds.midX, y: storyLabel.bounds.midY)
        filter.setValue(center, forKey: kCIInputCenterKey)
        // Distortion radius
        filter.setValue(storyLabel.bounds.width, forKey: kCIInputRadiusKey)
        // Distortion intensity
        filter.setValue(7, forKey: kCIInputScaleKey)

        storyLabel.layer.filters = [filter]
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // MARK: - Button actions

    @objc private func didPressNextButton() {
        onNextButtonTap()
    }

    @objc private func animateNextCharacter() {
        guard currentCharIndex < fullTextToAnimate.count else {
            textAnimationTimer?.invalidate()
            return
        }
        let currentTextIndex = fullTextToAnimate.index(fullTextToAnimate.startIndex, offsetBy: currentCharIndex)
        let partialText = String(fullTextToAnimate[...currentTextIndex])
        storyLabel.text = partialText
        currentCharIndex += 1
    }

    public func updateStoryText(with text: String) {
        textAnimationTimer?.invalidate()
        storyLabel.text = ""
        fullTextToAnimate = text
        currentCharIndex = 0
        textAnimationTimer = Timer.scheduledTimer(timeInterval: 0.12, target: self, selector: #selector(animateNextCharacter), userInfo: nil, repeats: true)
    }

    // MARK: - Setup methods

    private func setupVideoPlayer() {
        guard let videoURL = Bundle.main.url(forResource: "static-video", withExtension: "mov") else {
            print("Error: could not find the video file static-video.mov")
            return
        }
        let playerItem = AVPlayerItem(url: videoURL)
        player = AVQueuePlayer(playerItem: playerItem)
        // LINE WITH A POSSIBLE ERROR
        playerLooper = AVPlayerLooper(player: player!, templateItem: playerItem)
        playerLayer = AVPlayerLayer(player: player)
        playerLayer?.videoGravity = .resizeAspectFill
        if let layer = playerLayer {
            self.layer.addSublayer(layer)
        }
        player?.play()
    }

    private func addSubviews() {
        self.addSubview(storyLabel)
        self.addSubview(tvImageView)
        self.addSubview(nextButton)
    }

    private func setupConstraints() {
        NSLayoutConstraint.activate([
            // TV image
            tvImageView.centerXAnchor.constraint(equalTo: centerXAnchor),
            tvImageView.centerYAnchor.constraint(equalTo: centerYAnchor),
            tvImageView.widthAnchor.constraint(equalTo: widthAnchor),
            // TV text
            storyLabel.centerXAnchor.constraint(equalTo: tvImageView.centerXAnchor, constant: -50),
            storyLabel.centerYAnchor.constraint(equalTo: tvImageView.centerYAnchor, constant: -25),
            storyLabel.widthAnchor.constraint(equalTo: tvImageView.widthAnchor, multiplier: 0.35),
            storyLabel.heightAnchor.constraint(equalTo: tvImageView.heightAnchor, multiplier: 0.42),
            // TV button
            nextButton.topAnchor.constraint(equalTo: tvImageView.centerYAnchor, constant: -25),
            nextButton.centerXAnchor.constraint(equalTo: self.centerXAnchor, constant: 190),
            nextButton.widthAnchor.constraint(equalToConstant: 100),
            nextButton.heightAnchor.constraint(equalToConstant: 160)
        ])
    }
}

#Preview {
    ViewController()
}
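From what I can tell, CALayer's filters property is documented as not supported on iOS, which may be why nothing renders. The workaround I'm experimenting with is to snapshot the label, run CIBumpDistortion off-screen, and show the result in an image view. A minimal sketch of that idea (distortedLabelImageView is a hypothetical UIImageView, not part of the code above):

    // Sketch: render the label into a CIImage, apply CIBumpDistortion off-screen,
    // and display the distorted result in a separate image view.
    private func applyBumpDistortionToLabel() {
        // Snapshot the label's current contents.
        let renderer = UIGraphicsImageRenderer(bounds: storyLabel.bounds)
        let snapshot = renderer.image { context in
            storyLabel.layer.render(in: context.cgContext)
        }
        guard let inputImage = CIImage(image: snapshot) else { return }

        // Configure the filter (note that Core Image uses a bottom-left origin).
        let filter = CIFilter.bumpDistortion()
        filter.inputImage = inputImage
        filter.center = CGPoint(x: inputImage.extent.midX, y: inputImage.extent.midY)
        filter.radius = Float(inputImage.extent.width / 2)
        filter.scale = 0.7

        guard let output = filter.outputImage else { return }

        // Render and display the distorted snapshot.
        let ciContext = CIContext()
        if let cgImage = ciContext.createCGImage(output, from: inputImage.extent) {
            distortedLabelImageView.image = UIImage(cgImage: cgImage) // hypothetical image view
        }
    }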
I'm experiencing a specific issue where, when using any of the macOS 26 Tahoe betas with Low Power Mode enabled and Vsync in fullscreen, my application's framerate gets limited to a hard 30 fps. I have not experienced this on any older OS. For example, Low Power Mode on macOS 13.6 Ventura with Vsync in fullscreen lets my application run at a full 60 fps without issues.
Is this a bug or a change in the behavior of Low Power Mode on Tahoe?
My application is 3D, runs at 60 fps, and is sensitive to tearing, so I need Vsync, and it is mostly used in fullscreen. Low Power Mode is the default on many Macs, so the default experience on Tahoe is currently a halved 30 fps. There also seem to be inconsistencies in which machines this happens on, but older OSes are always fine.
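One way to confirm whether the cap really correlates with the power state is a small runtime diagnostic like this (a sketch, not a fix):

import Foundation

// Diagnostic sketch: log the current Low Power Mode state and watch for changes.
final class PowerStateLogger {
    private var observer: NSObjectProtocol?

    func start() {
        print("Low Power Mode enabled:", ProcessInfo.processInfo.isLowPowerModeEnabled)
        observer = NotificationCenter.default.addObserver(
            forName: .NSProcessInfoPowerStateDidChange,
            object: nil,
            queue: .main
        ) { _ in
            print("Low Power Mode changed:", ProcessInfo.processInfo.isLowPowerModeEnabled)
        }
    }
}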
Hi,
I am a Multimedia and Graphics researcher, and I am wondering whether the OpenGL API and drivers will be removed after the version 26 Apple OS releases:
macOS 26
iOS 26
iPadOS 26
visionOS 26
I am asking because most of the libraries I use depend on OpenGL: CGAL, libigl, immediate-mode UI libraries, nanovg, nanogui, Bullet Physics. Transitioning to Vulkan or Metal while using and learning those libraries is simply not viable.
I am the sole developer, so I just wanted to ask this directly.
Regards.
I haven't looked at screen savers in a long time because of Apple's lack of will (or resources?) to provide a public version of the modern, private SDK that Apple itself has been using for a very long time now.
I'm now looking at the Screen Saver pane in System Settings (the What-If version of System Preferences in an alternate universe where all screens are in portrait mode).
In macOS Sequoia, it seems like 3rd party screen savers are not welcome, considering that they are relegated to the "Other" section at the bottom of the list and that you have to click Show All before any 3rd party screen savers appear.
I also had a quick look at macOS Tahoe Beta 3, and it looks like all the real screen savers are gone (3rd party and the ones from Apple: Hello, Message, Flurry, etc.), or at least it takes a Nobel Prize to find them (and the Search field is not useful).
I tried to install a 3rd party screen saver on macOS Tahoe Beta 3, and it doesn't show up in the list.
To summarize:
No public access to modern APIs AFAIK.
UI that is hostile to 3rd party screen savers on macOS Sequoia.
Apparently only screensavers that are slideshows or movies curated by Apple in macOS Tahoe b3.
Hence the question:
Is there any future for screen savers on macOS?
Because if there's none, I won't waste my time trying to update some old screen savers.
In swipe-driven games, a first downward swipe starting near the home indicator can trigger Reachability, even when using preferredScreenEdgesDeferringSystemGestures = .bottom and prefersHomeIndicatorAutoHidden = true. This causes the app to slide down to half screen and breaks gameplay. Please consider an API to temporarily defer Reachability while a custom gesture is active (similar to existing system gesture deferral), without disabling accessibility globally.
Environment:
Devices: iPhones with Home Indicator (Face ID)
Why this matters:
Bottom-origin swipes are core in many games (flick shots, slingshot, physics toss, bottom sheets). Current workarounds degrade UX and discoverability, and players still accidentally trigger Reachability.
Feedback Assistant Post
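For reference, a minimal sketch of how the existing deferral API is typically wired up (this is the workaround mentioned above; it defers the home-indicator gesture but does not prevent Reachability):

import UIKit

// Sketch: defer bottom-edge system gestures and hide the home indicator.
// Reachability can still trigger on a downward swipe near the indicator.
final class GameViewController: UIViewController {

    override var preferredScreenEdgesDeferringSystemGestures: UIRectEdge {
        // Require a second swipe before system gestures at the bottom edge take effect.
        return .bottom
    }

    override var prefersHomeIndicatorAutoHidden: Bool {
        return true
    }

    func gameplayGestureStateChanged() {
        // Ask UIKit to re-query the two properties above when gameplay state changes.
        setNeedsUpdateOfScreenEdgesDeferringSystemGestures()
        setNeedsUpdateOfHomeIndicatorAutoHidden()
    }
}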
Hello,
We are experiencing an issue with apparent lag or latency when interacting with objects in Unity using the XR Interaction Toolkit. Even when interactables are configured with the Movement Type set to Instantaneous, objects exhibit a noticeable delay when following hand movement. This issue is particularly apparent when the hand moves at higher speeds.
We would like to know: is any additional configuration required, or is there something else we need to do to resolve this?
Thank you very much for your assistance.
I want to display the refresh rate on the screen of my iPhone (14 Pro Max), but the App Store apps "Floating Clock", "Floating Clock Premium", and "Screen Renewal Rate - RefreshRate Realtime" can each only show one of two values, 60Hz or 120Hz. I'd like to display finer-grained values (1Hz, 10Hz, tens of Hz, or at least more than three levels). Does anyone know how to do it?
When using simd_inverse functions, the same input yields different results on different devices.
On the Mac mini with M4 Pro and on the M3 Ultra, the result is
But on the Mac mini with M2 Ultra, the result is
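For context, the kind of check involved looks roughly like this (a sketch with a made-up matrix, not the actual data); since small floating-point differences across chips can be expected, comparing A * inverse(A) against the identity with a tolerance is more meaningful than expecting bit-identical results:

import simd

// Sketch: invert a matrix and measure how far A * inverse(A) is from the identity.
let a = simd_float4x4(rows: [
    SIMD4<Float>(4, 7, 2, 0),
    SIMD4<Float>(3, 6, 1, 0),
    SIMD4<Float>(2, 5, 3, 0),
    SIMD4<Float>(0, 0, 0, 1)
])

let inv = simd_inverse(a)
let product = a * inv
let identity = matrix_identity_float4x4

// Maximum absolute deviation from the identity matrix.
var maxError: Float = 0
for column in 0..<4 {
    for row in 0..<4 {
        maxError = max(maxError, abs(product[column][row] - identity[column][row]))
    }
}
print("max |A * inv(A) - I| =", maxError)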
Hi all,
I'm in the early stages of testing a game that will have real-time device-to-device communication using Game Center.
I've read something about setting up GameCenter "sandbox" accounts to test prior to the game's release, but I can't find anything reliable about how to do so.
I've found lots of web pages which purport to describe how to do this task, but what they describe seems to be outdated, as the steps don't appear to be available in modern versions of iOS. I'm testing on iOS 18 and 26.
Document search isn't helping--I mostly find information on how to set up StoreKit sandbox accounts, which I assume are different things altogether--so if you could point me towards an article that allows me to test a new, unreleased GameKit app between devices, I'd appreciate it.
People are developing Game Center apps, so I know it's possible...
advTHANKSance
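In case it helps frame the question: as far as I understand, a development-signed build authenticates against the Game Center test environment automatically, and the entry point is just the usual local-player authentication (a minimal sketch):

import GameKit
import UIKit

// Sketch: authenticate the local player. My understanding (not confirmed by docs
// I can find) is that development builds hit the Game Center sandbox/test environment.
func authenticateLocalPlayer(presentingFrom viewController: UIViewController) {
    GKLocalPlayer.local.authenticateHandler = { authViewController, error in
        if let authViewController {
            // Game Center needs to show its sign-in UI.
            viewController.present(authViewController, animated: true)
        } else if GKLocalPlayer.local.isAuthenticated {
            print("Authenticated as \(GKLocalPlayer.local.alias)")
        } else if let error {
            print("Game Center authentication failed: \(error.localizedDescription)")
        }
    }
}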
Dear Apple Developer Support,
I hope this message finds you well. I am a game developer looking to integrate Game Center into our game. Before proceeding, I would like to understand the analytical capabilities available post-integration. Specifically, I am interested in tracking detailed metrics such as:
The number of new player downloads driven by Game Center features (e.g., friend challenges, leaderboards, or achievements).
Data on user engagement and conversions originating from Game Center interactions.
I have explored App Store Connect’s App Analytics but would appreciate clarification on whether Game Center-specific data (e.g., referrals from challenges or social features) is available to measure growth and optimize our strategies.
If such data is accessible, could you guide me on how to view it? If not, are there alternative methods or plans to incorporate this in the future?
Thank you for your time and assistance. I look forward to your response.
Best regards,
Bella
Hi, I'm trying to use metal-cpp, but I have compile errors:
ISO C++ requires the name after '::' to be found in the same scope as the name before '::'
metal-cpp/Foundation/NSSharedPtr.hpp(162):
template <class _Class>
_NS_INLINE NS::SharedPtr<_Class>::~SharedPtr()
{
    if (m_pObject)
    {
        m_pObject->release();
    }
}
Use of old-style cast
metal-cpp/Foundation/NSObject.hpp(149):
template <class _Dst>
_NS_INLINE _Dst NS::Object::bridgingCast(const void* pObj)
{
#ifdef __OBJC__
    return (__bridge _Dst)pObj;
#else
    return (_Dst)pObj;
#endif // __OBJC__
}
The Xcode project was generated using CMake:
target_compile_features(${MODULE_NAME} PRIVATE cxx_std_20)
target_compile_options(${MODULE_NAME}
    PRIVATE
        "-Wgnu-anonymous-struct"
        "-Wold-style-cast"
        "-Wdtor-name"
        "-Wpedantic"
        "-Wno-gnu"
)
Maybe I need to set some CMake flags for the C++ compiler?
Hi,
I'm facing a problem where I cannot scan one specific Code 39 barcode with the Vision framework. We have multiple barcodes on a label, and almost all of the Code 39 barcodes can be scanned, but not this particular one.
One more piece of information: the barcode that Vision does not recognize can be read by a general-purpose barcode scanner.
Has anyone faced a similar situation?
Are there particular conditions (size, contrast, etc.) that make a barcode hard to scan with Vision?
Regards,
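For context, the request is set up roughly like this (a simplified sketch of the usage, restricted to Code 39 and assuming a recent OS where the symbologies property is available):

import Vision

// Sketch: detect Code 39 barcodes in a CGImage with the Vision framework.
func detectCode39(in cgImage: CGImage) {
    let request = VNDetectBarcodesRequest { request, error in
        guard let results = request.results as? [VNBarcodeObservation] else { return }
        for barcode in results {
            print("Symbology: \(barcode.symbology.rawValue), payload: \(barcode.payloadStringValue ?? "nil")")
        }
    }
    // Limit detection to Code 39 only.
    request.symbologies = [.code39]

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
    } catch {
        print("Vision request failed: \(error)")
    }
}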
Does anyone know why the following call fails?
CGPDFOperatorTableSetCallback(operatorTable, "ID", &callback);
The PDF specification seems to indicate that ID is an operator?
BTW what is the proper topic/subtopic for questions about Quartz? Wasn't sure what topic on the new forums to post this under.
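For reference, the surrounding setup is essentially the standard operator-table scan, shown here as a Swift sketch with a different operator ("BI"), since registering "ID" is exactly what's in question:

import CoreGraphics

// Sketch: scan a PDF page's content stream with an operator table.
// Callbacks are C function pointers, so no state is captured here.
func scanOperators(of page: CGPDFPage) {
    guard let table = CGPDFOperatorTableCreate() else { return }

    // Register a callback for the "BI" (begin inline image) operator.
    CGPDFOperatorTableSetCallback(table, "BI") { scanner, info in
        print("Encountered BI operator")
    }

    let contentStream = CGPDFContentStreamCreateWithPage(page)
    let scanner = CGPDFScannerCreate(contentStream, table, nil)
    _ = CGPDFScannerScan(scanner)

    CGPDFScannerRelease(scanner)
    CGPDFContentStreamRelease(contentStream)
    CGPDFOperatorTableRelease(table)
}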
I am currently working on a project where I aim to overlay the camera feed obtained via the Apple Vision Pro's camera access API to align perfectly with the user's perspective in Vision Pro.
However, I've noticed a discrepancy between the captured camera feed and the actual view from the user's perspective. My assumption is that this difference might be related to lens distortion correction or the lack thereof.
Unfortunately, I'm not entirely sure how the camera feed is being corrected or processed. For the overlay, I'm using a typical 3D CG approach where a texture captured from the background plane is projected onto a surface. In this case, the "background capture" is the camera feed that I'm projecting.
If anyone has insights or suggestions on how to align the camera feed with the user's perspective more accurately, any information would be greatly appreciated.
Attached image shows what difference between the camera feed and actual user's perspective field of view.
I want to align the camera feed image to the user's perspective.
The update was the only change I can see, and I did it just the other day. I logged in to iCloud.com and also checked Apple Developer; I didn't see any additional terms that needed to be accepted. I make sure to log into iCloud on the simulator, and it seems to stay logged in until I call fetchSavedGames, which I bail out of after 20 seconds because it times out. When I then go back and check my account in Settings, it's asking me to "Sign in to iCloud" again. It does work properly on a device. So the simulator doesn't stay logged into iCloud, and it seems like fetchSavedGames on GKLocalPlayer is what resets that. Any help or suggestions would be appreciated. Thanks.
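For reference, the call in question boils down to this (a trimmed sketch of what I'm running, including the 20-second bail-out):

import Foundation
import GameKit

// Sketch: fetch saved games with a manual timeout; works on device, hangs on the simulator.
func loadSavedGames() {
    var didFinish = false

    GKLocalPlayer.local.fetchSavedGames { savedGames, error in
        DispatchQueue.main.async {
            didFinish = true
            if let error {
                print("fetchSavedGames failed: \(error.localizedDescription)")
            } else {
                print("Fetched \(savedGames?.count ?? 0) saved game(s)")
            }
        }
    }

    // Bail out if nothing comes back within 20 seconds.
    DispatchQueue.main.asyncAfter(deadline: .now() + 20) {
        if !didFinish {
            print("fetchSavedGames timed out after 20 seconds")
        }
    }
}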
Hi everyone,
I've been working with the autoAdjustmentFilters provided by Core Image, which include filters like CIHighlightShadowAdjust, CIVibrance, and CIToneCurve. However, I've noticed that the results differ significantly from the "Auto" enhancement feature in the Photos app. In the Photos app, the Auto function seems to adjust multiple parameters such as contrast, exposure, white balance, highlights, and shadows in a more advanced manner.
Is there an API or a framework available that can replicate the more sophisticated "Auto" adjustments as seen in the Photos app? Or would I need to manually combine filters (like CIExposureAdjust, CIWhitePointAdjust, etc.) to approximate this functionality?
Any insights or recommendations on how to achieve this would be greatly appreciated. Thank you!
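For context, this is roughly how the auto-adjustment filters get applied (a condensed sketch of the straightforward chaining approach):

import CoreImage
import UIKit

// Sketch: apply Core Image's suggested auto-adjustment filters to an image.
func autoEnhanced(_ image: UIImage) -> UIImage? {
    guard var ciImage = CIImage(image: image) else { return nil }

    // Ask Core Image for its suggested enhancement filters
    // (typically red-eye, face balance, vibrance, tone curve, etc.).
    let filters = ciImage.autoAdjustmentFilters(options: nil)
    for filter in filters {
        filter.setValue(ciImage, forKey: kCIInputImageKey)
        if let output = filter.outputImage {
            ciImage = output
        }
    }

    let context = CIContext()
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}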
Our application uses Core Image to apply custom CIFilters to still images and video. I'm running into issues when the supplied image is large enough (>4096) that the image is automatically tiled. The simplest of these to describe is a filter that performs various mirroring effects - backwards, upside-down etc.
The implementation portion of the filter provides a sampler (src) and passes this into the kernel with an roiCallback that uses the destRect, inset by -1 in both dimensions:
return [mirrorsKernel applyWithExtent:[src extent]
                          roiCallback:^CGRect(int index, CGRect destRect) {
                              return CGRectInset(destRect, -1, -1);
                          }
                            arguments:@[src]];
The kernel is very simple, sampling from the X coordinate equal to the src width - current coordinate:
float4 backwards(sampler image, destination dest)
{
    float2 dc = dest.coord();
    dc.x = image.size().x - dc.x;
    return image.sample(image.transform(dc));
}
When this runs on an image that is wider than 4096, tiling happens, with the result being that destRect is not the entire image and therefore the resulting output image is incorrect. If the ROI uses [src extent] instead of destRect, the result is correct, but this will lead to serious performance issues when src gets too large.
All of this makes sense to me. What I'd like to know is if there is a way to handle this filter's requirements for sampling from the entire source while still limiting the ROI to maintain performance? I think the answer is probably no within our current structure and performance limits. But I wanted to see if there's anything we're missing.
I am aware that the simple kernel above can be replaced with an affine transform, which is an option for backwards and upside-down mirroring. We have other kernels in this filter that perform mirroring of either half of the source image or one quadrant of the source image. In these cases, I suppose it might be possible (up to a point) to create a custom ROI that is only the portion of the source that is being mirrored. We have not attempted that yet.
Any thoughts/input appreciated, thanks!
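A sketch of the custom-ROI idea mentioned above for the horizontal-mirror case (using the Swift API, under the assumption that the pixels needed for a given destRect are just that rect reflected across the image's vertical center line), computing a mirrored ROI per tile instead of returning the full extent:

import CoreImage

// Sketch: ROI for the "backwards" kernel. Each destination tile only needs the
// mirror image of destRect across the source's center, not the whole extent.
func applyBackwardsKernel(_ kernel: CIKernel, to src: CIImage) -> CIImage? {
    let extent = src.extent
    return kernel.apply(
        extent: extent,
        roiCallback: { _, destRect in
            // Reflect destRect across the vertical axis through the image center.
            let mirroredMinX = extent.minX + (extent.maxX - destRect.maxX)
            let mirrored = CGRect(x: mirroredMinX,
                                  y: destRect.minY,
                                  width: destRect.width,
                                  height: destRect.height)
            return mirrored.insetBy(dx: -1, dy: -1)
        },
        arguments: [src]
    )
}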
Technical Issue Report for Maple Tale App - Audio Format Compatibility
Dear Apple Technical Support Team,
I hope this message finds you well. My name is [Your Name], and I am part of the development team behind the Maple Tale app. We have encountered an issue with audio format compatibility within our app that we believe requires your assistance.
The issue pertains to the audio formats supported by our app. Currently, our app only supports WAV and OGG formats, which has led to a limitation in user experience. We are looking to expand our support to include additional formats such as MP3 and AAC, which are widely used by our user base.
To provide a clear understanding of the issue, I have outlined the steps to reproduce the problem:
Launch the Maple Tale app.
Proceed with the game normally.
Upon picking up equipment within the game, a warning box pops up indicating the audio format compatibility issue.
This warning box appears due to the app's inability to process audio files in formats other than WAV and OGG. We understand that this can be a significant hindrance to the user experience, and we are eager to resolve this as quickly as possible.
We have reviewed the documentation available on the official Apple Developer website but are still seeking clarification on the best practices for supporting a wider range of audio formats within our app. We would greatly appreciate any official recommendations or guidelines that could assist us in this endeavor.
Additionally, we are considering updating our app to inform users about the current audio format requirements and provide guidance on how to optimize their audio files for the best performance within our app. If there are any official documents or resources that we should reference when crafting this update, please let us know.
We appreciate your time and assistance in this matter and look forward to your guidance on how to best implement audio format support on the iOS platform.
Thank you once again for your support.
Warm regards,
I want to use Swift to write code that draws multiple polygons, and I would like to find some examples to use as references. Can anyone provide example code or tell me where I can find such examples? Thank you!
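One straightforward approach (a sketch using UIBezierPath and CAShapeLayer; the coordinates are arbitrary) looks like this:

import UIKit

// Sketch: draw several polygons by building a UIBezierPath for each
// and rendering it with a CAShapeLayer.
final class PolygonView: UIView {

    override init(frame: CGRect) {
        super.init(frame: frame)
        backgroundColor = .white
        addPolygon(points: [CGPoint(x: 40, y: 120), CGPoint(x: 100, y: 20), CGPoint(x: 160, y: 120)],
                   color: .systemRed)
        addPolygon(points: [CGPoint(x: 200, y: 40), CGPoint(x: 280, y: 40),
                            CGPoint(x: 280, y: 120), CGPoint(x: 200, y: 120)],
                   color: .systemBlue)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    private func addPolygon(points: [CGPoint], color: UIColor) {
        guard points.count >= 3 else { return }
        let path = UIBezierPath()
        path.move(to: points[0])
        for point in points.dropFirst() {
            path.addLine(to: point)
        }
        path.close()

        let shapeLayer = CAShapeLayer()
        shapeLayer.path = path.cgPath
        shapeLayer.fillColor = color.withAlphaComponent(0.4).cgColor
        shapeLayer.strokeColor = color.cgColor
        shapeLayer.lineWidth = 2
        layer.addSublayer(shapeLayer)
    }
}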
We have a pixel buffer pool managed by the system (created using the CVPixelBufferPoolCreate API). Each time we need a pixel buffer, we call CVPixelBufferPoolCreatePixelBuffer to create one from the pool. We then overwrite all pixels of the buffer, get the IOSurface from the buffer, and set that IOSurface as a CALayer's contents property in another process to display it. Everything works fine.
Now we want to optimize by only overwriting the pixels that changed between frames. The approach we'd like to take is: after we call CVPixelBufferPoolCreatePixelBuffer to create a buffer, we get the underlying IOSurface ID and map it to frame info. The next time we get the same IOSurface ID, we compare the current frame info with the one we stored and only update the changed pixels in the CVPixelBuffer.
However, no documentation mentions whether a CVPixelBuffer created using CVPixelBufferPoolCreatePixelBuffer still contains its previous pixels (the content from before it was returned to the pool). Do we have this guarantee? If not, is there any way to know whether the created buffer contains the previous pixels or not?
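For reference, the relevant part of our setup looks roughly like this (a trimmed sketch; the frame-info value is a placeholder), which is where we would key cached frame info off the IOSurface ID:

import CoreVideo
import IOSurface

// Sketch: create a buffer from the pool and read the underlying IOSurface ID,
// which we would use as the key for per-surface frame info. The open question
// is whether a recycled buffer still holds its previous pixel contents.
var frameInfoBySurfaceID: [IOSurfaceID: String] = [:]

func dequeueBuffer(from pool: CVPixelBufferPool) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer)
    guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

    if let surface = CVPixelBufferGetIOSurface(buffer)?.takeUnretainedValue() {
        let surfaceID = IOSurfaceGetID(surface)
        if let previousFrameInfo = frameInfoBySurfaceID[surfaceID] {
            // Same IOSurface as an earlier frame: candidate for a partial update,
            // assuming the recycled buffer's previous pixels are still intact.
            print("Reusing surface \(surfaceID), previous frame info: \(previousFrameInfo)")
        }
        frameInfoBySurfaceID[surfaceID] = "frame info placeholder"
    }
    return buffer
}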