Hello all,
I am designing a game for iPhone devices, including the SE, 7, and 7+ screen sizes. I have been tackling screen resolution and scaling issues for some time now, and I think I have all but one problem figured out.
I'll use a background image for this example (a portrait image that fills the entire screen). I create my scene with the following code:
SKScene * scene = [BeginScene sceneWithSize:CGSizeMake(375, 667)];
scene.scaleMode = SKSceneScaleModeAspectFill;

I can design my app for the screen size of a 7 and it will scale up to the 7+ and down to the SE. If I want a node placed in the center of the screen, I set its position to (187.5, 333.5). I also create the image with dimensions of 1125x2001, then scale it down from @3x to @2x and @1x. This way I essentially create one image at three resolutions, use one game design, and it works on all iPhones.
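For example, centering that background node looks roughly like this (the "background" asset name is just a stand-in for my real file):

// Centering relative to the 375x667 scene rather than hard-coding (187.5, 333.5)
SKSpriteNode *background = [SKSpriteNode spriteNodeWithImageNamed:@"background"];
background.position = CGPointMake(scene.size.width / 2, scene.size.height / 2);
[scene addChild:background];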
Here's the question: should I instead design the graphics with dimensions of 1242x2208 and set the SKScene size to 414x736, so that all graphics are scaled down (and thus look best on all devices)? And the second question: is this an acceptable and professional way of creating games with SpriteKit, or am I going about this all wrong?
Note: My current system is working well; I just want to check whether there is a better way of doing things.
Thanks a ton!
-Xhale
I don't know of any API contract that will tell you the correct answer.
Conceptually, two scalings are involved:
1. From the raw pixels to the scaled pixels at the resolution required for your scene at 100% (in relation to Sprite Kit).
2. From your scene's pixels at 100% to the scaled pixels at the actual resolution implied by your Aspect Fill mode.
There may also be another one, behind the scenes:
3. From the high-resolution screen backing store down to the true hardware pixel resolution.
But you can't really tell how many actual scalings will occur, because this (AFAIK) is not documented. It may also depend on the hardware capabilities (e.g. Metal vs OpenGL) and the iOS version.
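To put a rough number on the second scaling: for a 375x667 scene, Aspect Fill picks the larger of the width and height ratios to the screen's point size. Here's a quick sketch of that arithmetic (the screen sizes below are the standard logical point sizes of the SE, 7, and 7+, and this only mirrors the effective ratio, not SpriteKit's internal math):

#import <UIKit/UIKit.h>

// Effective Aspect Fill scale from a 375x667 scene to each screen, in points.
static void LogAspectFillScales(void) {
    CGSize sceneSize = CGSizeMake(375, 667);
    CGSize screens[] = { CGSizeMake(320, 568),    // SE
                         CGSizeMake(375, 667),    // 7
                         CGSizeMake(414, 736) };  // 7+
    for (int i = 0; i < 3; i++) {
        CGFloat scale = MAX(screens[i].width / sceneSize.width,
                            screens[i].height / sceneSize.height);
        // SE: ~0.853 (scaled down), 7: 1.000, 7+: ~1.104 (scaled up)
        NSLog(@"%.0fx%.0f screen -> aspect-fill scale %.3f",
              screens[i].width, screens[i].height, scale);
    }
}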
Personally, if I wanted the best-quality display for a game deployed across multiple screen sizes, I wouldn't use Aspect Fill at all; I'd let the game scene be the size of the screen. From that, I'd calculate the sizes of all visual elements in scene coordinates and render hi-res assets into textures or nodes of the correct sizes (letting the framework handle the choice of 1x, 2x, or 3x resolution by specifying logical image sizes only). In effect, that means there's no further scaling after the textures are created. Still, this is harder to implement than it sounds, especially if you want your game to resize on rotation between portrait and landscape.
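As a rough sketch of that approach (the view hookup and the "background" asset name are placeholders, not anything from your project):

#import <SpriteKit/SpriteKit.h>

// Present a scene whose coordinate space matches the view, so Aspect Fill has
// nothing left to scale. Call this once the view's final bounds are known
// (e.g. from viewDidLayoutSubviews).
static void PresentFullSizeScene(SKView *skView) {
    if (skView.scene != nil) {
        return; // already presented
    }
    SKScene *scene = [SKScene sceneWithSize:skView.bounds.size];
    scene.scaleMode = SKSceneScaleModeAspectFill; // scene == view, so this stays 1:1

    // Size elements in scene points and let the framework pick the 1x/2x/3x
    // asset; if the asset's logical size already matches the node, nothing is
    // rescaled after the texture is created.
    SKSpriteNode *background = [SKSpriteNode spriteNodeWithImageNamed:@"background"];
    background.size = scene.size;
    background.position = CGPointMake(scene.size.width / 2, scene.size.height / 2);
    [scene addChild:background];

    [skView presentScene:scene];
}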
Within the context of your current approach, my opinion would be to design the assets at the equivalent of 3x for the largest screen you support. It may not matter much, but scaling down typically looks better than scaling up. You'd then have to weigh the memory footprint that implies and adjust your approach accordingly.
You might also try just looking at your app on a 7 and a 7+, side by side. If the one that's actually scaled (the 7+, if I've understood properly) looks fine, I don't think you need to change your current methodology.