Issue with the GLSL Compiler on M1 OpenGL driver?

Hello,

I am doing a cross-platform project that uses C++ and OpenGL (I know I should be using MoltenVK or Metal, but OpenGL is nice and simple for starting out and is cross-platform). I am also doing most of my development on an M1 MacBook Pro, which supports up to OpenGL 4.1.

The M1 also only supports up to 16 active fragment shader samplers (the maximum number of supported texture image units).
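
For what it's worth, that limit doesn't have to be hard-coded; it can be queried at runtime. A minimal sketch (the variable name is just mine):

// Ask the driver how many texture image units the fragment stage supports;
// on the M1 OpenGL driver this reports 16.
GLint maxFragmentTextureUnits = 0;
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxFragmentTextureUnits);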

I am currently working on a batch rendering system that uploads an array of textures to the GPU, and the shader switches between them based on an index into a sampler array. Here's the shader that I am using (the vertex and fragment shaders are combined in one file, but the program parses them separately):

#type vertex
#version 410 core

layout(location = 0) in vec3 a_Position;
layout(location = 1) in vec4 a_Color;
layout(location = 2) in vec2 a_TexCoord;
layout(location = 3) in float a_TexIndex;
layout(location = 4) in float a_TilingFactor;

uniform mat4 u_ViewProjection;

out vec4 v_Color;
out vec2 v_TexCoord;
out float v_TexIndex;
out float v_TilingFactor;

void main()
{
    v_Color = a_Color;
    v_TexCoord = a_TexCoord;
    v_TexIndex = a_TexIndex;
    v_TilingFactor = a_TilingFactor;
    gl_Position = u_ViewProjection * vec4(a_Position, 1.0);
}

#type fragment
#version 410 core

layout(location = 0) out vec4 color;

in vec4 v_Color;
in vec2 v_TexCoord;
in float v_TexIndex;
in float v_TilingFactor;

uniform sampler2D u_Textures[16];

void main()
{
    color = texture(u_Textures[int(v_TexIndex)], v_TexCoord * v_TilingFactor) * v_Color;
}

However, when the program runs I get this message: UNSUPPORTED (log once): POSSIBLE ISSUE: unit 2 GLD_TEXTURE_INDEX_2D is unloadable and bound to sampler type (Float) - using zero texture because texture unloadable

I double- and triple-checked my code and I'm binding everything correctly to the shader (if I'm not, feel free to point it out :) ). The only thing I found on the web relating to this error says it's a bug in the GLSL compiler on the new M1s. Is this true, or is it a code issue?

Thanks

Side note: I am using Emacs to run CMake and do my C++ development, so if you try to test my project in Xcode and it doesn't include the shaders, it's most likely a CMake/Xcode copy issue.

  • I am encountering this same problem on my M1 Pro MacBook.

  • I have the same issue on a new Mac mini. Since it seems to work fine on other systems, it looks like a compiler issue on the M1s.


Replies

Did you manage to solve this problem?

  • Nope. I ended up using a bunch of if statements to get it to run properly (a rough sketch of that workaround is below).
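
For anyone curious, the workaround looks roughly like this in the fragment shader. This is just a sketch of the idea using the same variable names as the shader above, not the exact code:

// Replace the dynamic array index with a chain of constant-index lookups.
vec4 texColor;
int index = int(v_TexIndex);
if (index == 0)      texColor = texture(u_Textures[0],  v_TexCoord * v_TilingFactor);
else if (index == 1) texColor = texture(u_Textures[1],  v_TexCoord * v_TilingFactor);
else if (index == 2) texColor = texture(u_Textures[2],  v_TexCoord * v_TilingFactor);
// ... and so on up to u_Textures[15]
else                 texColor = texture(u_Textures[15], v_TexCoord * v_TilingFactor);
color = texColor * v_Color;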


I found the same problem when playing a video in the OpenBoard app: it shows a black screen instead. It may be the same driver issue.

macOS Monterey 12.5.1, OpenBoard 1.6.2

  • Could be. I wouldn't blame Apple though, since some AMD OpenGL drivers do the same thing.


Can you show how you bind? I can load different textures in shaders.

Hi! If it's not too late, I think I figured out what all this means and it does appear to be real and not just some garbage being thrown by the M1 GPU.

unit 2 GLD_TEXTURE_INDEX_2D is unloadable and bound to sampler type (Float)

This error is thrown whenever the shader does not know where to access texture data. Basically, you have to make two calls for things to work out correctly.

  1. You need to call glUniform1i(glGetUniformLocation(programID, name), slot), where name is the string for the array element at index i (e.g. "u_Textures[2]", built with something like "u_Textures[" + std::to_string(i) + "]"), and slot is one of the 16 texture units where OpenGL can have a texture bound for the shader to access.
  2. You need to call glActiveTexture(GL_TEXTURE0 + slot), where slot is the same slot as in step one, and then bind your texture with glBindTexture while that unit is active. If the slots don't match, the above error will be thrown. (A rough sketch of how these calls line up follows this list.)
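
Roughly, the two calls pair up like this. programID, textureIDs, and textureCount are placeholder names, not anyone's actual code, and this assumes glUseProgram(programID) has already been called:

// For each texture used by the batch: point u_Textures[i] at unit i
// and bind the corresponding texture object to that same unit.
for (int i = 0; i < textureCount; ++i)
{
    int slot = i;

    // Step 1: tell the shader that u_Textures[i] reads from texture unit 'slot'.
    std::string name = "u_Textures[" + std::to_string(i) + "]";
    glUniform1i(glGetUniformLocation(programID, name.c_str()), slot);

    // Step 2: make 'slot' the active unit and bind the texture object to it.
    glActiveTexture(GL_TEXTURE0 + slot);
    glBindTexture(GL_TEXTURE_2D, textureIDs[i]);
}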

Most of the time when I get this error it is because I somehow messed up the coordination of those two calls, because it can get a little messy. It can also be thrown if you didn't set up your texture correctly (maybe you forgot to bind it while setting the filter and wrap parameters, or something), but I have a feeling that was not the case here.

What I think happened here has to do with the fact that you are declaring an array of textures and we are seeing "unit 2" as opposed to "unit 0". From what I have personally tested, the "unit" is not the same as the texture object ID we get from OpenGL's glGenTextures(1, &textureID), nor the same as the "slot" from above. Instead, my hypothesis is that it is simply a stack OpenGL creates to keep track of the textures that have been bound.

Assuming that is correct, the reason your program works is that the 0th and 1st units are properly accessible to the shader but the 2nd one isn't. My assumption is that at the time this error was thrown, you weren't actually using all 16 textures in that array and didn't set the uniform data for them either, instead just binding the first two slot uniforms. When the shader tried to load texture 2 and beyond, it failed and threw the above error once ("log once"). Even if that doesn't affect your shader's output, the error will be thrown anyway when a draw call is issued with that shader.

If you are bothered by this, I have had similar issues in the past, and it is apparently good enough to just point the empty sampler slots at a different, already used slot, even though that probably wouldn't work if you tried to actually sample from them. Alternatively, you could create an empty texture and bind it to the unused slots. A sketch of the first option is below.
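
For example, something along these lines after binding the textures you actually use (usedTextureCount and programID are made-up names; the idea is just to leave no sampler element pointing at an empty unit):

// Point any unused entries of the sampler array at unit 0, which already
// has a real texture bound, so no sampler references an empty texture unit.
for (int i = usedTextureCount; i < 16; ++i)
{
    std::string name = "u_Textures[" + std::to_string(i) + "]";
    glUniform1i(glGetUniformLocation(programID, name.c_str()), 0);
}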