gpu: implement pl_tex_blit_compute
Simulates texture blits using compute shaders. This ended up being slightly annoying because of the ambiguity between imageLoad() and texture(). It could probably be simplified considerably by having only a texture() path, but the imageLoad() path is already written, and it's probably better for the non-scaling case. (At the very least, it hard-codes less floating point math.)
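
To illustrate the ambiguity, here is a minimal sketch of the two access paths in a GLSL compute shader. All binding points, uniform names, and the dispatch layout are hypothetical and not taken from the actual implementation:

```glsl
// Hypothetical sketch, not the real pl_tex_blit_compute shader.
layout(local_size_x = 8, local_size_y = 8) in;

layout(binding = 0, rgba8) readonly  uniform image2D   src_img; // imageLoad() path
layout(binding = 1)                  uniform sampler2D src_tex; // texture() path
layout(binding = 2, rgba8) writeonly uniform image2D   dst_img;

uniform ivec2 dst_offset; // assumed uniforms / push constants
uniform ivec2 src_offset;
uniform vec2  scale;      // src extent / dst extent

void main()
{
    ivec2 pos = ivec2(gl_GlobalInvocationID.xy);

    // Non-scaling case: integer texel coordinates, no filtering,
    // and no floating point coordinate math to hard-code.
    vec4 color = imageLoad(src_img, src_offset + pos);

    // Scaling case would instead sample with normalized, filtered
    // coordinates, e.g.:
    // vec4 color = texture(src_tex,
    //     (vec2(src_offset) + (vec2(pos) + 0.5) * scale)
    //         / vec2(textureSize(src_tex, 0)));

    imageStore(dst_img, dst_offset + pos, color);
}
```

The texture() path handles both cases but forces normalized coordinates and sampler state even for 1:1 copies, which is where the imageLoad() path pays off.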