I recently experimented with this. In short, you have to force the video card to render the shader into a RenderTexture using Blit():
1. In the editor, create a material with your shader.
2. In the script, get a reference to that material.
3. In the script, create or obtain the source texture if the shader needs one (it calls tex2D(), so obviously it does).
4. In the script, create a RenderTexture instance of the desired size and format.
5. In Update() (or wherever you need it), call:
Graphics.Blit(dataTexture, renderTexture, computeMaterial);
where dataTexture is the source texture from step 3, renderTexture is the target texture from step 4, and computeMaterial is the material from steps 1-2. A full sketch of these steps follows below.
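Put together, the whole setup looks roughly like this (a minimal sketch; the class name, field names, and the 256x256 size are my own assumptions, not from the original setup):

```csharp
using UnityEngine;

// Minimal sketch of the Blit-based approach described above.
public class BlitCompute : MonoBehaviour
{
    public Material computeMaterial; // steps 1-2: material created from your shader
    public Texture2D dataTexture;    // step 3: source texture the shader samples with tex2D()
    RenderTexture renderTexture;

    void Start()
    {
        // Step 4: target RenderTexture of the desired size and format.
        renderTexture = new RenderTexture(256, 256, 0, RenderTextureFormat.ARGB32);
        renderTexture.Create();
    }

    void Update()
    {
        // Step 5: the GPU runs the shader over every pixel of the target.
        Graphics.Blit(dataTexture, renderTexture, computeMaterial);
    }
}
```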
Next, you read the result back from the renderTexture, picking out the pixels you need.
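For the readback, something like this should work (a sketch, assuming the 256x256 ARGB32 target from above; ReadPixels() copies from whatever RenderTexture is currently active):

```csharp
// Copy the GPU result into a CPU-readable Texture2D.
Texture2D result = new Texture2D(256, 256, TextureFormat.ARGB32, false);
RenderTexture previous = RenderTexture.active;
RenderTexture.active = renderTexture;
result.ReadPixels(new Rect(0, 0, 256, 256), 0, 0);
result.Apply();
RenderTexture.active = previous;

Color32[] pixels = result.GetPixels32(); // raw per-pixel data
```

Keep in mind that ReadPixels() stalls until the GPU finishes rendering, so avoid calling it more often than you really need the data.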
The choice of format for the source and target textures is a tricky question in its own right, because different video cards support different sets of formats. I used Alpha8 for the source, since I only needed one channel and wanted to transfer as little data as possible, and ARGB32 for the target.
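Format support can at least be checked at runtime before committing to a particular choice (a sketch; the fallback handling here is just a placeholder):

```csharp
// Verify the formats used above are actually available on this GPU.
if (!SystemInfo.SupportsTextureFormat(TextureFormat.Alpha8))
    Debug.LogWarning("Alpha8 source not supported on this GPU; pick another format");
if (!SystemInfo.SupportsRenderTextureFormat(RenderTextureFormat.ARGB32))
    Debug.LogWarning("ARGB32 render target not supported on this GPU");
```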
If you need to get data out of a shader and at the same time use it to draw a picture on the monitor, then, as I understand it, you will have to render twice: once normally and once via Blit(). Alternatively, you can put the RenderTexture straight onto the model, which is especially convenient if it is just a plane.
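Putting the RenderTexture on the model amounts to assigning it as the material's texture, roughly like this (a sketch; assumes the object has a Renderer component):

```csharp
// Display the computed texture directly on the object's material.
GetComponent<Renderer>().material.mainTexture = renderTexture;
```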
If you only need computation, and you don't have to support devices without DX11 (there are still plenty of those, as I understand it), then it is simpler and wiser to use compute shaders.
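For comparison, dispatching a compute shader from C# looks roughly like this (a sketch; the kernel name CSMain, the Result texture name, and the 8x8 thread-group size are assumptions that have to match your .compute file):

```csharp
public ComputeShader computeShader; // a .compute asset assigned in the editor

void RunCompute()
{
    // Compute shaders write via RWTexture2D, which needs random-write access.
    RenderTexture output = new RenderTexture(256, 256, 0, RenderTextureFormat.ARGB32);
    output.enableRandomWrite = true;
    output.Create();

    int kernel = computeShader.FindKernel("CSMain");
    computeShader.SetTexture(kernel, "Result", output);
    // Group counts assume [numthreads(8,8,1)] in the kernel.
    computeShader.Dispatch(kernel, 256 / 8, 256 / 8, 1);
}
```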