So, I want to implement the following idea: open an image, read it pixel by pixel, encrypt the RGB data of each pixel with a text key, and then draw a new image pixel by pixel from the modified RGB data.

The final image (meaningless in appearance, naturally) may end up slightly resized or subjected to JPEG compression artifacts within reasonable limits. From this I conclude that block ciphers will not work here, while encrypting each pixel independently would be too weak.

Any ideas?


Update. If the resizing problem is set aside, it works out beautifully:

[Images: source, encrypted, decrypted]

The problem is that the encrypted image suffers strong distortion under resizing, JPEG compression and the like, since in practice an image of this kind would have to be stored as a lossless bitmap (which makes the approach unacceptable in modern conditions). It remains to try scaling the encrypted image up by a factor of two or three, so that each original pixel occupies a 2×2 or 3×3 area, and before decryption to look for the most reliable algorithm for returning it to the original size.
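As an illustration of that scale-up idea, here is a minimal numpy sketch of my own (not the asker's code): every encrypted pixel is replicated into an n×n block before publishing, and before decryption each block is averaged back to a single pixel, which should tolerate mild blurring or JPEG noise inside each block.

```python
import numpy as np

def upscale(img, n=3):
    """Replicate every pixel into an n x n block (nearest-neighbour upscale)."""
    return np.repeat(np.repeat(img, n, axis=0), n, axis=1)

def downscale(img, n=3):
    """Recover one pixel per n x n block by averaging over the block,
    which smooths out mild per-pixel noise inside the block."""
    h, w = img.shape[0] // n, img.shape[1] // n
    blocks = img[:h * n, :w * n].reshape(h, n, w, n, -1)
    return blocks.mean(axis=(1, 3)).round().astype(np.uint8)

# Round-trip on random "encrypted" RGB data:
rng = np.random.default_rng(0)
enc = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)
assert np.array_equal(downscale(upscale(enc)), enc)
```

Note this sketch assumes the resize back to the original dimensions is exact; finding a robust "return to original size" for arbitrary rescaling is the open part of the problem.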

  • @trashmajor, what do you want to achieve? Encrypt an image file (in one of the well-known formats) so that the picture looks different but stays in the same format? And so that after possible editing of the encrypted image the original could still be restored? - avp
  • There is an image that you publish in the public domain. Only those who have the key can see the original. - trashmajor
  • A simplified version of my own task :) Watching with interest. - Sergiks
  • Colour-coding the pixels with the Hénon sequence en.wikipedia.org/wiki/H%C3%A9non_map gave nothing: as soon as the image is saved as JPEG, the slightest fluctuations in pixel colour drastically change the decoded image. Only an unchanged, losslessly stored bitmap decrypts correctly; a resized image cannot be decrypted at all. - trashmajor
  • Found some material on the topic (PDF, in English) - Sergiks

3 answers

Exactly for your case: you could mix pixels, or better, the average colours of areas of size width/N × height/N. The code word defines a "traversal" algorithm over the image field, such that in a finite number of steps it covers the entire image, visiting every area at least once (possibly several times). It is then trivial to swap areas in pairs every two steps. If the word defines a "vector-like" traversal, not tied to exact pixels, then resizing the image should not degrade the decryption result much.
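The pixel-mixing part of this answer can be sketched as a keyed permutation. The following numpy version is my own illustration: it derives a seed from the text key via SHA-256 (that derivation is my assumption, not something the answer specifies) and shuffles whole pixels rather than performing the area-by-area traversal described above.

```python
import hashlib
import numpy as np

def keyed_permutation(key: str, n: int) -> np.ndarray:
    """Derive a deterministic permutation of n indices from a text key."""
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    return np.random.default_rng(seed).permutation(n)

def shuffle_pixels(img, key):
    """Scramble pixel positions according to the key."""
    h, w, c = img.shape
    perm = keyed_permutation(key, h * w)
    return img.reshape(-1, c)[perm].reshape(h, w, c)

def unshuffle_pixels(img, key):
    """Invert the scrambling with the same key (argsort inverts a permutation)."""
    h, w, c = img.shape
    inv = np.argsort(keyed_permutation(key, h * w))
    return img.reshape(-1, c)[inv].reshape(h, w, c)

img = np.arange(16 * 16 * 3, dtype=np.uint8).reshape(16, 16, 3)
assert np.array_equal(unshuffle_pixels(shuffle_pixels(img, "secret"), "secret"), img)
```

Note that a per-pixel shuffle like this is exactly what breaks under resizing; shuffling the *averages of areas*, as the answer suggests, is what would buy robustness.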

Other thoughts: you need to manage to keep several "layers" of data in the image, from large features down to small details. Then, after resizing or brutal compression, the very small details will disappear, but the larger features and the essence of the image will remain.

Google frequency decomposition of images. For example, with two frequencies the original is decomposed into two images of the same size:

  1. the original picture strongly blurred (small details go away) - this is the low-frequency channel;
  2. the difference between the original and the blurred copy - a grey image containing only the small details.

Combining these two images in the appropriate way (adding the detail layer back onto the blurred one) reproduces the original pixel by pixel. In retouching, for example, this lets you easily remove freckles from skin by isolating them in a separate frequency channel and smearing that channel into flat grey.
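The two-channel decomposition above can be sketched in a few lines of numpy. This uses a naive box blur instead of a proper Gaussian (my simplification); the key point is that low + high reconstructs the original exactly by construction.

```python
import numpy as np

def box_blur(img, k=5):
    """Crude low-pass filter: a k-tap box blur applied along both axes."""
    kernel = np.ones(k) / k
    out = img.astype(float)
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), axis, out)
    return out

img = np.random.default_rng(0).integers(0, 256, size=(32, 32)).astype(float)
low = box_blur(img)   # low-frequency layer: large shapes only
high = img - low      # high-frequency layer: fine detail (signed values!)

# Adding the two layers back together reproduces the original pixel by pixel:
assert np.allclose(low + high, img)
```

The detail layer is signed, which is why editors display it shifted to mid-grey; storing it in an 8-bit file needs the same offset.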

So, intuition suggests that:

  • the image should be decomposed into frequencies and each channel encrypted separately == resistance to compression / downscaling;
  • the encrypted image must contain an overlay of several layers of information simultaneously;
  • the size of the recognizable features correlates with the blur radius of the corresponding layer.

For Processing there was an example of a "pointillism" effect: random square areas are picked in the source image, their average colour is taken, and a semi-transparent circle of that colour, inscribed in the square, is drawn onto the new image. The circles are drawn over and over at different sizes, and as the number of these seemingly random circles of random colours grows, the original picture begins to emerge.
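A rough Python translation of that Processing sketch might look like this (the dot sizes, alpha value and dot count are arbitrary choices of mine, not from the original example):

```python
import numpy as np

def pointillize(src, n_dots=2000, rng=None):
    """Rebuild an image from semi-transparent circles of area-average colour."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w, _ = src.shape
    out = np.full_like(src, 255, dtype=float)       # start from a white canvas
    for _ in range(n_dots):
        s = int(rng.integers(4, 16))                # random square size
        y = int(rng.integers(0, h - s))
        x = int(rng.integers(0, w - s))
        colour = src[y:y + s, x:x + s].mean(axis=(0, 1))
        yy, xx = np.ogrid[:s, :s]
        mask = (yy - s / 2) ** 2 + (xx - s / 2) ** 2 <= (s / 2) ** 2
        region = out[y:y + s, x:x + s]              # view into the canvas
        alpha = 0.5                                 # semi-transparent blend
        region[mask] = (1 - alpha) * region[mask] + alpha * colour
    return out.astype(np.uint8)
```

With a small `n_dots` the result looks like random circles; as the count grows, the average colours accumulate and the source picture emerges.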

    What if you store the colour data of each pixel separately: make the image black-and-white and encrypt that, keeping the colour data in the header or at the end of the image file.

    Look at this example if you know C++: link text

    • And what does separating out the colour give me? - trashmajor
    • Fewer losses when compressing to JPG, although I may be mistaken - yalex1

    Encode the image into base64, and encrypt the base64 with TrueCrypt or something similar (for example, Anubis).

    Or encrypt the file directly, if that is possible.

    Finally, look around the Internet for how people have appended archives to pictures. Bear in mind, though, that most image-hosting services recompress the pictures they store.

    • Note that the question specifically says the picture should remain decryptable even after resizing or the appearance of compression artifacts. - trashmajor
    • @trashmajor, noticed. I'll think about it. - lampa
    • I decided to encrypt the pixel colours. I generate pseudo-random numbers with the function k = 1 - a*k² + b*k, where k is the pseudo-random value and the parameters a and b are set in advance. I cannot find a way to map this function's output to numbers from 0 to 255 so that XOR-ing them with the pixel colours gives a genuinely random-looking result. - trashmajor
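One possible answer to that last comment, sketched with the classic two-dimensional Hénon map (textbook parameters a = 1.4, b = 0.3): the chaotic values themselves are far from uniform, so instead of scaling them linearly to 0..255, a common trick is to take the low-order decimal digits of each value modulo 256. The parameter choices and the byte-extraction step here are my suggestions, not something from the thread.

```python
def henon_bytes(n, a=1.4, b=0.3, x=0.1, y=0.1, skip=100):
    """Generate n pseudo-random bytes from the Henon map.

    The low-order decimal digits of |x| are taken modulo 256, which is
    far closer to uniform than linearly rescaling x (whose distribution
    over the attractor is strongly non-uniform).
    """
    out = []
    for i in range(n + skip):
        x, y = 1 - a * x * x + y, b * x    # Henon map iteration
        if i >= skip:                      # discard the initial transient
            out.append(int(abs(x) * 1e6) % 256)
    return out

# XOR each byte with the corresponding pixel channel value to encrypt;
# XOR-ing again with the same key stream decrypts.
ks = henon_bytes(1000)
assert all(0 <= k <= 255 for k in ks)
```

Note this keystream approach still inherits the fragility discussed in the question: any change to a pixel value (JPEG noise, resizing) corrupts the XOR result for that pixel.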