**Aimduck**

Reinventing the wheel, or shooting ducks with a webcam

The idea for this mini-game had been brewing ever since I first heard how the light gun for the Dendy (a NES clone) worked. The trick was that the gun contained only a single photoresistor: at the moment of the shot the whole screen was painted black, and the only white spot left was the duck. In other words, sensor sees light — hit; sensor sees dark — miss.
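The light-gun principle above boils down to a single brightness comparison on the flash frame. A minimal sketch (names and the threshold value are illustrative, not from any real console):

```csharp
using System;

static class LightGunSketch
{
    // Assumed sensor range 0..1; the threshold is an arbitrary illustration.
    const double HitThreshold = 0.5;

    // sensorReading: brightness seen by the photoresistor on the frame
    // where everything except the duck is painted black.
    public static bool IsHit(double sensorReading) => sensorReading > HitThreshold;

    static void Main()
    {
        Console.WriteLine(IsHit(0.9)); // aimed at the white duck -> True
        Console.WriteLine(IsHit(0.1)); // aimed at black background -> False
    }
}
```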

My attempt to do this with C# + XNA ended in complete failure, namely: the player sees the marker on screen for far too long. That is really the essence of my question: how can this be avoided? The problem is not exactly in the image processing; it may be in getting the bitmap from the camera (although that does not take too long), and it is also possible that a significant amount of time passes between Draw calls that put the image on the XNA screen. I will try to tidy up the source code if someone decides to help with the question.

So the prototype is simple: a game menu that refuses to let you play further if you do not have a single webcam connected, and the game itself, where ducks fly by in an infinite loop. If a duck flies past the edge of the screen without being shot down, some action happens (in my case, the deer laughs).

The main challenge was detecting the marker. I took AForge as the basis for grabbing a picture from the camera. My question on Stack Overflow was not successful. The hit-recognition algorithm itself works as follows:

  1. Convert the bitmap from the camera to a monochrome image.
  2. In a certain area around the center, look for white spots and enclose each of them in a rectangle.
  3. Look for rectangles that contain another rectangle inside them. (At this point it would have been possible to just check the distance from the center and stop, since this step alone already recognized misses with about 90% accuracy.)
  4. Check that the matrix of colors inside the rectangle follows the pattern light-dark-light-dark-light.
  5. Measure how far the center of the rectangle is from the center of the photo, i.e. how well we are aiming at the duck, and report the result.
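Steps 1, 4 and 5 above can be sketched on raw pixel arrays like this. This is not the author's `CenterTest` code; the threshold, the column-based pattern check, and all names are assumptions for illustration:

```csharp
using System;

static class MarkerSketch
{
    // Step 1: grayscale -> monochrome (true = white). Threshold is assumed.
    public static bool[,] Threshold(byte[,] gray, byte level)
    {
        int h = gray.GetLength(0), w = gray.GetLength(1);
        var bw = new bool[h, w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                bw[y, x] = gray[y, x] > level;
        return bw;
    }

    // Step 4: does the column at x read light-dark-light-dark-light?
    public static bool HasRingPattern(bool[,] bw, int x)
    {
        int h = bw.GetLength(0);
        bool prev = bw[0, x];
        if (!prev) return false;      // must start light
        int transitions = 0;
        for (int y = 1; y < h; y++)
            if (bw[y, x] != prev) { transitions++; prev = bw[y, x]; }
        // 5 bands = 4 colour changes, and the last band must be light.
        return transitions == 4 && prev;
    }

    // Step 5: aiming error = distance from image centre to marker centre.
    public static double AimError(int imgW, int imgH, int cx, int cy)
    {
        double dx = cx - imgW / 2.0, dy = cy - imgH / 2.0;
        return Math.Sqrt(dx * dx + dy * dy);
    }

    static void Main()
    {
        // Synthetic 10x1 column: light, dark, light, dark, light.
        byte[] col = { 200, 200, 50, 50, 200, 200, 50, 50, 200, 200 };
        var gray = new byte[10, 1];
        for (int y = 0; y < 10; y++) gray[y, 0] = col[y];
        var bw = Threshold(gray, 100);
        Console.WriteLine(HasRingPattern(bw, 0));        // True
        Console.WriteLine(AimError(640, 480, 320, 240)); // 0
    }
}
```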

The recognition logic lives in the CenterTest class, and the project itself can be found on the same GitHub.

Zip binary

    1 answer

    The project is interesting.

    I did not look at the code, but I have some thoughts ...

    I would suggest, as a matter of principle, taking only the central square of the camera image, say 50×50 pixels. Convert it to black and white, then measure how much white falls into that area. This could significantly speed up the processing of the camera image.
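The suggestion above is cheap to implement: crop the central square, binarise, and count white pixels. A minimal sketch (the 50×50 side and the threshold are the answer's and my assumptions):

```csharp
using System;

static class CenterCrop
{
    // Fraction of pixels brighter than `level` in the central side x side square.
    public static double WhiteFraction(byte[,] gray, int side, byte level)
    {
        int h = gray.GetLength(0), w = gray.GetLength(1);
        int y0 = (h - side) / 2, x0 = (w - side) / 2;
        int white = 0;
        for (int y = y0; y < y0 + side; y++)
            for (int x = x0; x < x0 + side; x++)
                if (gray[y, x] > level) white++;
        return (double)white / (side * side);
    }

    static void Main()
    {
        var frame = new byte[480, 640];
        for (int y = 0; y < 480; y++)
            for (int x = 0; x < 640; x++)
                frame[y, x] = 255;                         // all-white frame
        Console.WriteLine(CenterCrop.WhiteFraction(frame, 50, 128)); // 1
    }
}
```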

    Option 2: use OpenCV to look for the dark spot within the white area of the image. But this will most likely be a more resource-intensive approach.

    As for the slow rendering...

    It probably makes sense to try rendering it all not in a form, but, say, in Unity. Or optimize rendering with double buffering: load the frame into a buffer in advance, show it, and then immediately hand the buffer back.
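The buffering idea above can be sketched as a two-slot frame buffer: the camera thread writes into a back buffer and swaps it in, while the draw loop always reads the latest complete frame. This is an illustrative sketch, not code from the project:

```csharp
using System;

sealed class FrameBuffer
{
    byte[] front, back;                 // two slots swapped on each write
    readonly object gate = new object();

    public FrameBuffer(int size)
    {
        front = new byte[size];
        back = new byte[size];
    }

    // Camera thread: copy a freshly captured frame in, then swap it to front.
    public void Write(byte[] frame)
    {
        lock (gate)
        {
            Buffer.BlockCopy(frame, 0, back, 0, back.Length);
            var t = front; front = back; back = t;
        }
    }

    // Draw loop: copy out the latest complete frame.
    public void Read(byte[] dest)
    {
        lock (gate) Buffer.BlockCopy(front, 0, dest, 0, dest.Length);
    }

    static void Main()
    {
        var fb = new FrameBuffer(4);
        fb.Write(new byte[] { 1, 2, 3, 4 });
        var shown = new byte[4];
        fb.Read(shown);
        Console.WriteLine(shown[0]); // 1
    }
}
```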