After searching Google, I found the following functions:

  1. gluSphere(...)
  2. gluCylinder(...)

However, these functions are not available in ES; as I understand it, ES can only draw points, lines, and triangles.

I have seen examples where a cube is drawn as 6 faces, each consisting of 2 triangles. A circle, a ring, a rectangle with rounded corners, and everything else are likewise built from triangles.

I came to the conclusion that the sphere and the cylinder should also be drawn from triangles.

For a cylinder: two circles drawn with GL_TRIANGLE_FAN for the bases, plus many "narrow" rectangles drawn with GL_TRIANGLE_STRIP for the lateral surface; the more of them, the smoother it looks and the less efficient it is.
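The vertex layout described above can be sketched like this (in Python, purely to illustrate the ordering; the function names and parameters are mine, not from any GL API):

```python
import math

def cylinder_side(radius, height, segments):
    """Vertices for the lateral surface as a GL_TRIANGLE_STRIP:
    alternating (bottom, top) rim points around the circle."""
    verts = []
    for i in range(segments + 1):  # +1 to close the loop
        a = 2.0 * math.pi * i / segments
        x, z = radius * math.cos(a), radius * math.sin(a)
        verts.append((x, 0.0, z))     # bottom rim point
        verts.append((x, height, z))  # top rim point
    return verts

def cylinder_cap(radius, y, segments):
    """Vertices for one base circle as a GL_TRIANGLE_FAN:
    the center first, then the rim points."""
    verts = [(0.0, y, 0.0)]  # fan center
    for i in range(segments + 1):  # +1 to close the loop
        a = 2.0 * math.pi * i / segments
        verts.append((radius * math.cos(a), y, radius * math.sin(a)))
    return verts
```

Each list would then be uploaded to a vertex buffer and drawn with the corresponding primitive mode (GL_TRIANGLE_STRIP for the side, GL_TRIANGLE_FAN for each cap).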

For a sphere it is harder: I think you can build GL_TRIANGLE_STRIPs out of many quads (two triangles per quad); again, the more such quads, the smoother the sphere should be.
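The usual way to get those quads is to slice the sphere into latitude bands and emit one triangle strip per band. A minimal sketch (in Python, just to show the math; the function name and parameters are my own invention):

```python
import math

def sphere_strips(radius, stacks, slices):
    """One GL_TRIANGLE_STRIP per latitude band; each quad in a band
    becomes two triangles via the strip's alternating top/bottom order."""
    strips = []
    for i in range(stacks):
        t0 = math.pi * i / stacks        # polar angle of the band's top edge
        t1 = math.pi * (i + 1) / stacks  # polar angle of the bottom edge
        strip = []
        for j in range(slices + 1):      # +1 to close the band
            p = 2.0 * math.pi * j / slices
            for t in (t0, t1):           # top point, then bottom point
                strip.append((radius * math.sin(t) * math.cos(p),
                              radius * math.cos(t),
                              radius * math.sin(t) * math.sin(p)))
        strips.append(strip)
    return strips
```

More stacks and slices give a smoother sphere at the cost of more vertices, exactly the trade-off described above.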

Just 2 questions:

  1. Is building 3D bodies from simple triangles the correct approach, or is there a better way?
  2. Will there be problems when applying textures to such bodies, and how are textures applied? :)

Thanks for the answers!

    1 answer

    Is building 3D bodies from simple triangles the correct approach, or is there a better way?

    Considering that in your case the drawing happens on smartphones, whose GPUs are far less powerful than desktop ones, triangles are the only real option.

    Besides triangles, there are ray tracing algorithms, which generate the image by casting a ray from the observation point into the scene and checking each body for intersection with that ray. For a sphere it would look like this:

    1. for each pixel, cast a ray from the observation point (the camera position)

    2. for each object in the scene, find its intersection point with the ray

    3. if that point is the closest one to the camera, compute the final pixel color at that point
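    For a sphere, step 2 reduces to solving a quadratic equation. A minimal sketch of that math (in Python, purely for illustration; the function name and the assumption of a normalized ray direction are mine):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance t along a normalized ray to the nearest intersection
    with the sphere, or None if the ray misses it."""
    # Vector from the sphere center to the ray origin
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    # Quadratic t^2 + b*t + c = 0 (the t^2 coefficient is 1
    # because the direction is normalized)
    b = 2.0 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0     # nearer of the two roots
    return t if t >= 0.0 else None       # sphere behind the camera
```

    In a real ray tracer this test runs per pixel per object, which is exactly why the amount of computation grows so quickly.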

    These algorithms are implemented in the fragment (or pixel) shader and require a large amount of computation, so they need serious power even on desktop machines, never mind smartphones...

    Will there be problems when applying textures to such bodies, and how are textures applied? :)

    The first part of the question is not quite correct, since problems can of course arise if something is done wrong. As for the second part: roughly speaking, a texture is applied using texture coordinates, which must be defined for each vertex of the body. Just like the other vertex attributes, a buffer is allocated for them (or they are packed into an existing buffer alongside the other attributes); the image is loaded and bound as a texture, and the coordinates are passed to the vertex shader, from which the fragment shader receives them interpolated.
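    For a sphere, the texture coordinates are typically derived from the point's longitude and latitude. A small sketch (in Python; the function name is mine, and this is just one common mapping, not the only option):

```python
import math

def sphere_uv(x, y, z):
    """Texture coordinates for a point on a unit sphere:
    u comes from the longitude, v from the latitude."""
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)  # wraps around the equator
    v = 0.5 - math.asin(y) / math.pi              # 0 at the north pole, 1 at the south
    return u, v
```

    These (u, v) pairs would be stored per vertex next to the positions and handed to the shaders below.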

    An example of a typical vertex and fragment shader for OpenGL ES:

        // vertex shader
        precision highp float;
        attribute vec4 vPosition;
        attribute vec2 vtexCoord;
        varying vec2 texCoord;
        uniform mat4 mvp;

        void main() {
            gl_Position = mvp * vPosition;
            texCoord = vtexCoord;
        }

        // fragment shader
        precision highp float;
        varying vec2 texCoord;
        uniform sampler2D texture;

        void main() {
            gl_FragColor = texture2D(texture, texCoord);
        }