
OpenSceneGraph: Texture Basics


Introduction


We have already looked at an example in which a square was painted in all the colors of the rainbow. There is, however, another technique: applying a so-called texture map, or simply texture (a two-dimensional raster image), to three-dimensional geometry. In this case it is not the vertices of the geometry that are affected, but the data of the pixels produced when the scene is rasterized. This technique can significantly increase the realism and detail of the final image.

OSG supports several texture attributes and texturing modes. Before talking about textures, however, let's look at how OSG handles raster images. A dedicated class, osg::Image, is provided for this purpose; it stores the image data that is ultimately used to texture an object.

1. Representation of raster image data. The osg::Image class


The easiest way to load an image from disk is the osgDB::readImageFile() call. It is very similar to the osgDB::readNodeFile() call that we are already familiar with. If we have a bitmap named picture.bmp, loading it looks like this

osg::ref_ptr<osg::Image> image = osgDB::readImageFile("picture.bmp"); 

If the image is loaded correctly, the pointer will be valid; otherwise the function returns NULL. After loading, we can get information about the image using the following public methods.

  1. s(), t() and r() - return the width, height and depth of the image.
  2. data() - returns an unsigned char* pointer to the raw image data. Through this pointer the developer can modify the image data directly. The format of the data can be determined with the getPixelFormat() and getDataType() methods. The values they return are equivalent to the format and type parameters of the OpenGL glTexImage*() functions. For example, if an image has the pixel format GL_RGB and the type GL_UNSIGNED_BYTE, three independent elements (unsigned bytes) are used to represent the RGB color components.



You can create a new image object and allocate memory for it.

osg::ref_ptr<osg::Image> image = new osg::Image;
image->allocateImage(s, t, r, GL_RGB, GL_UNSIGNED_BYTE);
unsigned char *ptr = image->data();
// Then perform any operations on the image data buffer

Here s, t and r are the dimensions of the image, GL_RGB specifies the pixel format, and GL_UNSIGNED_BYTE sets the data type describing a single color component. An internal data buffer of the required size is allocated in memory and is automatically destroyed when there are no more references to the image.
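As a hedged illustration of working through the data() pointer, the following sketch fills a freshly allocated image with a checkerboard pattern; the 256x256 size, the 32-pixel cell and the grey levels are arbitrary values chosen for the example, and the loop relies on the GL_RGB / GL_UNSIGNED_BYTE layout described above.

// A sketch: fill an osg::Image with a checkerboard pattern through data().
osg::ref_ptr<osg::Image> image = new osg::Image;
image->allocateImage(256, 256, 1, GL_RGB, GL_UNSIGNED_BYTE);

unsigned char *ptr = image->data();
for (int row = 0; row < image->t(); ++row)       // t() - image height
{
    for (int col = 0; col < image->s(); ++col)   // s() - image width
    {
        bool white = ((row / 32) + (col / 32)) % 2 == 0;
        unsigned char value = white ? 255 : 64;
        *ptr++ = value;  // R
        *ptr++ = value;  // G
        *ptr++ = value;  // B
    }
}

Such a procedurally generated image can then be assigned to a texture in exactly the same way as an image loaded from disk.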

The OSG plugin system supports loading of almost all popular image formats: *.jpg, *.bmp, *.png, *.tif and so on. This list is easy to extend by writing your own plugin, but that is a topic for a separate discussion.

2. Basics of texturing


To apply a texture to a three-dimensional model, you must perform a number of steps:

  1. Define texture coordinates for the vertices of a geometric object (in 3D modelling packages this is called UV unwrapping).
  2. Create a texture attribute object for 1D, 2D, 3D or cubic texture.
  3. Set one or more images for the texture attribute.
  4. Attach the texture attribute and mode to the set of states applied to the object being drawn.

OSG provides the osg::Texture class, which encapsulates all kinds of textures. The subclasses osg::Texture1D, osg::Texture2D, osg::Texture3D and osg::TextureCubeMap are derived from it and represent the various texturing techniques available in OpenGL.

The most commonly used method of the osg::Texture class is setImage(), which sets the image used by the texture, for example

osg::ref_ptr<osg::Image> image = osgDB::readImageFile("picture.bmp");
osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
texture->setImage(image.get());

or you can pass the image object directly to the texture class constructor.

osg::ref_ptr<osg::Image> image = osgDB::readImageFile("picture.bmp");
osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D(image.get());

The image can be retrieved from the texture object by calling the getImage() method.
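If the image obtained this way is modified while it is already attached to a texture, osg::Image::dirty() should be called so that OSG re-uploads the data to the video memory. A minimal sketch (the color inversion is purely illustrative):

// Modify the image that is already attached to the texture.
osg::Image *img = texture->getImage();
if (img && img->getPixelFormat() == GL_RGB && img->getDataType() == GL_UNSIGNED_BYTE)
{
    unsigned char *data = img->data();
    unsigned int size = img->getTotalSizeInBytes();
    for (unsigned int i = 0; i < size; ++i)
        data[i] = 255 - data[i];  // invert the colors, just for illustration
    img->dirty();                 // tell OSG to re-upload the modified data
}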

Another important point is setting the texture coordinates for each vertex of an osg::Geometry object. These coordinates are passed as an osg::Vec2Array or osg::Vec3Array by calling the setTexCoordArray() method.

After setting the texture coordinates, we need to specify the texture unit number, since OSG supports applying several textures to the same geometry. When a single texture is used, the unit number is always 0. For example, the following code sets the texture coordinates for unit 0 of the geometry.

osg::ref_ptr<osg::Vec2Array> texcoord = new osg::Vec2Array;
texcoord->push_back( osg::Vec2(...) );
...
geom->setTexCoordArray(0, texcoord.get());

After that, we can add the texture attribute to the state set, automatically enabling the corresponding texturing mode (GL_TEXTURE_2D in our example), and apply the attribute to the geometry or to a node that contains the geometry

geom->getOrCreateStateSet()->setTextureAttributeAndModes(0, texture.get());

Note that OpenGL keeps the image data in the video memory of the graphics card, while the osg::Image object with the same data lives in system memory. As a result we end up with two copies of the same data occupying the process memory. If the image is not shared by several texture attributes, it can be removed from system memory as soon as OpenGL has transferred it to the video memory. To enable this behavior, the osg::Texture class provides the corresponding method.

 texture->setUnRefImageDataAfterApply( true ); 
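Returning to texture units: since the unit number is an explicit parameter of setTexCoordArray() and setTextureAttributeAndModes(), applying a second texture is largely a matter of repeating the same calls for unit 1. A sketch, assuming the geom object from the snippet above and a hypothetical second image detail.png:

// A hypothetical second texture applied to unit 1 of the same geometry.
osg::ref_ptr<osg::Image> detailImage = osgDB::readImageFile("detail.png");
osg::ref_ptr<osg::Texture2D> detailTexture = new osg::Texture2D(detailImage.get());

// A separate set of texture coordinates for unit 1 (here simply the corners again).
osg::ref_ptr<osg::Vec2Array> texcoords1 = new osg::Vec2Array;
texcoords1->push_back( osg::Vec2(0.0f, 0.0f) );
texcoords1->push_back( osg::Vec2(0.0f, 1.0f) );
texcoords1->push_back( osg::Vec2(1.0f, 1.0f) );
texcoords1->push_back( osg::Vec2(1.0f, 0.0f) );
geom->setTexCoordArray(1, texcoords1.get());

// Attach the second texture to unit 1; unit 0 keeps the first texture.
geom->getOrCreateStateSet()->setTextureAttributeAndModes(1, detailTexture.get());

With the fixed-function pipeline the two textures are combined according to the texture environment settings; with shaders they are simply sampled from their respective units.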

3. Load and apply 2D texture


The most commonly used 2D texturing technique is to map a two-dimensional image (or images) onto the faces of a three-dimensional surface. Consider the simplest example: applying a single texture to a quadrilateral polygon.

Texture example
main.h

#ifndef MAIN_H
#define MAIN_H

#include <osg/Texture2D>
#include <osg/Geometry>
#include <osgDB/ReadFile>
#include <osgViewer/Viewer>

#endif

main.cpp

 #include "main.h" int main(int argc, char *argv[]) { (void) argc; (void) argv; osg::ref_ptr<osg::Vec3Array> vertices = new osg::Vec3Array; vertices->push_back( osg::Vec3(-0.5f, 0.0f, -0.5f) ); vertices->push_back( osg::Vec3( 0.5f, 0.0f, -0.5f) ); vertices->push_back( osg::Vec3( 0.5f, 0.0f, 0.5f) ); vertices->push_back( osg::Vec3(-0.5f, 0.0f, 0.5f) ); osg::ref_ptr<osg::Vec3Array> normals = new osg::Vec3Array; normals->push_back( osg::Vec3(0.0f, -1.0f, 0.0f) ); osg::ref_ptr<osg::Vec2Array> texcoords = new osg::Vec2Array; texcoords->push_back( osg::Vec2(0.0f, 0.0f) ); texcoords->push_back( osg::Vec2(0.0f, 1.0f) ); texcoords->push_back( osg::Vec2(1.0f, 1.0f) ); texcoords->push_back( osg::Vec2(1.0f, 0.0f) ); osg::ref_ptr<osg::Geometry> quad = new osg::Geometry; quad->setVertexArray(vertices.get()); quad->setNormalArray(normals.get()); quad->setNormalBinding(osg::Geometry::BIND_OVERALL); quad->setTexCoordArray(0, texcoords.get()); quad->addPrimitiveSet( new osg::DrawArrays(GL_QUADS, 0, 4) ); osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D; osg::ref_ptr<osg::Image> image = osgDB::readImageFile("../data/Images/lz.rgb"); texture->setImage(image.get()); osg::ref_ptr<osg::Geode> root = new osg::Geode; root->addDrawable(quad.get()); root->getOrCreateStateSet()->setTextureAttributeAndModes(0, texture.get()); osgViewer::Viewer viewer; viewer.setSceneData(root.get()); return viewer.run(); } 


Create the array of vertices and the normal for the face.

osg::ref_ptr<osg::Vec3Array> vertices = new osg::Vec3Array;
vertices->push_back( osg::Vec3(-0.5f, 0.0f, -0.5f) );
vertices->push_back( osg::Vec3( 0.5f, 0.0f, -0.5f) );
vertices->push_back( osg::Vec3( 0.5f, 0.0f,  0.5f) );
vertices->push_back( osg::Vec3(-0.5f, 0.0f,  0.5f) );

osg::ref_ptr<osg::Vec3Array> normals = new osg::Vec3Array;
normals->push_back( osg::Vec3(0.0f, -1.0f, 0.0f) );

Create an array of texture coordinates

osg::ref_ptr<osg::Vec2Array> texcoords = new osg::Vec2Array;
texcoords->push_back( osg::Vec2(0.0f, 0.0f) );
texcoords->push_back( osg::Vec2(0.0f, 1.0f) );
texcoords->push_back( osg::Vec2(1.0f, 1.0f) );
texcoords->push_back( osg::Vec2(1.0f, 0.0f) );

The point is that each vertex of the three-dimensional model corresponds to a point on the two-dimensional texture, and the coordinates of that point are relative: they are normalized to the actual width and height of the image. We want to stretch the whole loaded image over the square, so its corners correspond to the texture points (0, 0), (0, 1), (1, 1) and (1, 0). The order of the vertices in the vertex array must match the order of the texture coordinates.

Next, create the square, assigning the vertex array, the normal array and the texture coordinates to the geometry

osg::ref_ptr<osg::Geometry> quad = new osg::Geometry;
quad->setVertexArray(vertices.get());
quad->setNormalArray(normals.get());
quad->setNormalBinding(osg::Geometry::BIND_OVERALL);
quad->setTexCoordArray(0, texcoords.get());
quad->addPrimitiveSet( new osg::DrawArrays(GL_QUADS, 0, 4) );

Create a texture object and load the image used for it.

osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
osg::ref_ptr<osg::Image> image = osgDB::readImageFile("../data/Images/lz.rgb");
texture->setImage(image.get());

Create the root node of the scene and put the geometry we created there.

osg::ref_ptr<osg::Geode> root = new osg::Geode;
root->addDrawable(quad.get());

and finally apply the texture attribute to the node where the geometry is placed

 root->getOrCreateStateSet()->setTextureAttributeAndModes(0, texture.get()); 



The osg::Texture2D class checks whether the dimensions of the texture image are powers of two (for example, 64x64 or 256x512) and automatically rescales images whose sizes do not fit, using, in effect, the OpenGL utility (GLU) function gluScaleImage(). The setResizeNonPowerOfTwoHint() method controls whether such rescaling is performed. Some video cards require texture dimensions to be powers of two, while the osg::Texture2D class itself supports textures of arbitrary size.
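For instance, if the target hardware is known to handle non-power-of-two textures, the automatic rescaling can be switched off (a sketch; whether rescaling happens by default depends on the OSG version and on the texture settings):

// Keep the image at its original (non-power-of-two) size instead of rescaling it.
texture->setResizeNonPowerOfTwoHint( false );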

A few words about texture wrapping modes.


As we have already said, texture coordinates are normalized to the range from 0 to 1: the point (0, 0) corresponds to one corner of the image and (1, 1) to the diagonally opposite corner. What happens if you set texture coordinates greater than one?

By default, both in OpenGL and in OSG, the texture is repeated along the axis whose texture coordinate exceeds one. This technique is often used, for example, to model a long brick wall with a small texture that is tiled many times in both width and height.

This behavior is controlled through the setWrap() method of the osg::Texture class. The first parameter is the identifier of the axis to which the wrap mode, passed as the second parameter, should be applied, for example

// Repeat the texture along the s axis
texture->setWrap( osg::Texture::WRAP_S, osg::Texture::REPEAT );
// Repeat the texture along the r axis
texture->setWrap( osg::Texture::WRAP_R, osg::Texture::REPEAT );

This code explicitly tells the engine to repeat the texture along the s and r axes when the texture coordinate values exceed 1. The full list of wrap modes:

  1. REPEAT - repeat the texture.
  2. MIRROR - repeat the texture, mirroring it on each repetition.
  3. CLAMP_TO_EDGE - coordinates outside the range from 0 to 1 are clamped to the corresponding texture edge.
  4. CLAMP_TO_BORDER - coordinates outside the range from 0 to 1 produce a user-defined border color (see the sketch below).
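As an illustration, here is a hedged sketch that combines tiled texture coordinates with the wrap modes; the 4x4 tiling factor and the red border color are arbitrary values, and quad is the geometry from the earlier example:

// Texture coordinates beyond 1.0: with REPEAT the image is tiled 4 times along each axis.
osg::ref_ptr<osg::Vec2Array> tiledTexcoords = new osg::Vec2Array;
tiledTexcoords->push_back( osg::Vec2(0.0f, 0.0f) );
tiledTexcoords->push_back( osg::Vec2(0.0f, 4.0f) );
tiledTexcoords->push_back( osg::Vec2(4.0f, 4.0f) );
tiledTexcoords->push_back( osg::Vec2(4.0f, 0.0f) );
quad->setTexCoordArray(0, tiledTexcoords.get());

texture->setWrap( osg::Texture::WRAP_S, osg::Texture::REPEAT );
texture->setWrap( osg::Texture::WRAP_T, osg::Texture::REPEAT );

// Alternatively, clamp to a border and define the border color (red here):
// texture->setWrap( osg::Texture::WRAP_S, osg::Texture::CLAMP_TO_BORDER );
// texture->setWrap( osg::Texture::WRAP_T, osg::Texture::CLAMP_TO_BORDER );
// texture->setBorderColor( osg::Vec4d(1.0, 0.0, 0.0, 1.0) );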

4. Render to texture


The render-to-texture technique allows the developer to create a texture from some three-dimensional sub-scene or model and apply it to a surface in the main scene. This technique is often called texture "baking".

To bake a texture dynamically, three steps must be performed:

  1. Create a texture object for rendering into it.
  2. Render the scene to texture.
  3. Use the resulting texture as intended.

First, we need to create an empty texture object. OSG allows you to create an empty texture of a given size: the setTextureSize() method sets the width and height of the texture, as well as its depth as an additional parameter (for 3D textures).
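A minimal sketch of such an empty render target (the 1024x1024 size, the GL_RGBA internal format and the linear filters are example values that mirror the full listing below):

// An empty 2D texture that will receive the rendered image.
osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
texture->setTextureSize( 1024, 1024 );
texture->setInternalFormat( GL_RGBA );
texture->setFilter( osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR );
texture->setFilter( osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR );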

To render into a texture, attach it to a camera object by calling the attach() method, which takes the texture object as an argument. In addition, this method takes an argument indicating which part of the frame buffer should be rendered into this texture. For example, to transfer the color buffer into the texture, run the following code.

 camera->attach( osg::Camera::COLOR_BUFFER, texture.get() ); 

Other parts of the frame buffer available for rendering include the depth buffer DEPTH_BUFFER, the stencil buffer STENCIL_BUFFER and the additional color buffers COLOR_BUFFER0 through COLOR_BUFFER15. Whether additional color buffers are available, and how many, depends on the video card.
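For example, a hedged sketch of capturing the depth buffer into a separate texture (GL_DEPTH_COMPONENT is used here as the most portable internal format; sized depth formats can be used where available):

// A texture intended to receive the camera's depth buffer.
osg::ref_ptr<osg::Texture2D> depthTexture = new osg::Texture2D;
depthTexture->setTextureSize( 1024, 1024 );
depthTexture->setInternalFormat( GL_DEPTH_COMPONENT );

camera->attach( osg::Camera::DEPTH_BUFFER, depthTexture.get() );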

In addition, the camera that renders into the texture must have its projection matrix and viewport set so that they match the size of the texture. The texture is then updated as each frame is drawn. Note that the main camera should not be used for rendering into the texture, since it renders the main scene and you would simply get a black screen. This requirement may be ignored only when you perform purely off-screen rendering.

5. An example of the implementation of rendering to texture


To demonstrate the render-to-texture technique we will implement the following task: create a square, apply a square texture to it, and render an animated scene into that texture, featuring, of course, our favorite Cessna. The program implementing the example is quite long; nevertheless, its full source code is given below.

Texrender example
main.h

#ifndef MAIN_H
#define MAIN_H

#include <osg/Camera>
#include <osg/Texture2D>
#include <osg/MatrixTransform>
#include <osgDB/ReadFile>
#include <osgGA/TrackballManipulator>
#include <osgViewer/Viewer>

#endif

main.cpp

 #include "main.h" //------------------------------------------------------------------------------ // //------------------------------------------------------------------------------ osg::Geometry *createQuad(const osg::Vec3 &pos, float w, float h) { osg::ref_ptr<osg::Vec3Array> vertices = new osg::Vec3Array; vertices->push_back( pos + osg::Vec3( w / 2, 0.0f, -h / 2) ); vertices->push_back( pos + osg::Vec3( w / 2, 0.0f, h / 2) ); vertices->push_back( pos + osg::Vec3(-w / 2, 0.0f, h / 2) ); vertices->push_back( pos + osg::Vec3(-w / 2, 0.0f, -h / 2) ); osg::ref_ptr<osg::Vec3Array> normals = new osg::Vec3Array; normals->push_back(osg::Vec3(0.0f, -1.0f, 0.0f)); osg::ref_ptr<osg::Vec2Array> texcoords = new osg::Vec2Array; texcoords->push_back( osg::Vec2(1.0f, 1.0f) ); texcoords->push_back( osg::Vec2(1.0f, 0.0f) ); texcoords->push_back( osg::Vec2(0.0f, 0.0f) ); texcoords->push_back( osg::Vec2(0.0f, 1.0f) ); osg::ref_ptr<osg::Geometry> quad = new osg::Geometry; quad->setVertexArray(vertices.get()); quad->setNormalArray(normals.get()); quad->setNormalBinding(osg::Geometry::BIND_OVERALL); quad->setTexCoordArray(0, texcoords.get()); quad->addPrimitiveSet(new osg::DrawArrays(GL_QUADS, 0, 4)); return quad.release(); } //------------------------------------------------------------------------------ // //------------------------------------------------------------------------------ int main(int argc, char *argv[]) { (void) argc; (void) argv; osg::ref_ptr<osg::Node> sub_model = osgDB::readNodeFile("../data/cessna.osg"); osg::ref_ptr<osg::MatrixTransform> transform1 = new osg::MatrixTransform; transform1->setMatrix(osg::Matrix::rotate(0.0, osg::Vec3(0.0f, 0.0f, 1.0f))); transform1->addChild(sub_model.get()); osg::ref_ptr<osg::Geode> model = new osg::Geode; model->addChild(createQuad(osg::Vec3(0.0f, 0.0f, 0.0f), 2.0f, 2.0f)); int tex_widht = 1024; int tex_height = 1024; osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D; texture->setTextureSize(tex_widht, tex_height); texture->setInternalFormat(GL_RGBA); texture->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR); texture->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR); model->getOrCreateStateSet()->setTextureAttributeAndModes(0, texture.get()); osg::ref_ptr<osg::Camera> camera = new osg::Camera; camera->setViewport(0, 0, tex_widht, tex_height); camera->setClearColor(osg::Vec4(1.0f, 1.0f, 1.0f, 1.0f)); camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); camera->setRenderOrder(osg::Camera::PRE_RENDER); camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT); camera->attach(osg::Camera::COLOR_BUFFER, texture.get()); camera->setReferenceFrame(osg::Camera::ABSOLUTE_RF); camera->addChild(transform1.get()); osg::ref_ptr<osg::Group> root = new osg::Group; root->addChild(model.get()); root->addChild(camera.get()); osgViewer::Viewer viewer; viewer.setSceneData(root.get()); viewer.setCameraManipulator(new osgGA::TrackballManipulator); viewer.setUpViewOnSingleScreen(0); camera->setProjectionMatrixAsPerspective(30.0, static_cast<double>(tex_widht) / static_cast<double>(tex_height), 0.1, 1000.0); float dist = 100.0f; float alpha = 10.0f * 3.14f / 180.0f; osg::Vec3 eye(0.0f, -dist * cosf(alpha), dist * sinf(alpha)); osg::Vec3 center(0.0f, 0.0f, 0.0f); osg::Vec3 up(0.0f, 0.0f, -1.0f); camera->setViewMatrixAsLookAt(eye, center, up); float phi = 0.0f; float delta = -0.01f; while (!viewer.done()) { transform1->setMatrix(osg::Matrix::rotate(static_cast<double>(phi), osg::Vec3(0.0f, 0.0f, 1.0f))); viewer.frame(); 
phi += delta; } return 0; } 


To create a square, we write a separate free function.

osg::Geometry *createQuad(const osg::Vec3 &pos, float w, float h)
{
    osg::ref_ptr<osg::Vec3Array> vertices = new osg::Vec3Array;
    vertices->push_back( pos + osg::Vec3( w / 2, 0.0f, -h / 2) );
    vertices->push_back( pos + osg::Vec3( w / 2, 0.0f,  h / 2) );
    vertices->push_back( pos + osg::Vec3(-w / 2, 0.0f,  h / 2) );
    vertices->push_back( pos + osg::Vec3(-w / 2, 0.0f, -h / 2) );

    osg::ref_ptr<osg::Vec3Array> normals = new osg::Vec3Array;
    normals->push_back(osg::Vec3(0.0f, -1.0f, 0.0f));

    osg::ref_ptr<osg::Vec2Array> texcoords = new osg::Vec2Array;
    texcoords->push_back( osg::Vec2(1.0f, 1.0f) );
    texcoords->push_back( osg::Vec2(1.0f, 0.0f) );
    texcoords->push_back( osg::Vec2(0.0f, 0.0f) );
    texcoords->push_back( osg::Vec2(0.0f, 1.0f) );

    osg::ref_ptr<osg::Geometry> quad = new osg::Geometry;
    quad->setVertexArray(vertices.get());
    quad->setNormalArray(normals.get());
    quad->setNormalBinding(osg::Geometry::BIND_OVERALL);
    quad->setTexCoordArray(0, texcoords.get());
    quad->addPrimitiveSet(new osg::DrawArrays(GL_QUADS, 0, 4));

    return quad.release();
}

The function takes as input the position of the center of the square and its dimensions. It creates the vertex array, the normal array and the texture coordinates, and then returns the resulting geometry.

In the body of the main program we load the Cessna model

 osg::ref_ptr<osg::Node> sub_model = osgDB::readNodeFile("../data/cessna.osg"); 

To animate this model, create and initialize a rotation transform around the Z axis.

osg::ref_ptr<osg::MatrixTransform> transform1 = new osg::MatrixTransform;
transform1->setMatrix(osg::Matrix::rotate(0.0, osg::Vec3(0.0f, 0.0f, 1.0f)));
transform1->addChild(sub_model.get());

Now create the model for the main scene: the square onto which we will render the texture

osg::ref_ptr<osg::Geode> model = new osg::Geode;
model->addChild(createQuad(osg::Vec3(0.0f, 0.0f, 0.0f), 2.0f, 2.0f));

Create an empty texture for a 1024x1024-pixel square with the RGBA pixel format (32-bit color: three color components plus an alpha channel)

int tex_width = 1024;
int tex_height = 1024;

osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
texture->setTextureSize(tex_width, tex_height);
texture->setInternalFormat(GL_RGBA);
texture->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
texture->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);

Apply this texture to the square model.

 model->getOrCreateStateSet()->setTextureAttributeAndModes(0, texture.get()); 

Then create a camera that will bake the texture.

osg::ref_ptr<osg::Camera> camera = new osg::Camera;
camera->setViewport(0, 0, tex_width, tex_height);
camera->setClearColor(osg::Vec4(1.0f, 1.0f, 1.0f, 1.0f));
camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

The camera's viewport matches the size of the texture. In addition, do not forget to set the clear color and the clear mask, indicating that both the color buffer and the depth buffer must be cleared. Next, set up the camera for rendering into the texture

camera->setRenderOrder(osg::Camera::PRE_RENDER);
camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
camera->attach(osg::Camera::COLOR_BUFFER, texture.get());

The PRE_RENDER order indicates that this camera is rendered before the main scene. We specify an FBO (frame buffer object) as the render target and attach our texture to the camera. Now we set the camera to work in an absolute reference frame and, as its scene, attach the subtree we want to render into the texture: the rotation transform with the Cessna model attached to it

camera->setReferenceFrame(osg::Camera::ABSOLUTE_RF);
camera->addChild(transform1.get());

Create a root group node, adding to it the main model (the square) and the camera that renders the texture

osg::ref_ptr<osg::Group> root = new osg::Group;
root->addChild(model.get());
root->addChild(camera.get());

Create and configure the viewer

osgViewer::Viewer viewer;
viewer.setSceneData(root.get());
viewer.setCameraManipulator(new osgGA::TrackballManipulator);
viewer.setUpViewOnSingleScreen(0);

Configure the camera's projection matrix: a perspective projection defined by the viewing frustum parameters

camera->setProjectionMatrixAsPerspective(30.0, static_cast<double>(tex_width) / static_cast<double>(tex_height), 0.1, 1000.0);

Set up the view matrix, which positions the camera in space relative to the origin of the sub-scene

float dist = 100.0f;
float alpha = 10.0f * 3.14f / 180.0f;
osg::Vec3 eye(0.0f, -dist * cosf(alpha), dist * sinf(alpha));
osg::Vec3 center(0.0f, 0.0f, 0.0f);
osg::Vec3 up(0.0f, 0.0f, -1.0f);
camera->setViewMatrixAsLookAt(eye, center, up);

Finally, we animate and display the scene by changing the angle of the plane's rotation around the Z axis on each frame.

float phi = 0.0f;
float delta = -0.01f;

while (!viewer.done())
{
    transform1->setMatrix(osg::Matrix::rotate(static_cast<double>(phi), osg::Vec3(0.0f, 0.0f, 1.0f)));
    viewer.frame();
    phi += delta;
}

As a result, we get a rather interesting picture.



In this example we implemented a simple scene animation, but keep in mind that unrolling the run() loop manually and changing rendering parameters before or after drawing a frame is not safe in terms of access to data from different threads. Since OSG uses multi-threaded rendering, it provides standard mechanisms for embedding your own actions into the rendering process with thread-safe access to the data.
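One such standard mechanism is the update callback: the per-frame work is moved into an osg::NodeCallback attached to the transform node, and the viewer's usual run() loop is used instead of a hand-written one. A sketch, assuming the transform1 node from the example above (the rotation step is the same arbitrary value):

// osg::NodeCallback is declared in <osg/NodeCallback>.
class RotateCallback : public osg::NodeCallback
{
public:
    RotateCallback() : phi(0.0) {}

    virtual void operator()(osg::Node *node, osg::NodeVisitor *nv)
    {
        // The callback is invoked during the update traversal, which is a safe
        // place to modify the scene graph.
        osg::MatrixTransform *mt = static_cast<osg::MatrixTransform *>(node);
        mt->setMatrix(osg::Matrix::rotate(phi, osg::Vec3(0.0f, 0.0f, 1.0f)));
        phi -= 0.01;
        traverse(node, nv);  // continue the traversal to the children
    }

private:
    double phi;
};

// Usage: attach the callback and let the viewer drive the frame loop.
// transform1->setUpdateCallback(new RotateCallback);
// return viewer.run();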

6. Saving the result of rendering to a file


OSG supports the ability to attach an osg::Image object to a camera and save the contents of the frame buffer into the image's data buffer. After that, the data can be written to disk using the osgDB::writeImageFile() function

osg::ref_ptr<osg::Image> image = new osg::Image;
image->allocateImage( width, height, 1, GL_RGBA, GL_UNSIGNED_BYTE );
camera->attach( osg::Camera::COLOR_BUFFER, image.get() );
...
osgDB::writeImageFile( *image, "saved_image.bmp" );
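When exactly to write the file is up to the application; one possible (hedged) option is a final draw callback on the camera, which is executed after the camera has finished rendering the frame. The file name here is arbitrary, and osgDB::writeImageFile() requires the <osgDB/WriteFile> header:

// Save the attached image once the camera has finished drawing.
class SaveImageCallback : public osg::Camera::DrawCallback
{
public:
    SaveImageCallback(osg::Image *img) : _image(img) {}

    virtual void operator()(osg::RenderInfo & /*renderInfo*/) const
    {
        if (_image.valid())
            osgDB::writeImageFile(*_image, "saved_image.bmp");
    }

private:
    osg::ref_ptr<osg::Image> _image;
};

// Usage:
// camera->setFinalDrawCallback(new SaveImageCallback(image.get()));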

Conclusion


Perhaps the material described in this article will seem trivial. However, it covers the very basics of working with textures in OpenSceneGraph, on which the more complex techniques of this engine are built, and we will certainly talk about them in the future.

To be continued...

Source: https://habr.com/ru/post/437624/