I am writing a JavaFX application that displays a 3D model (.obj file) on screen; for this I use jimObjModelImporterJFX. In a second window I need to show the visible contour of that model, i.e. the lines that a person perceives as its edges.

I do not quite understand how to approach this. From the Mesh I can call getPoints(), which returns a float array laid out as x, y, z triplets, but I do not see how to select only the visible points/edges from it.
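For reference, a JavaFX TriangleMesh (in the default POINT_TEXCOORD vertex format) exposes its geometry as two flat arrays: getPoints() holds x, y, z per vertex, and getFaces() holds (point index, texture-coordinate index) pairs for each triangle corner. A minimal sketch of that layout using plain arrays, as they would come back from getPoints().toArray(null) and getFaces().toArray(null) — the class and method names here are mine, not JavaFX API:

```java
// Sketch: how TriangleMesh lays out its data (plain arrays of the same
// shape as getPoints().toArray(null) / getFaces().toArray(null)).
public class MeshLayoutDemo {

    /** Returns the (x, y, z) of the given corner (0..2) of the given face. */
    static float[] vertexOf(float[] points, int[] faces, int face, int corner) {
        // faces holds 6 ints per triangle: (pointIndex, texCoordIndex) x 3 corners
        int pointIndex = faces[face * 6 + corner * 2];
        // points holds 3 floats per vertex: x, y, z
        return new float[]{
                points[pointIndex * 3],
                points[pointIndex * 3 + 1],
                points[pointIndex * 3 + 2]
        };
    }

    public static void main(String[] args) {
        // a hypothetical single triangle, not the teapot mesh
        float[] points = {0f, 0f, 0f,   // vertex 0
                          1f, 0f, 0f,   // vertex 1
                          0f, 1f, 0f};  // vertex 2
        int[] faces = {0, 0, 1, 0, 2, 0};  // one face, all texCoord indices 0
        float[] v = vertexOf(points, faces, 0, 2);
        System.out.println("corner 2 of face 0: (" + v[0] + ", " + v[1] + ", " + v[2] + ")");
    }
}
```

So the points array alone carries no visibility information; visibility only appears once you relate faces (and their normals) to the camera, as discussed below in the question.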

What is implemented in the program now: [screenshot of the current output]
What I need to get: [screenshot of the desired contour]

As I understand it, in the second window I need to implement my own rendering of the model and apply the following rules:

  1. The edge must not be covered by other polygons.
  2. The normals of its two adjacent faces form an angle greater than K radians.
  3. The dot products of the adjacent face normals with the camera's Z axis have opposite signs.

But I do not understand how to implement these rules.
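Rules 2 and 3 can be evaluated directly on the two mesh arrays, without rendering anything: compute one normal per face, then an edge shared by a camera-facing and a camera-averted triangle lies on the silhouette (rule 3); comparing adjacent normals against a threshold angle K gives the crease edges (rule 2). Rule 1 (occlusion) still requires a depth test and is not handled here. A minimal sketch of rule 3 under those assumptions — all names are mine, not the asker's code:

```java
import java.util.*;

// Sketch: silhouette-edge classification on raw TriangleMesh arrays.
// An edge is a silhouette edge when its two adjacent faces point
// towards and away from the view direction (opposite-sign dot products).
public class ContourEdges {

    static double[] faceNormal(float[] p, int[] faces, int f) {
        int a = faces[f * 6] * 3, b = faces[f * 6 + 2] * 3, c = faces[f * 6 + 4] * 3;
        double ux = p[b] - p[a], uy = p[b + 1] - p[a + 1], uz = p[b + 2] - p[a + 2];
        double vx = p[c] - p[a], vy = p[c + 1] - p[a + 1], vz = p[c + 2] - p[a + 2];
        // cross product u x v
        return new double[]{uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx};
    }

    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    /** Returns vertex-index pairs of silhouette edges for view direction 'view'. */
    static List<int[]> silhouetteEdges(float[] points, int[] faces, double[] view) {
        int faceCount = faces.length / 6;  // 6 ints per face: (point, texCoord) x 3
        double[] facing = new double[faceCount];
        for (int f = 0; f < faceCount; f++) {
            facing[f] = dot(faceNormal(points, faces, f), view);
        }
        // map each undirected edge (lo, hi vertex index) to the faces sharing it
        Map<Long, List<Integer>> edgeFaces = new HashMap<>();
        for (int f = 0; f < faceCount; f++) {
            int[] v = {faces[f * 6], faces[f * 6 + 2], faces[f * 6 + 4]};
            for (int i = 0; i < 3; i++) {
                long lo = Math.min(v[i], v[(i + 1) % 3]);
                long hi = Math.max(v[i], v[(i + 1) % 3]);
                edgeFaces.computeIfAbsent(lo << 32 | hi, k -> new ArrayList<>()).add(f);
            }
        }
        List<int[]> result = new ArrayList<>();
        for (Map.Entry<Long, List<Integer>> e : edgeFaces.entrySet()) {
            List<Integer> fs = e.getValue();
            // rule 3: one adjacent face towards the camera, the other away
            if (fs.size() == 2 && facing[fs.get(0)] * facing[fs.get(1)] < 0) {
                result.add(new int[]{(int) (e.getKey() >> 32),
                                     (int) (e.getKey() & 0xFFFFFFFFL)});
            }
        }
        return result;
    }
}
```

Rule 2 would be an extra check in the same loop: for every shared edge, take acos of the dot product of the two normalized face normals and keep the edge when that angle exceeds K. Note this assumes the winding of the .obj faces is consistent, otherwise the sign test is unreliable.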

Alternatively, as I pictured it: take the model shown in the first window, extract the points of its visible contour, and draw just that contour in the second window.

Application code:

    import com.interactivemesh.jfx.importer.ImportException;
    import com.interactivemesh.jfx.importer.obj.ObjModelImporter;
    import javafx.application.Application;
    import javafx.geometry.Point3D;
    import javafx.scene.*;
    import javafx.scene.input.KeyCode;
    import javafx.scene.paint.Color;
    import javafx.scene.shape.DrawMode;
    import javafx.scene.shape.MeshView;
    import javafx.scene.transform.Rotate;
    import javafx.stage.Stage;

    import java.net.URL;
    import java.util.Arrays;

    public class OutputModel extends Application {

        MeshView[] one;
        MeshView[] second;
        private double rotateX = 0;
        private double rotateY = 0;
        double cameraDistance = 450;
        double cameraX = 100;
        double cameraY = -300;

        private MeshView[] get3dModel() {
            ObjModelImporter objImporter = new ObjModelImporter();
            try {
                URL modelUrl = this.getClass().getResource("/teapot.obj");
                objImporter.read(modelUrl);
            } catch (ImportException e) {
                // handle exception
            }
            return objImporter.getImport();
        }

        @Override
        public void start(Stage primaryStage) throws Exception {
            one = get3dModel();
            Arrays.stream(one).forEach(it -> {
                it.setTranslateX(450);
                it.setTranslateY(-30);
                it.setTranslateZ(450);
                it.setScaleX(200.0);
                it.setScaleY(200.0);
                it.setScaleZ(200.0);
            });

            second = get3dModel();
            Arrays.stream(second).forEach(it -> {
                it.setTranslateX(450);
                it.setTranslateY(-30);
                it.setTranslateZ(450);
                it.setScaleX(200.0);
                it.setScaleY(200.0);
                it.setScaleZ(200.0);
                it.setDrawMode(DrawMode.LINE);
            });

            Stage secondStage = new Stage();
            Rotate rxBox = new Rotate(0, Rotate.X_AXIS);
            Rotate ryBox = new Rotate(0, Rotate.Y_AXIS);
            Rotate rzBox = new Rotate(0, Rotate.Z_AXIS);

            // Add the Shapes to the Group
            Group rootTwo = new Group(second);
            rootTwo.setDepthTest(DepthTest.ENABLE);
            rootTwo.getTransforms().addAll(rxBox, ryBox, rzBox);

            // Create a Scene with depth buffer enabled
            Scene sceneSecond = new Scene(rootTwo, 400, 300, true, SceneAntialiasing.BALANCED);

            // Create a Camera to view the 3D Shapes
            PerspectiveCamera camera = new PerspectiveCamera(false);
            camera.setTranslateX(cameraX);
            camera.setTranslateY(cameraY);
            camera.setNearClip(0.1);
            camera.setFarClip(10000.0);
            camera.setTranslateZ(cameraDistance);

            PerspectiveCamera cameraTwo = new PerspectiveCamera(false);
            cameraTwo.setTranslateX(cameraX);
            cameraTwo.setTranslateY(cameraY);
            cameraTwo.setNearClip(0.1);
            cameraTwo.setFarClip(10000.0);
            cameraTwo.setTranslateZ(cameraDistance);

            // Add the Camera to the Scene
            sceneSecond.setCamera(cameraTwo);
            sceneSecond.setFill(Color.BLACK);

            // Add the Scene to the Stage and display it
            secondStage.setScene(sceneSecond);
            secondStage.setTitle("OutputWireframe");
            secondStage.show();

            // Add the Shapes to the Group
            Group root = new Group(one);
            root.setDepthTest(DepthTest.ENABLE);
            root.getTransforms().addAll(rxBox, ryBox, rzBox);

            // Create a Scene with depth buffer enabled
            Scene scene = new Scene(root, 400, 300, true, SceneAntialiasing.BALANCED);
            Rotate rotationX = new Rotate(0, Rotate.X_AXIS);
            Rotate rotationY = new Rotate(0, Rotate.Y_AXIS);
            scene.setFill(Color.BLACK);
            root.getTransforms().addAll(rotationX, rotationY);

            scene.setOnKeyPressed(event -> {
                if (event.getCode().equals(KeyCode.UP)) {
                    rotateY += 5;
                    root.setRotationAxis(Rotate.Y_AXIS);
                    root.setRotate(rotateY);
                    rootTwo.setRotationAxis(Rotate.Y_AXIS);
                    rootTwo.setRotate(rotateY);
                }
                if (event.getCode().equals(KeyCode.DOWN)) {
                    rotateY -= 5;
                    root.setRotationAxis(Rotate.Y_AXIS);
                    root.setRotate(rotateY);
                    rootTwo.setRotationAxis(Rotate.Y_AXIS);
                    rootTwo.setRotate(rotateY);
                }
                if (event.getCode().equals(KeyCode.LEFT)) {
                    rotateX -= 5;
                    root.setRotationAxis(Rotate.X_AXIS);
                    root.setRotate(rotateX);
                    rootTwo.setRotationAxis(Rotate.X_AXIS);
                    rootTwo.setRotate(rotateX);
                }
                if (event.getCode().equals(KeyCode.RIGHT)) {
                    rotateX += 5;
                    root.setRotationAxis(Rotate.X_AXIS);
                    root.setRotate(rotateX);
                    rootTwo.setRotationAxis(Rotate.X_AXIS);
                    rootTwo.setRotate(rotateX);
                }
                if (event.getCode().equals(KeyCode.EQUALS)) {
                    cameraDistance += 10;
                    camera.setTranslateZ(cameraDistance);
                    cameraTwo.setTranslateZ(cameraDistance);
                }
                if (event.getCode().equals(KeyCode.MINUS)) {
                    cameraDistance -= 10;
                    camera.setTranslateZ(cameraDistance);
                    cameraTwo.setTranslateZ(cameraDistance);
                }
                if (event.getCode().equals(KeyCode.W)) {
                    cameraY += 5;
                    camera.setTranslateY(cameraY);
                    cameraTwo.setTranslateY(cameraY);
                }
                if (event.getCode().equals(KeyCode.S)) {
                    cameraY -= 5;
                    camera.setTranslateY(cameraY);
                    cameraTwo.setTranslateY(cameraY);
                }
                if (event.getCode().equals(KeyCode.A)) {
                    cameraX -= 5;
                    camera.setTranslateX(cameraX);
                    cameraTwo.setTranslateX(cameraX);
                }
                if (event.getCode().equals(KeyCode.D)) {
                    cameraX += 5;
                    camera.setTranslateX(cameraX);
                    cameraTwo.setTranslateX(cameraX);
                }
                sendMeshViews(cameraTwo);
            });

            // Add the Camera to the Scene
            scene.setCamera(camera);

            // Add the Scene to the Stage and display it
            primaryStage.setScene(scene);
            primaryStage.setTitle("OutputModel");
            primaryStage.show();
        }

        private void sendMeshViews(PerspectiveCamera camera) {
            Point3D oz = new Point3D(camera.getTranslateX(), camera.getTranslateY(),
                    camera.getTranslateZ()).normalize();
            Arrays.stream(second).parallel().forEach(it -> {
            });
            // TODO: 11.01.2017 write the processing so that only visible edges are passed on
        }

        public static void main(String[] args) {
            launch(args);
        }
    }
  • The contour should be the same width as the mesh lines of the model; more precisely, it should run along the middle of the model's edge, if I understand you correctly. - Alexesy Ivashin
  • @Kromster, I do not know whether that will be accepted; as I understood it, the rules given are a requirement, not a recommendation. - Alexesy Ivashin
  • Clarify where these mandatory "rules" come from. If they are exactly as stated, then your example picture is incorrect (the contour would most likely also be drawn at sharp angles, for example where the spout joins the body of the teapot). - Kromster
  • @Kromster Yes, you are right. I have corrected the description of the question and added a picture so that no one is confused. - Alexesy Ivashin

2 Answers

Try doing what Blender does: first draw the model as a wireframe with thick lines, then draw the filled model on top of it. Since the wireframe lines are thick, the filled model will cover everything except the contour. If you need only the contour, overlay a model painted black.

  • I advise you to create a separate question (about drawing the external contour of a 3D model) and post this answer there. Unfortunately, the answer no longer fits here and may be flagged or deleted. - Kromster

Given the stated requirements, you need to look towards a two-pass render:

  1. Prepare a framebuffer the size of the rendering window and attach a texture to it.
  2. In the first pass, render the scene's normals into that buffer/texture (something like color.rgb = normal.xyz), with the depth test enabled.
  3. In the second pass, render this texture 1:1 and detect edges from the normals (compare each pixel's normal with those of its neighbouring pixels, which lie at offsets of 1.0 / texture_width).
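JavaFX does not expose custom shaders, so the second pass cannot be written as a fragment shader there; as a rough CPU equivalent of the same idea, here is a sketch that scans a buffer of per-pixel normals (as would be produced by step 2) and marks a pixel as an edge when its normal differs too much from a neighbour's. The names and the null-for-background convention are my assumptions:

```java
// Sketch of the answer's second pass on the CPU: edge detection over a
// buffer of per-pixel unit normals.
public class NormalEdgeDetect {

    /** normals[y][x] = unit normal, or null where no geometry was rendered. */
    static boolean[][] detectEdges(float[][][] normals, double cosThreshold) {
        int h = normals.length, w = normals[0].length;
        boolean[][] edge = new boolean[h][w];
        int[][] offsets = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};  // 1px neighbours
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (normals[y][x] == null) continue;
                for (int[] o : offsets) {
                    int ny = y + o[0], nx = x + o[1];
                    if (ny < 0 || ny >= h || nx < 0 || nx >= w) continue;
                    float[] a = normals[y][x];
                    float[] b = normals[ny][nx];
                    // geometry next to background is the outer contour
                    if (b == null) { edge[y][x] = true; break; }
                    double dot = a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
                    // normals diverging by more than the threshold angle
                    if (dot < cosThreshold) { edge[y][x] = true; break; }
                }
            }
        }
        return edge;
    }
}
```

In JavaFX such a normal buffer could plausibly be obtained by snapshotting a scene whose material encodes normals as colors, then reading pixels back via PixelReader, though that round-trip is slow compared to a real shader pass.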