Why does it happen?
Before the "main" color write, opaque shaders (and not only opaque ones, but that is a separate topic) write each pixel's depth into a buffer named, unsurprisingly, the depth buffer, also called the z-buffer. The logic is quite simple: objects are drawn from the closest to the farthest, and this approach makes it possible to discard invisible pixels wherever geometry in the scene overlaps.
Before processing the next fragment (pixel) of the framebuffer, the GPU compares that fragment's depth value against the z-buffer to decide whether the pixel needs to be processed at all.
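To make the comparison concrete, here is a minimal ShaderLab sketch of that render state (the shader is my own illustration, not from the article; ZTest LEqual and ZWrite On are also Unity's defaults for opaque geometry, so an ordinary opaque shader behaves like this even without spelling them out):

```
Shader "Sketch/OpaqueDepth"
{
    SubShader
    {
        Tags { "RenderType" = "Opaque" "Queue" = "Geometry" }
        Pass
        {
            ZTest LEqual // keep a fragment only if its depth is <= the stored z-buffer value
            ZWrite On    // a surviving fragment overwrites the z-buffer with its own depth

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float4 vert (float4 vertex : POSITION) : SV_POSITION
            {
                return UnityObjectToClipPos(vertex);
            }

            fixed4 frag () : SV_Target
            {
                // flat white: the point of the sketch is the depth state above
                return fixed4(1, 1, 1, 1);
            }
            ENDCG
        }
    }
}
```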
All this background is needed to understand why translucent objects are such a headache when rendering. As an example, let's take an ordinary cube with a translucent material and double-sided rendering and see what potential problem arises with translucent objects. To keep the drawings simple, let's say that this particular cube consists of quadrangles rather than triangles.
For clarity, I have illustrated the order in which the GPU might draw the polygons (in general, it depends on how the polygons are indexed in the index buffer), and immediately below I drew what we expect to see from the GPU:
Even without drawing the last two faces, it is clear that something went wrong: the pixels of polygons drawn "later" are written over the already rendered pixels, even though they should have failed the ZTest - the front polygon is obviously closer than all the other polygons of our cube.
Let's go back to opaque objects. Why does the depth buffer work for them? It's simple: thanks to front-to-back rendering (from near objects to far ones), the values in the depth buffer can only ever change to "closer" ones (depending on how the buffer is set up, the closest depth is sometimes 1 and sometimes 0).
With transparent objects, the opposite is true:
They are drawn back-to-front. A reservation is worth making here: this is not the case everywhere; it is the approach Unity currently uses, and with the introduction of the Scriptable Render Pipeline you can rewrite the renderer however you please, including front-to-back transparent rendering - more about that here. Why back-to-front? If you place two translucent objects with, say, alpha = 0.5, you expect the far object to blend with the opaque background first, and then the near object to blend with that updated background. The order genuinely changes the result: a red and a green object at alpha = 0.5 over a black background blend to (0.25, 0.5, 0) when the green one is in front and drawn last, but to (0.5, 0.25, 0) if the red one is incorrectly drawn last.
This approach requires not updating the depth buffer. Again, why? There are two reasons: first, we still test each pixel against the depth, so as not to draw pixels that are occluded anyway; second, if we start updating the depth buffer, artifacts appear when two semi-transparent objects overlap each other:
In this case, I placed one translucent object (a helmet) inside another (a wall) so that the helmet sticks out on both sides of the wall. Without depth buffer updates, you can see that the two objects blend with each other normally, but as soon as we enable depth buffer updates, half of the helmet "behind the wall" is no longer drawn.
PS
The helmet with ZWrite Off looks as if it were opaque - this is because the material color of both the wall and the helmet is white.
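Putting the pieces together: the render state described above for a translucent object - alpha blending plus ZWrite Off - looks like this in ShaderLab (again a minimal sketch of my own, not code from the article):

```
Shader "Sketch/TransparentZWriteOff"
{
    Properties
    {
        _Color ("Color", Color) = (1, 1, 1, 0.5)
    }
    SubShader
    {
        Tags { "RenderType" = "Transparent" "Queue" = "Transparent" }
        Pass
        {
            // final = src.rgb * src.a + dst.rgb * (1 - src.a)
            Blend SrcAlpha OneMinusSrcAlpha
            ZWrite Off   // read the z-buffer, but never update it
            ZTest LEqual // fragments hidden behind opaque geometry are still rejected

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            fixed4 _Color;

            float4 vert (float4 vertex : POSITION) : SV_POSITION
            {
                return UnityObjectToClipPos(vertex);
            }

            fixed4 frag () : SV_Target
            {
                return _Color; // alpha comes from the material color
            }
            ENDCG
        }
    }
}
```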
The solution to this problem
First, you need to understand that there is no panacea here. The problem is that the pixels of translucent objects do not like fighting for the right to be drawn - so they simply don't. If these were different objects, everything would be resolved by simply sorting the translucent objects by depth; but when the pixels belong to one and the same object, we get the rather difficult problem described above.
And now, after a mountain of theory, we get to the ways of solving this problem:
Updating the depth buffer, or what ZWrite On is
I do not recommend unconditionally writing to the depth buffer. Ideally, you write a shader variant (you will need to write your shader from scratch), or make two different shaders in Shader Graph: one with ZWrite On for alpha in [1..0.5) and one with ZWrite Off for alpha in [0.5..0]. Ideally, you could even slightly rewrite the Lightweight Render Pipeline so that it checks whether semi-transparent objects intersect when sorting them. In general, there are a lot of options with this approach.
The cheapest way is a single shader with ZWrite On, but it draws only the "outer shell" of the object; if there is something inside, you need to combine it with ZWrite Off for different alpha values:
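By the way, if maintaining two near-identical shader files feels wasteful, ShaderLab allows render state to reference a material property in square brackets, so both variants can live in one file (a sketch of my own; the _ZWrite property name is my assumption, not something from the article):

```
Shader "Sketch/TransparentZWriteToggle"
{
    Properties
    {
        _Color ("Color", Color) = (1, 1, 1, 0.5)
        // 0 = Off, 1 = On; set per material depending on the alpha range
        [Enum(Off, 0, On, 1)] _ZWrite ("ZWrite", Float) = 0
    }
    SubShader
    {
        Tags { "RenderType" = "Transparent" "Queue" = "Transparent" }
        Pass
        {
            Blend SrcAlpha OneMinusSrcAlpha
            ZWrite [_ZWrite] // the render state reads the material property
            ZTest LEqual

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            fixed4 _Color;

            float4 vert (float4 vertex : POSITION) : SV_POSITION
            {
                return UnityObjectToClipPos(vertex);
            }

            fixed4 frag () : SV_Target
            {
                return _Color;
            }
            ENDCG
        }
    }
}
```

A material with _ZWrite = 1 then serves the more opaque alpha range, and one with _ZWrite = 0 the rest, without duplicating the shader file.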
How to do this in Shader Graph?
Unfortunately, Shader Graph, unlike its counterpart Amplify Shader Editor, does not make it easy to change ZWrite and the other subshader goodies. Well... that's a beta for you - someday they will add it :)
So we are not above creating a plain new shader file - any kind will do, as long as it is not a Graph (we do not need the graph editor for this step), for example a Standard Surface Shader. Name it whatever you like, open it, delete all of its content, save it, and set it aside for now.
Next, create the shader you need in Shader Graph and set Surface to Transparent. Once it is created and tested and everything works, right-click the Master Node and select Show Generated Code. Wait for your code editor to open with our shader, then select all of it and copy it. Then open the first shader file, the one we emptied, and paste the copied code into it.
Then, using Ctrl + F, find ZWrite Off and change it to ZWrite On. Be careful: it must be replaced in the Forward pass of the subshader, and this is important:
It is important because a subshader may contain several passes, and each of them will have its own ZWrite parameter:
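Roughly, the structure to look for in the generated code is the following (a trimmed, purely illustrative excerpt - the exact code Shader Graph generates differs between versions, and the shading code itself is omitted here):

```
SubShader
{
    Tags { "RenderPipeline" = "LightweightPipeline" }

    Pass
    {
        Tags { "LightMode" = "LightweightForward" } // the Forward pass: edit this one
        Blend SrcAlpha OneMinusSrcAlpha
        ZWrite On // was: ZWrite Off
        // HLSLPROGRAM ... ENDHLSL (generated shading code omitted)
    }

    Pass
    {
        Tags { "LightMode" = "ShadowCaster" } // a different pass: leave its ZWrite alone
        ZWrite On
        // HLSLPROGRAM ... ENDHLSL (generated shading code omitted)
    }
}
```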
PS
In general, there are a couple of other options, for example a Cutout shader, or continuously re-indexing the vertices for each camera position and rotation. The first option does not suit us because we need a semi-transparent object, and the second does not suit us for two reasons: first, it is too difficult to implement without at least some knowledge of the GPU and the rendering pipeline; second, the technique is clearly not suitable for runtime.