Let's separate the entities first. The process involves:
- Production server.
- Test server.
- Git server.
- Some kind of "deployment agent". Right now you deploy by hand; it would be better to delegate this task to a continuous integration (CI) server. In the simplest version this is implemented with hooks on the git server (see the sketch after this list).
- Application source code.
- The application build (distribution) - in your case, the files minified with Gulp.
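To make the "hooks on the git server" option from the deployment-agent point concrete, here is a minimal sketch of a post-receive hook. The paths, branch name and build command are assumptions for illustration, not part of your setup:

#!/bin/sh
# Sketch of a post-receive hook on the git server (all paths are assumptions).
# On a push to master it checks the branch out into a working directory and rebuilds.
TARGET=/var/www/app        # directory served to users (assumption)
GIT_DIR=/srv/git/app.git   # bare repository on the server (assumption)

while read oldrev newrev ref; do
  if [ "$ref" = "refs/heads/master" ]; then
    git --work-tree="$TARGET" --git-dir="$GIT_DIR" checkout -f master
    cd "$TARGET" && npm install && node_modules/.bin/gulp compress
  fi
done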
At the moment one machine performs the tasks of all these servers, and the build is stored together with the code. Let us nevertheless treat them as separate things and strive for low coupling between them (as between classes in OOP). This will make the system flexible and much more scalable. In particular, we will not rely on something lying next to us on the same disk or being reachable over the local network.
What should be done:
- Separate the build from the code. On commit we run gulp, get the minified files, (optionally) pack them into an archive, move it to the server and deploy it there (see the shell sketch after this list). The minified code does not need to be stored, because we can always regenerate it from the source.
- Describe the deployment scenarios in code. At the very least, provide a place for the actions performed on the server: filling in configs, restarting services, etc. As mentioned above, this is done via a git hook and/or the CI server.
- Automate the reaction to changes in the repository: a push to certain branches should trigger a deployment to the corresponding environment. Since the deployment script now lives in our code, we can do this.
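Before automating, it helps to see the same sequence from the first point as plain shell commands. The host name, target paths and restart command below are assumptions for illustration only:

npm install
node_modules/.bin/gulp compress                  # build the minified files into dist/
tar -czvf packaged.tar.gz dist                   # (optionally) pack them into an archive
scp packaged.tar.gz deploy@example.com:/tmp/     # move the archive to the server (host is an assumption)
ssh deploy@example.com 'tar -xzf /tmp/packaged.tar.gz -C /var/www/app && sudo systemctl restart my-app'  # unpack and restart (paths/service are assumptions)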
And on the organizational side:
- Revise the release process. Right now it seems that every typo fix gets released and anyone can release. It is also worth thinking about protecting the master branch (including from your own careless actions).
Working example
I propose to run an experiment using GitLab.com as an example (i.e., we will use the cloud service). The choice is subjective; here are the arguments for:
- You can create free private repositories (i.e., your commercial code is in relative safety).
- GitLab CI is included and tightly integrated with the repository.
- Free shared runners are available (including for private repositories) - roughly speaking, virtual machines on which our tasks will run.
The argument against: if the project grows, at some point the free plan will no longer be enough for all tasks.
A working example with pipelines and logs is published on GitLab.com.
The main file, which describes the pipeline to be executed, is located in the project's root folder and is called .gitlab-ci.yml.
image: alpine

stages:
  - gulp
  - test
  - package
  - deploy

cache:
  paths:
    - node_modules/

gulp:
  stage: gulp
  image: node:latest
  before_script:
    - npm install
  script: node_modules/.bin/gulp compress
  artifacts:
    paths:
      - dist
    expire_in: 10 minutes

test:
  stage: test
  script:
    - ls dist
    - echo 'run tests here'

package:
  stage: package
  script: tar -czvf packaged.tar.gz dist
  artifacts:
    paths:
      - packaged.tar.gz

deploy_prod:
  stage: deploy
  script: echo 'deploy on master'
  only:
    - master

deploy_test:
  stage: deploy
  script: echo 'deploy on develop'
  only:
    - develop
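The deploy jobs above only print a message. In a real pipeline the script section would contain the actual deployment commands, for example copying the packaged artifact over SSH. This is only a sketch: the $SSH_PRIVATE_KEY variable, the host and the target path are assumptions you would define yourself (e.g. as CI/CD variables in the project settings).

deploy_prod:
  stage: deploy
  script:
    # Install an SSH client in the alpine image and load the deployment key ($SSH_PRIVATE_KEY is an assumed CI variable).
    - apk add --no-cache openssh-client
    - eval $(ssh-agent -s) && echo "$SSH_PRIVATE_KEY" | ssh-add -
    # Copy the archive to the production server and unpack it (host and path are assumptions).
    - scp -o StrictHostKeyChecking=no packaged.tar.gz deploy@prod.example.com:/tmp/
    - ssh -o StrictHostKeyChecking=no deploy@prod.example.com 'tar -xzf /tmp/packaged.tar.gz -C /var/www/app'
  only:
    - master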
The dependencies are declared in the npm config, package.json. The gulpfile.js file describes the compress task that does the minification; a sketch follows below.
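A minimal sketch of what gulpfile.js might contain. The plugin (gulp-uglify) and the source path are assumptions, not taken from the original project; package.json would simply list gulp and the chosen minification plugin in devDependencies.

// gulpfile.js (sketch)
const gulp = require('gulp');
const uglify = require('gulp-uglify'); // minification plugin is an assumption

// The "compress" task referenced from .gitlab-ci.yml: minify JS sources into dist/.
gulp.task('compress', () =>
  gulp.src('src/*.js')        // location of the sources is an assumption
    .pipe(uglify())
    .pipe(gulp.dest('dist'))
);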
Screenshot of the CI pipeline window:

The result: the archive packaged.tar.gz, which contains dist/hello.js, which in turn contains
!function(t,e,i){t.title="New title"}(window,document);