I would like to optionally install dependencies for my deployments based on the environment configuration of my Node servers. Is this possible?
Something like:
dependencies: env.prod == true ? {} : { something: "1.1" }
Are any variables available at this point, during the npm install stage?
Many organizations run their own Node package repositories. This has the advantage that your production deploy won't fail because some third-party repository went down.
If you can run your own repository, why not run two (or more)? Then you can put mock packages in the "staging" repository and the real packages in the "production" repository, as sketched below.
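A minimal sketch of how the switch might look (the registry URLs are hypothetical):
npm install --registry https://npm-staging.example.org   # staging servers: mock packages
npm install --registry https://npm-prod.example.org      # production servers: real packages

# or persist the choice per machine, so a plain `npm install` picks it up:
npm config set registry https://npm-prod.example.org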
It seems that this would achieve the effect you are trying for, but I fail to see the point.
I am setting up GitHub Actions, and struggling to see how to handle the dependencies of a particular action.
Below is a simplified view of my repository setup, with the action and workflow YAML files as I currently understand them.
my-org/my-tool#master: a specific tool in my organisation that I want to enable as a GitHub Action
package.json
tool.js: the tool, depending on npm packages described in package.json
action.yml
name: my-tool
inputs:
  file:
    description: file to be processed by my-tool
    required: true
runs:
  using: node16
  main: tool.js
my-org/my-code#branch: a code repository where I want to run the tool as an action
my-code.js: some code that needs to be processed by the tool
.github/workflows/use-tool.yml:
# ...
jobs:
  tool:
    steps:
      - uses: my-org/my-tool@master
        with:
          file: my-code.js
# ...
If my-tool has no dependencies, this setup works well enough. But my-tool needs to have its dependencies installed to function properly, i.e., a single npm install in its root directory. How do I specify this dependency in a GitHub Actions setup?
I have multiple options, none of which are truly satisfying to me:
I can define a workflow that checks out my-tool, installs the dependencies, and then runs it (roughly as sketched after this list). However, I don't want that logic to live in the workflow YAML of the my-code repository: I have multiple similar repositories, all of which require my-tool, so it would be duplicated everywhere. I don't see how to do this in the action.yml file itself; it seems I have to choose between a JavaScript action to run Node tools and a composite action to run shell tools.
I can bundle the node_modules directory in the my-tool repository; this bloats my repos unnecessarily.
I can bundle an ncc distribution of my-tool in its repo; same problem.
I can define and build a container image of my-tool with its dependencies installed, and use a container action to run the tool on my-code. This seems like overhead to me, and I have no idea how this container would access the files from the my-org/my-code repo (which would be its core purpose).
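For reference, here is roughly what the option 1 workflow would look like in each consuming repository (a sketch; the checkout action version, the .my-tool path, and the runner image are assumptions on my part):
# .github/workflows/use-tool.yml, duplicated in every consuming repo
jobs:
  tool:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/checkout@v3
        with:
          repository: my-org/my-tool
          path: .my-tool
      - run: npm install
        working-directory: .my-tool
      - run: node .my-tool/tool.js my-code.js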
Options 2 and 3 are the ones described in the example published by GitHub. I would be happy with using an artifact (either the node_modules dir or an ncc distribution) as the basis for an action, but there seems to be no way to do this.
My specific use case concerns npm dependencies, but there seems to be a generic use case for "action with dependencies", support for which is unclear to me from reading the documentation. I can't be the only dev in the world working with a toolset with dependencies of its own. Maybe I am looking at this setup in the wrong way. Is there a best practice for actions with dependencies? Is there a best practice to only have actions with zero dependencies?
I look forward to learning from any partial answer or suggestion.
Editor's note: The question's original title was "Use npm install to install node modules stored on a local directory", which made the desire to transparently redefine the installation source less obvious. Therefore, some existing answers suggest solutions based on modifying the installation process.
I know this is a simple thing, but I'm quite new to anything in this area so after searching around and constantly finding answers that weren't really what I want I figured I'd just ask directly.
I currently have a process that is run in directory FOO that calls npm install. Directory FOO contains a package.json and an npm-shrinkwrap.json file to specify the modules (bluebird, extend, and mysql in this case, but it doesn't really matter) and their versions. This all works perfectly fine.
But now, instead of reaching out to the internet to get the modules, I want to have them stored in local directory BAR and have the process in FOO use npm to install them from there. I cannot store them permanently in FOO, but I can in BAR, for reasons outside my control. I know this is relatively simple, but I can't seem to get the right set of commands down. Thanks for the help.
Note: This answer originally suggested only redefining the cache location. While that works in principle, npm still tries to contact the network for each package, causing excessive delays.
I assume your intent is to transparently change the installation source; in other words, you don't want to change your package, you simply want to call npm install as before, but have the packages be installed from your custom filesystem location, offline (without the need for an Internet connection).
There are two pieces to the puzzle:
Redefine npm's cache filesystem location (where previously downloaded packages are cached) to point to your custom location:
Note that cached packages are stored in a specific way: the package.json file is stored in subfolder package, and the zipped package as a whole as package.tgz. It's easiest to copy packages from your existing cache to your custom location, or to simply install additionally needed ones while you have an Internet connection, which caches them automatically.
For transparent use (npm install can be called as usual):
By setting the configuration item globally:
npm config set cache '/path/to/BAR'
Note that this will take effect for all npm operations, persistently.
Via an environment variable (which can be scoped to a script or even a single command):
export npm_config_cache='/path/to/BAR'
npm_config_cache='/path/to/BAR' npm install
Ad-hoc use, via a command-line option:
npm install --cache /path/to/BAR
Force npm to use cached packages:
Currently, that requires a workaround via the cache-min configuration item.
A more direct feature, such as an --offline switch, has been a feature request for years; see https://github.com/npm/npm/issues/2568
The trick is to set cache-min to a very high value, so that all packages in the cache are considered fresh and served from there:
For transparent use (npm install can be called as usual):
By setting the configuration item globally:
npm config set cache-min 9999999999
Note that this will take effect for all npm operations, persistently.
Via an environment variable (which can be scoped to a script or even a single command):
export npm_config_cache_min=9999999999
npm_config_cache_min=9999999999 npm install
Ad-hoc use, via a command-line option:
npm install --cache-min 9999999999
Assuming you've set cache-min globally or through an environment variable,
running npm install should now serve the packages straight from your custom cache location.
Caveats:
This assumes that all packages your npm install needs are available in your custom location; trying to install a package that isn't in the cache will obviously fail without an Internet connection.
Conversely, if you do have Internet access but want to prevent npm from using it to fetch packages - which it still will attempt if a package is not found in the cache - you must change the registry configuration item to something invalid so as to force the online installation attempt to fail; e.g.:
export npm_config_registry=http://example.org
Note that the URL must exist to avoid delays while npm tries to connect to it; while you could set the value to something syntactically invalid (e.g., none), npm will then issue a warning on every use.
Sample bash script:
#!/usr/bin/env bash
# Set environment variables that set npm configuration items to:
# - redefine the location of the cache folder
# - make npm look in the cache only (assuming the packages are there)
# Note that by doing this inside a script the changes will only take effect
# in the script and NOT persist.
export npm_config_cache='/path/to/BAR' npm_config_cache_min=9999999999
# Now cd to your package and invoke `npm install` as usual.
cd '/path/to/project'
npm install
You might want to try npm link. You could:
download the dependency
run npm link from the dependency's directory
run npm link mycrazydependency from your project
Detail here: https://docs.npmjs.com/cli/link
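A minimal sketch using the directory names from the question (the dependency name mycrazydependency is a placeholder):
cd /path/to/BAR/mycrazydependency   # the downloaded dependency
npm link                            # registers a global symlink to it
cd /path/to/FOO                     # the consuming project
npm link mycrazydependency          # symlinks it into FOO/node_modules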
If a shrinkwrap file is present, then package.json is ignored. What you need to do is change the URLs the packages are retrieved from, using a find-and-replace operation such as sed. However, I'm not sure changing the URL to file:/// syntax is valid, but give it a go.
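Something along these lines, for instance (untested; it assumes the resolved URLs in the shrinkwrap point at the public registry, with one matching tarball per package under BAR):
sed -i 's|"resolved": "https://registry.npmjs.org/|"resolved": "file:///path/to/BAR/|g' npm-shrinkwrap.json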
I've made a static single-page site using Grunt. I'm now trying to deploy it to Heroku using heroku-buildpack-nodejs-grunt for Node/Grunt.
Below is a pic of my root directory [screenshot omitted].
Here are my Gruntfile and package.json [contents omitted].
Procfile:
web: node index.html
When I run $ git push heroku master it gets to the Gruntfile and fails:
-----> Found Gruntfile, running grunt heroku:production task
>> Local Npm module "grunt-contrib-uglify" not found. Is it installed?
The above errors proceed to list all local NPM modules as not found. If I list all loadNpmTasks instead of using "load-grunt-tasks", I get the exact same error.
When I $ heroku logs I get:
Starting process with command `node web.js`
Error: Cannot find module '/app/web.js'
Can anyone see where I've gone wrong?
For anyone passing by here, I wasn't able to solve the problem. This is where I got to:
In my package.json, I moved npm modules from devDependencies to dependencies. Heroku was then able to install these dependencies.
However, when Heroku ran the tasks, it stopped at the haml task with the error "You need to have Ruby and Haml installed and in your PATH for this task to work". Adding ruby & haml as engines in package.json did not work.
The only thing I can think of is that maybe Heroku installs only your dependencies (not your devDependencies), then tries to run Grunt; since it never installed load-grunt-tasks, you don't get the grunt.loadNpmTasks('grunt-contrib-uglify'); calls (which load-grunt-tasks makes for you), and thus Grunt can't find the package.
Can you try changing your Gruntfile to explicitly list all npm modules using the grunt.loadNpmTasks() method? For example:
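(A sketch; the exact task list is whatever is in your package.json.)
// Gruntfile.js
module.exports = function (grunt) {
  // load each task explicitly instead of relying on load-grunt-tasks
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-haml');
  // ...one loadNpmTasks call per grunt-* package
};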
EDIT:
Just remembered another thing I had to do:
heroku labs:enable user-env-compile -a myapp
heroku config:set NODE_ENV=production
(Obviously replacing myapp with your Heroku app name.)
This makes Heroku allow user set environment variables and then sets your server to production. Try that, and set your dependencies and devDependencies as you had them originally (just to see if it works).
I am coming pretty late to the game here but I have used a couple methods and thought I would share.
Option 1: Get Heroku to Build
This is not my favorite method because it can take a long time but here it is anyway.
Heroku runs npm install --production when it receives your pushed changes. This only installs the production dependencies.
You don't have to change your environment variables to install your dev dependencies. npm install has a --dev switch to allow you to do that.
npm install --dev
Heroku provides an article on how you can customize your build. Essentially, you can run the above command as a postinstall script in your package.json.
"scripts": {
"start": "node index.js",
"postinstall": "npm install --dev && grunt build"
}
I think this is cleaner than putting dev dependencies in my production section or changing the environment variables back and forth to get my dependencies to build.
Also, I don't use a Procfile. Heroku can run your application by calling npm start (at least it can now almost two years after the OP). So as long as you provide that script (as seen above) Heroku should be able to start your app.
As far as your Ruby dependency goes, I haven't attempted to install a Ruby gem in my Node apps on Heroku, but this SO answer suggests that you use a multi buildpack.
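If I remember the multi-buildpack setup correctly, it looks something like this (myapp and the buildpack list are placeholders):
heroku config:set BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi -a myapp

# .buildpacks, in the repo root, one buildpack per line (order matters):
https://github.com/heroku/heroku-buildpack-ruby
https://github.com/heroku/heroku-buildpack-nodejs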
Option 2: Deploy Your Dependencies
Some argue that having Heroku build your application is bad form. They suggest that you should push up all of your dependencies. If you are like me and hate the idea of checking in your node_modules directory, then you could create a new branch where you force-add the node_modules directory and then deploy that branch. In git this looks like:
git checkout -b deploy
git add -f node_modules/
git commit -m "heroku deploy"
git push heroku --force deploy:master
git checkout master
git branch -D deploy
You could obviously make this into a script so that you don't have to type that every time.
Option 3: Do It All Yourself
This is my new favorite way to deploy. Heroku has added support for slug deploys. The previous link is a good read and I highly recommend it. I do this in my automated build from Travis CI. I have some custom scripts to tar my app and push the slug to Heroku, and it's fast.
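As a rough sketch of what those scripts do, following Heroku's Platform API docs (the app name, process type, and tarball layout are assumptions; the API token comes from heroku auth:token):
#!/usr/bin/env bash
APP=myapp
tar czf slug.tgz ./app    # slug contents must live under ./app

# 1. Register the slug and get back an upload URL and slug id.
RESPONSE=$(curl -s -X POST "https://api.heroku.com/apps/$APP/slugs" \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/vnd.heroku+json; version=3' \
  -H "Authorization: Bearer $HEROKU_API_KEY" \
  -d '{"process_types":{"web":"node index.js"}}')
PUT_URL=$(echo "$RESPONSE" | jq -r '.blob.url')
SLUG_ID=$(echo "$RESPONSE" | jq -r '.id')

# 2. Upload the tarball to the returned URL.
curl -s -X PUT -H 'Content-Type:' --data-binary @slug.tgz "$PUT_URL"

# 3. Release the slug.
curl -s -X POST "https://api.heroku.com/apps/$APP/releases" \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/vnd.heroku+json; version=3' \
  -H "Authorization: Bearer $HEROKU_API_KEY" \
  -d "{\"slug\":\"$SLUG_ID\"}"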
I faced a similar problem with Heroku not installing all of my dependencies, while I had no issue locally. I fixed it by running
heroku config:set USE_NPM_INSTALL=true
from the directory I deployed my project from. This instructs Heroku to install your dependencies using npm install instead of npm ci, which is the default! From the Heroku Dev Center:
"Heroku uses the lockfiles, either the package-lock.json or yarn.lock, to install the expected dependency tree, so be sure to check those files into git to ensure the same dependency versions across environments. If you are using npm, Heroku will use npm ci to set up the build environment."
I have a job in Hudson server A which builds an artifact and deploys it to Nexus. I have another job in a completely separate Hudson server B which needs to download the artifact and deploy it. This job is normally run manually, and the person running it needs to indicate which version of the artifact to deploy - they may not always want to deploy the latest version (e.g. to roll back to a previous known good version).
Currently, I achieve this by using a parameterized build, and require the user to pass in the artifact version number; the job then uses the Execute shell build step to run wget on a URL constructed from the parameter. This is error-prone.
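For illustration, the current build step boils down to something like this (the hostname and repository layout are hypothetical), where a typo in VERSION silently fetches the wrong artifact or fails:
# VERSION is the user-supplied build parameter
wget "http://nexus.example.com/content/repositories/releases/com/example/myapp/${VERSION}/myapp-${VERSION}.war"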
Ideally I'd like a plugin that lets the user browse the artifact versions in the Nexus repository and pick and choose the one to deploy, but I'm open to other suggestions. A plugin that also handles the download would be nice, but I can live without it as long as I can still get a string that I can use in shell commands.
I've looked through the available Hudson & Jenkins plugins around Maven-style artifact repositories, but they all seem more concerned with pushing artifacts into repos than with getting them back down.
I'm using Hudson's "Copy Artifact" in other jobs, to get artifacts from other Hudson jobs on the same server, but this doesn't work across different Hudson servers, which is why I've turned to Nexus (which we're already using anyway).
Does anyone have any suggestions?
I recommend using Rundeck to execute your deployments.
There is a Rundeck plugin for Nexus that enables Rundeck to display a pull-down menu of the versions available in Nexus.
There is a Rundeck plugin for Jenkins that can be used to invoke deployments via Rundeck and kick off post-deployment jobs (like integration testing) in Jenkins.
Our TeamCity server (6.5) is configured to check out sources from SVN. For some build process cases I need to check out the previous successfully built version (revision). Can TeamCity do this? And if it can, how do I configure the checkout?
It sounds like you're looking for TeamCity Snapshot Dependencies, with the "Only use successful builds from suitable ones" option.
You'd end up with two build configurations:
1. Performs initial builds on commit to SVN.
2. Has a snapshot dependency on #1, so that when this build runs (either automatically, via a trigger, or manually), it grabs the same sources as the last successful build of #1.
Both of the build configurations would use the same VCS root.