Editor's note: The question's original title was "Use npm install to install node modules stored on a local directory", which made the desire to transparently redefine the installation source less obvious. Therefore, some existing answers suggest solutions based on modifying the installation process.
I know this is a simple thing, but I'm quite new to anything in this area, so after searching around and constantly finding answers that weren't really what I wanted, I figured I'd just ask directly.
I currently have a process that is run in directory FOO that calls npm install. Directory FOO contains a package.json and an npm-shrinkwrap.json file to specify the modules (bluebird, extend, and mysql in this case, but it doesn't really matter) and versions. This all works perfectly fine.
But now, instead of reaching out to the internet to get the modules, I want to have them stored in local directory BAR and have the process in FOO use npm to install them from there. I cannot store them permanently in FOO, but I can in BAR, for reasons outside my control. I know this is relatively simple, but I can't seem to get the right set of commands down. Thanks for the help.
Note: This answer originally suggested only redefining the cache location. While that works in principle, npm still tries to contact the network for each package, causing excessive delays.
I assume your intent is to transparently change the installation source: in other words, you don't want to change your package; you simply want to call npm install as before, but have the packages be installed from your custom filesystem location, offline (without the need for an Internet connection).
There are two pieces to the puzzle:
Redefine npm's cache filesystem location (where previously downloaded packages are cached) to point to your custom location:
Note that cached packages are stored in a specific way: the package.json file is stored in subfolder package, and the zipped package as a whole as package.tgz. It's easiest to copy packages from your existing cache to your custom location, or to simply install additionally needed ones while you have an Internet connection, which caches them automatically.
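For instance, a minimal sketch of seeding the custom location from your existing cache (the package name and version here are assumptions for illustration; the classic npm cache lays packages out as <name>/<version>/):
# Copy an already-cached package (e.g. bluebird 2.9.34) from the
# default cache (~/.npm) into the custom location:
mkdir -p /path/to/BAR/bluebird/2.9.34
cp -R ~/.npm/bluebird/2.9.34/. /path/to/BAR/bluebird/2.9.34/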
For transparent use (npm install can be called as usual):
By setting the configuration item globally:
npm config set cache '/path/to/BAR'
Note that this will take effect for all npm operations, persistently.
Via an environment variable (which can be scoped to a script or even a single command):
export npm_config_cache='/path/to/BAR'
npm_config_cache='/path/to/BAR' npm install
Ad-hoc use, via a command-line option:
npm install --cache /path/to/BAR
Force npm to use cached packages:
Currently, that requires a workaround via the cache-min configuration item.
A more direct feature, such as an --offline switch, has been a feature request for years; see https://github.com/npm/npm/issues/2568
The trick is to set cache-min to a very high value, so that all packages in the cache are considered fresh and served from there:
For transparent use (npm install can be called as usual):
By setting the configuration item globally:
npm config set cache-min 9999999999
Note that this will take effect for all npm operations, persistently.
Via an environment variable (which can be scoped to a script or even a single command):
export npm_config_cache_min=9999999999
npm_config_cache_min=9999999999 npm install
Ad-hoc use, via a command-line option:
npm install --cache-min 9999999999
Assuming you've set cache-min globally or through an environment variable,
running npm install should now serve the packages straight from your custom cache location.
Caveats:
This assumes that all packages your npm install needs are available in your custom location; trying to install a package that isn't in the cache will obviously fail without an Internet connection.
Conversely, if you do have Internet access but want to prevent npm from using it to fetch packages - which it will still attempt if a package is not found in the cache - you must change the registry configuration item to something that is not a real registry, so as to force the online installation attempt to fail; e.g.:
export npm_config_registry=http://example.org
Note that the URL must exist to avoid delays while npm tries to connect to it; while you could set the value to something syntactically invalid (e.g., none), npm will then issue a warning on every use.
Sample bash script:
#!/usr/bin/env bash
# Set environment variables that set npm configuration items to:
# - redefine the location of the cache folder
# - make npm look in the cache only (assuming the packages are there)
# Note that by doing this inside a script the changes will only take effect
# in the script and NOT persist.
export npm_config_cache='/path/to/BAR' npm_config_cache_min=9999999999
# Now cd to your package and invoke `npm install` as usual.
cd '/path/to/project'
npm install
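Equivalently, for a one-off invocation without a script (same placeholder paths as above):
npm_config_cache='/path/to/BAR' npm_config_cache_min=9999999999 npm install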
You might want to try npm link. You could:
download the dependency
run npm link from the dependency's directory
run npm link mycrazydependency from your project
Details here: https://docs.npmjs.com/cli/link
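A minimal sketch of that flow, assuming the dependency is named mycrazydependency and lives in a local directory (names and paths are placeholders):
# In the dependency's directory: create a global symlink for it
cd /path/to/mycrazydependency
npm link
# In your project: symlink the dependency into node_modules
cd /path/to/project
npm link mycrazydependency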
If a shrinkwrap file is present, then package.json is ignored. What you need to do is change the URLs the packages are being retrieved from, using a find-and-replace operation like sed .... However, I'm not sure whether changing the URL to a file:/// syntax is valid, but give it a go.
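For example, a rough, untested sketch of such a replacement (the registry URL and target path are assumptions; shrinkwrap files record each package's download URL in "resolved" fields):
# Rewrite registry URLs in the shrinkwrap file to local file:/// paths:
sed -i 's|https://registry.npmjs.org|file:///path/to/BAR|g' npm-shrinkwrap.json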
I am new to Clojure and not a pro in Javascript. I am watching the free part of the course on Reagent.
Following the instructions on the course's repo, after doing the git clone and the npm install, the author indicates running $ npm run dev. Everything seems to work fine. I can see the app on my http://localhost:3000/.
The favicon with the app's logo and its name is loaded in the corner of the browser's tab.
However, on the bottom of the web page, there is this error message from shadow-cljs:
shadow-cljs - Stale Output! Your loaded JS was not produced by the
running shadow-cljs instance. Is the watch for this build running?
Why is this happening? How should I fix it?
How can I guarantee that the watch for this build is running?
Is there a simple command to run on terminal to check this?
Obs. 1: If this is relevant, my operating system is NixOS, and this is my config file.
Obs. 2: I am not sure if this question is connected to my previous question on npm and Cider (Emacs IDE for Clojure) that happened while working with this same repo.
It is likely that this is due to you running npm run dev AND cider-jack-in.
I don't use emacs, so I'm not exactly sure what cider-jack-in does, but I believe it launches a new JVM. Since npm run dev also did that, you end up with two running JVMs, which also means two running shadow-cljs instances. That is not ideal, and they will start interfering with each other, leading to errors such as yours.
So either you run npm run dev and use emacs to connect to that server; cider-connect or whatever it is called should do that.
Or you don't run npm run dev at all and instead only cider-jack-in and then start the watch from the REPL.
Don't forget to first kill all java processes that might be running for that project. As long as there is more than one shadow-cljs process running for the project things will be weird.
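For example, a rough sketch for Linux/macOS (the match pattern is an assumption; adjust it to your setup):
# List JVMs that look like shadow-cljs instances, then kill them:
ps aux | grep -i '[s]hadow-cljs'
pkill -f shadow-cljs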
This happened to me when I clicked on the build link BEFORE it had compiled. In that case, the link displays a previously compiled version, not the live version, and the "watch" on code changes doesn't work either. Just wait for your terminal output to say "compiled" before clicking on the link.
In the Angular Component Router documentation I just stumbled over an npm command I have never seen before, and I don't understand what is going on:
npm install @angular/router --save
What is the meaning of @angular/router?
Is the whole string a package name? But then I don't find that package when I use the search on npmjs.com.
And also the command-line search returns no such package:
npm search @angular/router
No match found for "@angular/router"
So is the @angular/ some kind of prefix mechanism in npm? And how does it work?
This is a new feature of NPM called 'scoped packages', which effectively allows NPM packages to be namespaced. Every user and organization on NPM has their own scope, and they are the only people who can add packages to it.
This is useful for several reasons:
It allows organizations to make it clear which packages are 'official' and which ones are not.
For example, if a package has the scope @angular, you know it was published by the Angular core team.
The package name only has to be unique to the scope it is published in, not the entire registry.
For example, the package name http is already taken in the main repository, but Angular is able to have @angular/http as well.
The reason that scoped packages don't show up in public search is because a lot of them are private packages created by organizations using NPM's paid services, and they're not comfortable opening the search up until they can be totally certain they're not going to make anything public that shouldn't be public - from a legal perspective, this is pretty understandable.
For more information, see the NPM docs and the Angular docs.
EDIT: It appears that public scoped packages now show up properly in search!
Basically there are two types of modules on npm. They are:
Global modules - these are modules that follow the naming convention that exists today. You require('foo') and there is much rejoicing. They are owned by one or more people and installed through the npm install XYZ command.
Scoped modules - these are new modules that are "scoped" under an organization name. A scoped name begins with an @-symbol, then the organisation's name, a slash, and finally the package name, e.g. @someOrgScope/packagename. Scopes are a way of grouping related packages together, and they also affect a few things about the way npm treats the package.
A scoped package is installed by referencing it by name, preceded by an @-symbol, in npm install:
npm install @myorg/mypackage
see also
http://blog.nodejitsu.com/a-summary-of-scoped-modules-in-npm/
https://docs.npmjs.com/misc/scope
@ has different meanings depending on where it appears in the npm package name.
A package is:
A folder containing a program described by a package.json file.
A gzipped tarball containing (1).
A url that resolves to (2).
A <name>@<version> that is published on the registry with (3).
A <name>@<tag> that points to (4).
A <name> that has a “latest” tag satisfying (5).
A <git remote url> that resolves to (1).
npm install [<@scope>/]<name>
<scope> is optional. The package will be downloaded from the registry associated with the specified scope. If no registry is associated with the given scope the default registry is assumed.
Note: if you do not include the @-symbol on your scope name, npm will interpret this as a GitHub repository instead, see below. Scope names must also be followed by a slash.
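For example, a scope can be associated with a registry via npm config (the scope name and registry URL here are placeholders):
# Packages under @myorg will then be fetched from this registry:
npm config set @myorg:registry https://registry.example.com/
npm install @myorg/mypackage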
npm install [<@scope>/]<name>@<tag>
Install the version of the package that is referenced by the specified tag. If the tag does not exist in the registry data for that package, then this will fail.
Example:
npm install packagename@latest
npm install @myorg/mypackage@latest
npm install [<@scope>/]<name>@<version>
Install the specified version of the package. This will fail if the version has not been published to the registry.
Example:
npm install packagename@0.1.1
npm install @myorg/privatepackage@1.5.0
npm install [<@scope>/]<name>@<version range>
Install a version of the package matching the specified version range.
Example:
npm install packagename@">=0.1.0 <0.2.0"
npm install @myorg/privatepackage@">=0.1.0 <0.2.0"
What are scoped modules?
All npm packages have a name, and these names must be unique. A scoped npm package follows the same rules as other npm package names (URL-safe characters, no leading dots or underscores). When used in package names, scopes are preceded by an @ symbol and followed by a slash /, e.g.
@somescope/somepackagename
Scoped modules are usually used to group related npm packages together. When you sign up for an npm user account or create an organization, you are granted a scope that matches your user or organization name; only you and your employees can add packages to that scope. You can use this scope as a namespace for related packages.
As an npm user, you don't have to worry about someone taking your package name ahead of you. Thus, using scoped modules is also a good way to organize the npm packages of an organization.
Advantages of using scoped packages:
Scoped packages allow organizations to manage their private packages.
A scoped package name only has to be unique within the scope in which it is published, not across the entire npm registry.
Usually organizations choose to keep their scoped packages private, and these don't show up in public search, for various reasons.
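As an aside, scoped packages are private by default when published; making one public requires an explicit flag (a sketch, assuming you own the scope you are publishing under):
# Publish a scoped package publicly instead of privately:
npm publish --access public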
Currently, when you Get Latest from source control, and the bower.json or package.json files have changed, you still need to open and make a minor change to the file and re-save it in order for VS to be aware of the change and execute NPM or bower and pull updates. Ideally, it would detect the change and execute it immediately upon getting the latest .json files. I can understand the case for not wanting this to be the default behavior, but without this, our entire dev team needs to be notified and perform the extra steps whenever a .json file change is checked in (fairly often).
Is there an environment setting in VS that impacts this, or a feasible workaround that anyone is aware of?
No, there is no such setting in VS IDE.
As you figured out, when you save any changes to the package.json or bower.json file, Visual Studio automatically installs or restores all packages. However, the auto check is not triggered when you get files from TFS version control.
You can, however, create a listener for the GettingEventHandler event. Once the event is triggered, run the scripts to install the updates:
npm install -g bower-check-updates
npm-check-updates -u
bower-check-updates -u
npm install
I've gotten an NPM package from a third party who is developing under Mac OS X. Their build can be split into either development or production using the "scripts" object in package.json. For example:
"scripts": {
"build": "NODE_ENV=dev node make.js --build",
"build-prod": "NODE_ENV=prod node make.js --build",
}
Under Unix, one can run either "npm run build" or "npm run build-prod" to build either variant (naturally, there are some conditional statements in make.js).
Of course, it does not work under Windows - I had to change the commands similar to this:
"scripts": {
"build": "set NODE_ENV=dev&& node make.js --build",
"build-prod": "set NODE_ENV=prod&& node make.js --build",
}
(Please note that it was important not to put a space before the '&&'; otherwise the environment variable was created with extra whitespace in it, which ruined all those comparisons in make.js.)
However, I would like to have some universal source tree which would work under either Unix or Windows without editing. Could you please give some ideas on how to conditionally split the build depending on the OS?
The question is pretty old, but for those who face the problem nowadays: starting from version 5.1.0, npm supports setting the shell for processing scripts. By default on Windows, npm internally uses cmd.exe for running scripts, even if the npm command itself is typed in git-bash. After setting git-bash as the shell, scripts that use bash syntax work normally on Windows:
npm config set script-shell "C:\\Program Files\\Git\\bin\\bash.exe"
Here one needs to substitute the correct path to the git-bash executable.
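If you later want to revert to npm's default behavior, the setting can simply be removed again:
npm config delete script-shell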
I have been thinking about this for a while, but I doubt there is any elegant solution to get the desired effect using these tools.
If you are able to influence a change in make.js, I would rather change that file to accept prod or dev as an argument, for example: node make.js --build=dev, with a default value to ensure backwards compatibility.
Using only npm and not modifying make.js, I can only think of running another piece of JavaScript that would change the environment variable and then call make.js.
That will look something like:
"build": "node middleman.js"
The middleman.js file could then use child_process or another module to set the variable and execute node make.js, as sketched below.
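A minimal sketch of such a middleman.js (the file name comes from the suggestion above; the hard-coded 'dev' value is an assumption and could be parameterized):
// middleman.js: set NODE_ENV for the child process only, then run make.js
const { spawn } = require('child_process');

const child = spawn(
    process.execPath,           // path to the current node binary
    ['make.js', '--build'],
    {
        stdio: 'inherit',       // pass stdin/stdout/stderr through
        // copy the current environment and add NODE_ENV on top:
        env: Object.assign({}, process.env, { NODE_ENV: 'dev' })
    }
);

// propagate the child's exit code (1 if it was killed by a signal)
child.on('exit', (code) => process.exit(code == null ? 1 : code));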
If you do not want to create an extra file, you can then embed all the JavaScript inside the package.json using:
"build": "node -e 'my code'"
Be warned that running "node -e 'process.env[\'NODE_ENV\']=\'dev\' && node make.js'" will not work, as process.env sets the variable only in the local process, not globally (i.e. it does not export it to the system).
This is not a direct solution, but for the sake of best practices, consider making it work differently.
I am setting up Jenkins to replace our current TeamCity CI build.
I have created a free-style software project so that I can execute a shell script.
The Shell script runs the mvn command.
But the build fails, complaining that the 'mvn' command cannot be found.
I have figured that this is because Jenkins is running the build in a different shell, which does not have Maven on its path.
My question is; how do I add the path so 'mvn' is found in my Shell script? I've looked around but can't spot where the right place might be.
Thanks for your time.
I solved this by exporting and setting the PATH in the Jenkins job configuration where you can enter shell commands. So I set the environment variables before I execute my shell script; works a treat.
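For example, at the top of the job's "Execute shell" build step (the Maven path below is an assumption; substitute your installation's location):
# Make mvn resolvable inside the job's shell:
export PATH=$PATH:/usr/local/apache-maven/apache-maven-3.0.5/bin
mvn clean install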
Some possible solutions:
You can call maven with an absolute path
You configure a global environment variable in the Jenkins system settings with the absolute path to your Maven instance, and use this in your script call (if you use an inline shell script; I don't know whether those are substituted into a called script, you have to test)
You use a maven project and configure your maven instance in the jenkins system settings
PS: Usually /bin/sh is chosen by Jenkins; if you want to switch to e.g. bash, you can configure this in the Jenkins system settings too, in the same place where you configure global environment variables.
You can use envInject plugin. It's very powerful.
I use it to install rbenv, and it can inject environment variables into your current job.
Another option to Dag's suggestion is that, if you're only using a single version of Maven, on each slave server you could either:
* add PATH=${PATH}:<path to maven>/bin to the slave's environment
* symlink mvn into /usr/bin with: sudo ln -s <path to maven>/bin/mvn /usr/bin/mvn
I'm not at a Jenkins box at the moment, but I can find some more detailed examples if you'd like.
Jenkins is using sh by default and not bash.
This is my first time defining a Jenkins maven job, and I also followed some regular Maven instructions (for running from the command line...), and tried to update ~/.bashrc with M2_HOME, M2, and PATH, but it didn't work because Jenkins used sh and not bash. Then I found out that there is a simpler and better way built into Jenkins.
After installing maven, I was supposed to configure my maven installation in jenkins.
To configure your maven installation in Jenkins:
login to jenkins web console
click Manage Jenkins --> Configure System
Under Maven, click the "Maven Installations..." button
a. Give it some name
b. and under MVN_HOME set the path to where you installed maven, for example "/usr/local/apache-maven/apache-maven-3.0.5"
Click Save button
Define a job with maven target
edit your job
Click "Add build step"
on Maven Version, enter the name you gave your maven installation (step #4 above)
set some goal like clean install