I want to create a new Node.js app in PhpStorm, but it can't build it because npm requires root to install packages. I can't seem to find a way to make PhpStorm run npm as su.
Is there a way to do so?
Installing with sudo will likely result in more issues. I'd rather suggest making sure that your user (the one you run WebStorm with) has the right privileges on ~/.npm, and giving it the needed permissions (sudo chown -R $USER ~/.npm).
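A minimal sketch of that check and fix, assuming the npm cache lives in the default ~/.npm location:
ls -ld ~/.npm                  # see which user currently owns the npm cache
sudo chown -R $USER ~/.npm     # hand it back to your own user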
I am trying to create a login page for a website using Perl CGI. In an already existing script, the database handle is connect("DBI:mysql").
When I run the command
$ mysql -u root -p
the Linux server shows "command not found".
I am new to the DBI and CGI modules in Perl, so I am confused: if MySQL does not exist on the Linux server, how should I create a database and connect to it using DBI?
There could well be many steps to get through here in order to get this all working.
But, question one is: which platform are you running on? Your command prompt makes me think it's Linux, but which distribution, and which version of that distribution?
It looks very much like you don't have the MySQL client packages installed. The easiest way to install them will depend on your answer to question one.
On a Debian-based system (which includes Ubuntu), you can run:
$ sudo apt-get install mysql-client
On a RedHat-based system (which includes things like Fedora and CentOS), you can run:
$ sudo yum install mysql
If that last command tells you that yum can't be found, then try:
$ sudo dnf install mysql
Once you have the client installed, we can move on to the next step :-)
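To confirm the client is actually installed and on your PATH, something like this should now work:
$ which mysql
$ mysql --version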
I am trying to deploy my functions on the cloud functions emulator but my terminal throws me this error:
$ functions deploy sendNotifications --trigger-http
{ Error: EACCES: permission denied, mkdir '/usr/local/lib/node_modules/@google-cloud/functions-emulator/logs'
at Error (native)
at Object.fs.mkdirSync (fs.js:922:18)
at Object.assertLogsPath (/usr/local/lib/node_modules/@google-cloud/functions-emulator/src/emulator/logs.js:31:10)
at new Controller (/usr/local/lib/node_modules/@google-cloud/functions-emulator/src/cli/controller.js:84:32)
at Object.exports.handler (/usr/local/lib/node_modules/@google-cloud/functions-emulator/src/cli/commands/deploy.js:124:22)
at Object.self.runCommand (/usr/local/lib/node_modules/@google-cloud/functions-emulator/node_modules/yargs/lib/command.js:231:22)
at Object.Yargs.self._parseArgs (/usr/local/lib/node_modules/@google-cloud/functions-emulator/node_modules/yargs/yargs.js:989:30)
at Object.Yargs.self.parse (/usr/local/lib/node_modules/@google-cloud/functions-emulator/node_modules/yargs/yargs.js:533:23)
at Object.main (/usr/local/lib/node_modules/@google-cloud/functions-emulator/src/cli/main.js:69:6)
at getProjectId.then (/usr/local/lib/node_modules/@google-cloud/functions-emulator/bin/functions:100:27)
errno: -13, code: 'EACCES', syscall: 'mkdir', path:
'/usr/local/lib/node_modules/@google-cloud/functions-emulator/logs' }
I have cd-ed into the functions directory and checked that I am logged in to Firebase and have selected the right project ID. Am I doing something wrong?
Overview
The proper way to fix this is to change the npm global directory to one that does not require root permission to install into and run from.
You really should not need sudo to npm install -g.
The same goes for functions start.
Cleansing
Given that you have installed functions, presumably using sudo npm install -g @google-cloud/functions-emulator, you now need to first uninstall it, equally with sudo: sudo npm uninstall -g @google-cloud/functions-emulator.
Why uninstall it? You currently have functions installed into a directory that requires root permissions to write to. As the error shows, the emulator needs write permission there to create its logs directory.
I would also suggest uninstalling any other npm packages you may have installed using sudo, except npm itself. You can see what is installed globally with npm list -g --depth=0.
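A condensed sketch of the cleanup described above (the package name comes from the question; the rest of the list output will be specific to your machine):
sudo npm uninstall -g @google-cloud/functions-emulator   # remove the emulator installed with sudo
npm list -g --depth=0                                     # see what else was installed globally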
Changing npm global directory
I would suggest going with Option 2 described at https://docs.npmjs.com/getting-started/fixing-npm-permissions
Copy-pasting the solution from the link above:
Make a directory for global installations:
mkdir ~/.npm-global
Configure npm to use the new directory path:
npm config set prefix '~/.npm-global'
Open or create a ~/.profile file and add this line:
export PATH=~/.npm-global/bin:$PATH
Back on the command line, update your system variables:
source ~/.profile
Installing globally without sudo
Now that your machine has been cleansed and the npm global prefix points to a directory that does not require root permissions, you should be able to install and run without sudo.
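For example, something along these lines should now work without sudo (the package and function names are taken from the question above):
npm install -g @google-cloud/functions-emulator
functions start
functions deploy sendNotifications --trigger-http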
I had the same issue. For me it did not even show the permission log. The issue is with permissions; try running the commands as the super user.
Step 1.
Start the Cloud Functions emulator:
sudo functions start
Step 2.
Deploy the helloworld function that was exported:
sudo functions deploy helloworld --trigger-http
This is obviously very late for the original post. Hope it helps others who hit the same error in the future.
I had the same issue; it turned out I hadn't installed the requirements properly. Anyone who ends up here can look at Error: EACCES: permission denied #195.
The main takeaway for this error: install all the requirements properly.
So when I try to use:
fopen("sometext.txt", "w") or die("blahblahbla");
I keep getting the following message:
failed to open stream: Permission denied
I have looked for other answers on this site and none of them actually work.
Why is this happening? Can somebody recommend a fix?
Do I have permission to create files in my directory? I get a bunch of advice about using chmod or changing the "file access", but nobody explains how to actually do that, just "oh use this or that".
If you have terminal access, just run this command in the file's folder:
sudo chmod 777 sometext.txt (for security reasons, switch to a stricter mode later)
If you don't, you can modify the file attributes in your FTP client (tick the Execute, Read and Write fields for Owner, Group and Everyone).
I hope this solves your problem.
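If you do have shell access, a tighter starting point than 777 might look like this; the mode and the www-data owner are assumptions, so adjust them to whatever your server actually uses:
chmod 664 sometext.txt                        # owner and group can write, everyone else read-only
sudo chown www-data:www-data sometext.txt     # optionally hand the file to the web server user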
First, check whether you are in the apache group (id username). If not, add your user to it (sudo usermod -a -G apache username). Then make sure the directory belongs to the apache group (check with ls -l directory; the directory is usually /var/www/html or /srv/whatever, but XAMPP has its own). If it doesn't, run sudo chgrp apache directory. The directory must also be writable by group members (chmod g+w directory).
Obviously, the Apache configuration must use the apache user and group. If they don't exist, create them (sudo groupadd apache and sudo useradd apache).
P.S.: chmod 777 is evil! It's better to be in the apache group and avoid letting your file be edited by everyone else!
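Put together, the group-based approach above might look roughly like this; the apache group, the username and the /var/www/html path are assumptions, so substitute your own:
sudo groupadd apache                  # only if the group does not exist yet
sudo usermod -a -G apache username    # add your user to the group
sudo chgrp -R apache /var/www/html    # give the docroot to that group
sudo chmod -R g+w /var/www/html       # let group members write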
I am having a heck of a time installing NuPIC on ubuntu-13.04-desktop-amd64.iso [within ESXi 5.1].
I've followed the instructions on https://github.com/numenta/nupic/wiki/Install-Nupic-on-ubuntu-13.04. The install fails at:
pip install -r external/common/requirements.txt
with the following error:
error: could not create '/usr/local/lib/python2.7/dist-packages/asteval': Permission denied
It works if I use sudo pip install... but just fails on
$NUPIC/build.sh
I also had to add
sudo apt-get install python-dev
sudo apt-get install python-numpy
to even get it to the "pip install..." point.
Any assistance would be appreciated.
Thanks,
Neil
I had the same problem trying to install nupic.
I was following these instructions:
https://github.com/numenta/nupic/wiki/Running-Nupic-in-a-Virtual-Machine
The problem you may be facing is that your user does not have read/write access to the /usr/local/lib/python2.7/dist-packages/asteval folder. You can use the chown command to change that folder's ownership to the user you want and give that user read/write access.
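For example, something like this; note that the error is about creating asteval inside dist-packages, so it is the parent directory that has to be writable (chowning a system directory to your user is a blunt fix, so treat this as a sketch):
sudo chown -R $USER /usr/local/lib/python2.7/dist-packages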
With regard to the above instructions, I installed using sudo but failed to realize that the environment variables for root were different from the environment variables for the user I created to install NuPIC.
Hope this helps,
VS
Thank you for the report. I've created a ticket to address this installation issue.
Neilg, please try these instructions: https://github.com/numenta/nupic/wiki/Installing-NuPIC-on-Ubuntu
You need to add the --user option to the pip commands.
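For example, something like this installs into your per-user site-packages instead of the system-wide dist-packages directory (the requirements path is taken from the question):
pip install --user -r external/common/requirements.txt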
For some reason brew does not link mysql and complains about permissions.
I chmodded the folder to 777 but I am still having the same issue.
laptop$ brew install mysql
Error:
mysql-5.5.27 already installed, it's just not linked
laptop$ brew link mysql
Linking /usr/local/Cellar/mysql/5.5.27... Warning: Could not link mysql.
Unlinking...
Error:
Could not symlink file: /usr/local/Cellar/mysql/5.5.27/lib/plugin
/usr/local/lib is not writable. You should change its permissions.
I figured out what the problem was.
It was a permissions issue, and I basically did this:
sudo chown -R $(whoami) /usr/local/lib/
I believe you should run:
sudo chmod 775 /usr/local/lib/
and make sure you are a member of the directory's group.
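To check, something like this will do; admin is a common owning group for /usr/local on macOS, but that is an assumption, so use whatever group the first command actually reports:
ls -ld /usr/local/lib    # shows the owning user and group
id -Gn                   # lists the groups your user belongs to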
Not really an answer, but a comment that may help those who are pulling their hair out chowning and chmoding like crazy and still getting "not writable" errors at the link step. For example, from $ brew doctor -d:
Error: /usr/local/lib/pkgconfig isn't writable.
This can happen if you "sudo make install" software that isn't managed
by Homebrew. If a formula tries to write a file to this directory, the
install will fail during the link step.
I suggest you check the linked file and its dependencies and either delete them and reinstall via Homebrew, or install the package without using Homebrew.
On my system this worked perfectly:
chown -R $(whoami) /usr/local/share/
I am trying to give a general answer to the question.
It can happen that neither /usr/local/lib/ nor /usr/local/share/ is the directory giving the error. You should look for the exact directory that is not writable; it is mentioned right after "Error: Could not symlink". Then execute the command for that directory:
chown -R $(whoami) [/path/to/your/dir]
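Using the error from this question as an example, the directory reported as not writable is /usr/local/lib, so the command would be roughly the following (sudo may be needed if root owns it):
sudo chown -R $(whoami) /usr/local/lib
brew link mysql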