Following this post, I'm trying to transfer code from my local machine to the compute engine, but I forgot my passphrase.
Can someone tell me how to retrieve or reset it?
I've searched around for solutions but found nothing.
Remove your old keys and try again, that is:
rm ~/.ssh/google_compute_engine*
and then
gcloud compute ssh my_vm_name --zone zone_vm_is_in
The command below works, as stated by @cherba:
rm ~/.ssh/google_compute_engine*
After using this command you can set a new SSH passphrase when prompted, or simply hit Enter to leave it empty. I am supplying this as an answer because, if you run the command and then immediately attempt to SSH into an instance, it takes a few seconds for the new keys to propagate. I wanted to add this as a comment but didn't have enough reputation points on Stack Overflow.
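Putting it together, the full flow looks like this (assuming the default key location and the VM name from above):

rm ~/.ssh/google_compute_engine*
gcloud compute ssh my_vm_name --zone zone_vm_is_in
# gcloud generates a new key pair; type a new passphrase when prompted,
# or press Enter to leave it empty. If the first connection fails, wait
# a few seconds for the new public key to propagate, then retry.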
Once a week or once a month we update many server machines.
Sometimes a Git pull is enough, sometimes an SVN update, sometimes there are database changes, or a combination of those. There is also a project with many small servers that run a very simplified version of our system over very unreliable internet. Sometimes the update is done from one of the servers, sometimes from our local work computers.
I would like to make our work a bit easier by going through all our servers and performing the appropriate actions. I have found a couple of useful Perl packages: Net::SSH::Perl and Net::SSH::Expect.
Net::SSH::Perl fails me. I have not found out how to use its cmd method in succession. For example:
use Net::SSH::Perl;
use Data::Dumper;

my $ssh = Net::SSH::Perl->new($host);
$ssh->login($user, $pass);

my ($stdout, $stderr, $exit) = $ssh->cmd('cd web/scripts && ls -la');
warn Dumper $stdout;

# Re-use the variables instead of redeclaring them with my.
($stdout, $stderr, $exit) = $ssh->cmd('ls -la');
warn Dumper $stdout;
The two ls -la commands return different results. As far as I can understand (and as the documentation explains), each cmd call executes the command in its own session and then exits, so state such as the working directory is not preserved between calls. It is said that using version 2 of the SSH protocol (or something like it) should avoid this problem, but it persists (or I don't understand how to use it).
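To illustrate with plain ssh (host and user here are hypothetical), the same thing happens when each command gets its own invocation:

ssh user@host 'cd web/scripts && ls -la'   # one shell: lists web/scripts
ssh user@host 'ls -la'                     # new shell: lists the home directory again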
Also, if a password is asked of me (for example, if I run mysql -u user -p), I am unable to provide it. I've tried the second argument of $ssh->cmd($cmd, $stdin), but to no result. The mysql case is just an example: I might wish to add an IP to /etc/hosts and be prompted for the su password, or svn update a file and be asked for my SVN password. I know that most of those processes can be configured not to ask for passwords, but we want them to ask.
The $ssh->shell method seems like it would do the trick, but when I do something like this:
$ssh->shell(); `ls -la`;
the backticked command doesn't go to the SSH shell. Actually, I have no idea where it goes or whether it does anything.
Net::SSH::Expect fails me when the internet connection is bad.
For the MySQL changes I have created a Perl script that connects to each host separately and makes the changes I want. But it would be great if I could do it all in one script.
I would be very grateful to gain some more understanding on this topic.
This is what Ansible is made for. It uses SSH to communicate to multiple hosts, and provides a decent variable scoping system and flow control for applying various tasks to various hosts.
You can build your own configuration management with Perl, but Ansible, with raw commands (which don't require Python on the remote system) or more full-fledged modules (which do), is already implemented and takes the same approach. Do yourself a favor: don't reinvent this wheel.
Ansible is far from perfect, but it covers your use case very well.
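As a minimal sketch (the inventory file, group name, and paths below are made up), an ad-hoc run against all your servers could look like this:

# hosts.ini
#   [servers]
#   server1.example.com
#   server2.example.com

# Raw command; no Python needed on the remote hosts:
ansible servers -i hosts.ini -m raw -a 'cd /var/www/app && git pull'

# Or the git module, which does need Python on the remotes:
ansible servers -i hosts.ini -m git -a 'repo=git@example.com:app.git dest=/var/www/app version=master'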
I personally run it from a Docker container, because Python's package installation story is almost as bad as Perl's :P
On a pure-Perl basis you also have Rex; see the Rexify website.
It is a kind of Ansible: it has SSH, parallel jobs, and plenty of features, but is more Perl-ish.
OK, it's simpler than Ansible, but it is worth a try.
In Google Compute Engine I do not have sudo ability on my VM.
According to the documentation and other threads on this topic, sudo access should be granted automatically when I SSH in from the Google Console. It worked this way for a week or two, and now it prompts for a password (I also rebuilt a VM that did this same thing a couple of weeks ago).
I have tried letting my keys expire, opening and closing new sessions, and external SSH, and they all show the same problem.
Here is a screenshot from a new browser instance:
You can try using the sudo -s command from any user to switch directly to root and use sudo access. The root password can also be changed with the passwd root command once sudo access has been obtained.
That being said, the screen you are getting is the lecture that sudo is configured to display. This can be changed in the /etc/sudoers file by setting Defaults lecture=once, as explained in this article. If the lecture is set to once, sudo will show the prompt you are getting only the first time.
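A minimal sketch of that change (always edit sudoers through visudo, never directly):

sudo visudo
# then add or adjust this line in the file:
#   Defaults lecture=once
# (or Defaults lecture=never to suppress the lecture entirely)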
We are running a Node.js server that needs to connect to a MySQL database. We hosted our database on Amazon RDS, but now we've moved it over to Google Cloud SQL, and we're having trouble with the server randomly dropping the connection after 10 minutes.
Apparently that's a feature, not a bug, and the workaround is setting a low tcp keepalive in the machine we're connecting from, as described here: https://cloud.google.com/sql/docs/diagnose-issues
The code should be:
echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
sudo /sbin/sysctl --load=/etc/sysctl.conf
Unfortunately, when running the code I get:
sysctl: cannot stat /proc/sys/net/ipv4/tcp_keepalive_time: No such file or directory
We have root access to this machine, but we can't even manually create a file named tcp_keepalive_time in that directory.
We're extremely puzzled, as the solution comes from the official Google Cloud SQL docs and should therefore work as described.
Has anyone got any insights to share? Thanks in advance :)
Auto-answer:
You can't access the filesystem as admin (apparently) from the web cloud console.
We used gcloud auth (from the gcloud SDK) to log in from the terminal, PuTTYgen to create an SSH key, and then PuTTY to SSH into the machine from a proper SSH client (instead of the cloud SSH console), and sure enough it worked.
Weird, hope this helps someone else with the same issue!
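For anyone verifying the change after SSHing in properly, the setting can be checked like this (the value is in seconds):

sysctl net.ipv4.tcp_keepalive_time
cat /proc/sys/net/ipv4/tcp_keepalive_time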
I have created a Rails application and hosted it on EngineYard. Now I want to manually insert one record into my database. [Database: MySQL]
How can I access EngineYard's database from my local machine?
P.S.: I have come across this article and I can't infer a proper explanation from it. I have even searched for video tutorials and can't find any. Please help me.
At last I got the answer to my question, after spending a lot of time on this and getting support from the EngineYard team.
First, you need to create a local SSH key pair by following this article.
Next, you need to add that SSH key (something like id_rsa.pub or id_dsa.pub) at https://cloud.engineyard.com/keypairs
Then you need to execute this command in your local terminal
ssh deploy@ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com
where deploy is the common, default username and ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com is your instance's hostname.
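Once you are on the server you can open a MySQL shell and insert the record manually; the database name, credentials, table, and columns below are placeholders:

mysql -u your_db_user -p your_app_production
# then, at the mysql> prompt:
#   INSERT INTO users (name, email) VALUES ('Jane', 'jane@example.com');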
I am writing a bash script that I plan to execute via cron. In this script, I want to execute a command against a MySQL database, something like this:
$ mysql -u username -ppassword -e 'show databases;'
For clarity, and for those not familiar with MySQL: the -u switch takes the username for accessing the database and -p takes the password (the space after -p is omitted on purpose).
I am looking for a good way to keep the username/password handy for use in the script, but in a manner that will also keep this information secure from prying eyes. I have seen strategies that call for the following:
Keep the password in a file: pword.txt
chmod 700 pword.txt (remove permissions for all except the file's owner)
cat pword.txt into a variable in the script when needed for login.
but I don't feel that this is very secure either (something about keeping passwords in the clear makes me queasy).
So how should I go about safeguarding a password that will be used in an automated script on Linux?
One way you can obfuscate the password is to put it into an options file. This is usually located in ~/.my.cnf on UNIX/Linux systems. Here is a simple example showing user and password:
[client]
user=aj
password=mysillypassword
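A companion step worth taking (assuming the file lives at ~/.my.cnf) is to make it readable only by its owner; the mysql client then picks up the credentials without any flags on the command line:

chmod 600 ~/.my.cnf
mysql -e 'show databases;'   # user and password are read from ~/.my.cnf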
The only truly safe way to guard your password is to encrypt it. But then you have the problem of safeguarding the encryption key; this problem is turtles all the way down.
When the good people who built OpenSSH tackled this problem, they provided a tool called ssh-agent, which holds on to your credentials and lets you use them to connect to a server when needed. But even ssh-agent keeps a named socket in the filesystem, and anybody who can get access to that socket can act using your credentials.
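For reference, the typical ssh-agent flow looks like this (the key path is the usual default):

eval "$(ssh-agent -s)"   # start the agent and export SSH_AUTH_SOCK
ssh-add ~/.ssh/id_rsa    # type the passphrase once; the agent caches the key
ssh user@host            # later connections use the cached credentials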
I think the only two alternatives are:
Have a person type a password.
Trust the filesystem.
I'd trust only a local filesystem, not a remotely mounted one. But I'd trust it.
Security is hell.
Please see the documentation for some guidelines. An extra step you can take is to restrict the use of the ps command for normal users, if they have permission to access the server, since a password passed on the command line can be visible in the process list.
I'll agree with Norman that you should have someone type the password. If you just supply the -p flag without an accompanying password, mysql will prompt the user for it.
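For example:

mysql -u username -p -e 'show databases;'
# Enter password: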