Cannot send commands to remote machine using ssh2-python package - handshake

Problem
Hello, my problem is that I want to use the ssh2-python package to remotely read a bunch of files, but I can't seem to send commands to the remote host machine.
Originally I started with the paramiko package, and I did get that to work, but I am dealing with a lot of large files (which is why I can't bring them to the local machine), and it is a bit too slow. I am currently running Python 3.6.3 with ssh2-python 0.18.0.post1 and have tried changing versions of ssh2-python, but it didn't help.
Code
import socket
from ssh2.session import Session
host_ip = socket.gethostbyname('hostname')
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((host_ip, 22))
session = Session()
session.handshake(sock)
print(session.userauth_list('username'))
session.userauth_password('username', 'password')
channel = session.open_session()
channel.execute('echo Hello')
Code Prints the Following
0
['publickey', 'gssapi-keyex', 'gssapi-with-mic', 'password']
0
0
Expectation/Thoughts
I expected the code to print Hello, but instead it just printed 0. It also printed 0 after the handshake and after the call to the authentication method, and I have no idea why. It seems like I am in contact with the remote machine, as it did print out which authentication methods it would accept, but it doesn't appear that I am actually logged in and able to do anything. I would really like to use this package, as from what I read online it is significantly faster than paramiko (alternatives would be good too), but I can't seem to figure out what is going on here.
Please help and thanks in advance!

You may in fact be connected and executing commands, but channel.execute() returns 0 (its return/status code, where 0 means success), not the command's output.
If you want to read the response from the server:
channel.execute('echo Hello')
size, data = channel.read()
while size:
    size, dt = channel.read()
    data += dt
print(data.decode())
The API documentation for ssh2-python is rather sparse, but the examples should get you through some of the basics: https://github.com/ParallelSSH/ssh2-python/tree/master/examples
A complete version of the above is in example_echo.py
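For reference, here is a hedged adaptation of that example to the code above; it is a sketch, with host name and credentials as placeholders:

import socket
from ssh2.session import Session

host_ip = socket.gethostbyname('hostname')  # placeholder host
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((host_ip, 22))

session = Session()
session.handshake(sock)
session.userauth_password('username', 'password')  # placeholder credentials

channel = session.open_session()
channel.execute('echo Hello')

# Drain the channel: read() returns (bytes_read, data) until 0 bytes remain.
size, data = channel.read()
output = b''
while size > 0:
    output += data
    size, data = channel.read()
channel.close()

print(output.decode())
print("Exit status: %s" % channel.get_exit_status())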

Related

Python code in Google Cloud function not showing desired output

I have the following lines of Python code:
import os

def hello_world():
    r = os.system("curl ipinfo.io/ip")
    print(r)

hello_world()
This shows the desired output when executed from the command line in Google Cloud Shell, but there seems to be a 0 at the end of the IP address output:
$ python3 main2.py
34.X.X.2490
When I deployed the same code as a Google Cloud Function, it showed OK as output.
I had to replace the function definition as follows to make it deploy in GCF:
def hello_world(self):
Any suggestions so that GCF displays the desired output, i.e. the output of the curl command?
Your function won't work for two reasons:
Firstly, you don't respect the HTTP Cloud Function Python function signature:
def hello_world(request):
....
Secondly, you can't rely on system calls. More precisely, you can perform a system call, but because you don't know which packages/binaries are installed in the runtime image, you can't rely on them. It's serverless: you don't manage the underlying infrastructure or runtime environment.
Here you made the assumption that curl is installed on the runtime image. Maybe it is, maybe it isn't, maybe it was and will be removed in the future! You can't rely on that!
If you want to control your runtime environment, you can use Cloud Run. There you manage your runtime environment, you can install whatever you want in it, and then you are sure of what you can do.
Last remarks:
Note: instead of shelling out to curl, you can perform an HTTP GET request to the same URL to get the IP (see the sketch below).
Why do you want to know the outgoing IP? It's serverless; you don't manage the network either. You reach the internet through Google's IPs, which can change at any time, and other Cloud Functions (or Cloud Run services), from your projects or from other people's projects (like mine), can use the same IPs. They are Google's IPs, not yours! If this is a real requirement for you, let me know; there are solutions for that.
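As a side note, the trailing 0 in 34.X.X.2490 is explained by os.system(): it returns the command's exit status, so print(r) appends a 0 right after the IP that curl itself printed. A minimal sketch of an HTTP function that fetches the IP directly (assuming the requests package is listed in requirements.txt):

import requests

def hello_world(request):
    """HTTP Cloud Function: return this instance's outgoing IP."""
    # ipinfo.io/ip answers with the caller's public IP as plain text.
    ip = requests.get("https://ipinfo.io/ip", timeout=10).text.strip()
    return ip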

STM32 StdPeriph library USART example

I downloaded the StdPeriph library and I want to make the USART example run on the STM32F4 Discovery. I chose the STM32F40_41xxx workspace, added the stm32f324x7i.c file, and compiled without any errors.
The issue is that I can't receive the expected message in my terminal (using Hercules); also, when I check RxBuffer, it is receiving some bytes, but not the ones I sent.
I have checked the baud rate, word length, and parity several times. Do you have any idea what I could be doing wrong?
USART conf:
USART_InitStructure.USART_BaudRate = 9600;
USART_InitStructure.USART_WordLength = USART_WordLength_8b;
USART_InitStructure.USART_StopBits = USART_StopBits_2;
USART_InitStructure.USART_Parity = USART_Parity_Odd;
USART_InitStructure.USART_HardwareFlowControl = USART_HardwareFlowControl_None;
USART_InitStructure.USART_Mode = USART_Mode_Rx | USART_Mode_Tx;
STM_EVAL_COMInit(COM1, &USART_InitStructure);
Thank you.
First of all, if you want to use high-level abstraction libraries, stop using the obsolete SPL and start using HAL. Install STM32CubeMX, generate the code, import it into your favorite IDE, and compile. It should work.
Your code does not show enough: the USART clock may not be enabled, and the same goes for the GPIOs. The GPIOs may be configured the wrong way. Your system and peripheral clocks may run at the wrong frequency. There are many more potential problems.
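For illustration, here is a sketch of the steps a manual SPL init has to cover, assuming USART2 on PA2/PA3 (a common wiring on the F4 Discovery; the instance and pins are assumptions, since STM_EVAL_COMInit hides them). Note also that on the STM32 the configured word length includes the parity bit, so 8b with odd parity carries only 7 data bits; a terminal expecting 8-N-1 will then show wrong bytes.

#include "stm32f4xx.h"

void usart2_init(void)
{
    GPIO_InitTypeDef  GPIO_InitStructure;
    USART_InitTypeDef USART_InitStructure;

    /* 1. Enable the peripheral clocks first; a disabled clock is a classic
       cause of a dead or garbled UART. */
    RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_GPIOA, ENABLE);
    RCC_APB1PeriphClockCmd(RCC_APB1Periph_USART2, ENABLE);

    /* 2. Route PA2/PA3 to the USART2 alternate function. */
    GPIO_PinAFConfig(GPIOA, GPIO_PinSource2, GPIO_AF_USART2);
    GPIO_PinAFConfig(GPIOA, GPIO_PinSource3, GPIO_AF_USART2);

    GPIO_InitStructure.GPIO_Pin   = GPIO_Pin_2 | GPIO_Pin_3;
    GPIO_InitStructure.GPIO_Mode  = GPIO_Mode_AF;
    GPIO_InitStructure.GPIO_OType = GPIO_OType_PP;
    GPIO_InitStructure.GPIO_PuPd  = GPIO_PuPd_UP;
    GPIO_InitStructure.GPIO_Speed = GPIO_Speed_50MHz;
    GPIO_Init(GPIOA, &GPIO_InitStructure);

    /* 3. Configure the USART; every setting here must match the terminal. */
    USART_InitStructure.USART_BaudRate   = 9600;
    USART_InitStructure.USART_WordLength = USART_WordLength_8b;
    USART_InitStructure.USART_StopBits   = USART_StopBits_1;
    USART_InitStructure.USART_Parity     = USART_Parity_No;
    USART_InitStructure.USART_HardwareFlowControl = USART_HardwareFlowControl_None;
    USART_InitStructure.USART_Mode = USART_Mode_Rx | USART_Mode_Tx;
    USART_Init(USART2, &USART_InitStructure);

    USART_Cmd(USART2, ENABLE);
}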

my nodejs script is not exiting on its own after successful execution

I have written a script to update my DB table after reading data from DB tables and Solr. I am using the async.waterfall module. The problem is that the script does not exit on its own after all operations complete successfully. I have also used a DB connection pool, thinking that it might be what makes the script wait indefinitely.
I want to put this script in crontab, and if it does not exit properly it will create a whole lot of unnecessary instances.
I just went through this issue.
The problem with just using process.exit() is that the program I am working on was creating handles, but never destroying them.
It was processing a directory and putting data into OrientDB.
Some of the things I have come to learn are that database connections need to be closed before getting rid of the reference, and that process.exit() does not solve all cases.
When my project processed 2,000 files, it would get down to about 500 left, and by then the extra handles would have filled up the available working memory, which meant it could not continue and therefore never reached the process.exit() at the end.
On the other hand, if you close the items that are requesting the app to stay open, you can solve the problem at its source.
The two "Undocumented Functions" that I was able to use, were
process._getActiveHandles();
process._getActiveRequests();
I am not sure what other functions will help with debugging these types of issues, but these ones were amazing.
They return an array, and you can determine a lot about what is going on in your process by using these methods.
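A quick sketch of how these might be used from a debug session or a temporary log statement (the output shape is undocumented and may vary between Node versions):

// Inspect what is keeping the event loop alive.
const handles = process._getActiveHandles();
const requests = process._getActiveRequests();
console.log(handles.map(h => h.constructor.name)); // e.g. [ 'Socket', 'Server' ]
console.log(requests.length, 'pending requests');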
You have to tell it when you're done, by calling
process.exit();
More specifically, you'll want to call this in the callback from async.waterfall() (the second argument to that function). At that point, all your asynchronous code has executed, and your script should be ready to exit.
EDIT: As pointed out by @Aaron below, this likely has to do with something like a database connection remaining active and not allowing the Node process to end.
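A minimal sketch of that pattern, assuming the async and mysql modules (the pool and queries are placeholders for whatever your waterfall does):

const async = require('async');
const mysql = require('mysql');

const pool = mysql.createPool({ host: 'localhost', user: 'me', database: 'mydb' });

async.waterfall([
    (cb) => pool.query('SELECT id FROM docs', cb),
    (rows, fields, cb) => {
        // ... update Solr / the DB here ...
        cb(null, rows.length);
    },
], (err, count) => {
    // All steps are done (or one failed): close the pool so nothing keeps
    // the event loop alive, then exit explicitly with a meaningful code.
    pool.end(() => process.exit(err ? 1 : 0));
});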
You can use the node module why-is-node-running:
Run npm install -D why-is-node-running
Add import * as log from 'why-is-node-running'; in your code
When you expect your program to exit, add a log statement:
afterAll(async () => {
    await app.close();
    log();
})
This will print a list of open handles with a stacktrace to find out where they originated:
There are 5 handle(s) keeping the process running
# Timeout
/home/maf/dev/node_modules/why-is-node-running/example.js:6 - setInterval(function () {}, 1000)
/home/maf/dev/node_modules/why-is-node-running/example.js:10 - createServer()
# TCPSERVERWRAP
/home/maf/dev/node_modules/why-is-node-running/example.js:7 - server.listen(0)
/home/maf/dev/node_modules/why-is-node-running/example.js:10 - createServer()
We can quit the execution by using:
connection.destroy();
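For example, with the mysql module (a sketch; the connection details are placeholders), destroying the connection removes the open socket handle so the event loop can drain and the process can exit:

const mysql = require('mysql');
const connection = mysql.createConnection({ host: 'localhost', user: 'me' });

connection.query('SELECT 1', (err, rows) => {
    // Close the socket immediately; no further queries are possible.
    connection.destroy();
});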
If you use Visual Studio code, you can attach to an already running Node script directly from it.
First, run the Debug: Attach to Node Process command. When you invoke it, VS Code will prompt you for the Node.js process to attach to.
Your terminal should display this message:
Debugger listening on ws://127.0.0.1:9229/<...>
For help, see: https://nodejs.org/en/docs/inspector
Debugger attached.
Then, inside your debug console, you can use the code from The Lazy Coder’s answer:
process._getActiveHandles();
process._getActiveRequests();

How to Use RCurl or RMongo via HTTP with Authentication and Self Signed SSL to Read in JSON Data

I am using R to write a program and perform some analyses. The data is being captured by an outside vendor with MongoDB in JSON format. They are providing it to me via a URI on port 443, which they want me to query using cURL. They have authentication in place and self signed SSL.
I can authenticate and dump the data via curl in Windows; however, to create a long-term sustainable solution, it all needs to be done within R.
The vendor says that RCurl "should" work, but they aren't providing any support. They basically just don't like the idea of using RMongo and have no comment on it (though if we could make that work, it would be awesome, in my opinion).
I have the following packages loaded
- ggplot2
- DBI
- rjson
- RJSONIO (I sometimes don't load this one if I'm using rjson, or vice versa)
- RMongo
- rstudio
- RCurl
The self-signed certificate caused issues even with curl, but those were resolved by editing settings in Ruby and then launching a cmd shell with Ruby and using curl that way. I'm not sure whether the problems in R are related.
When trying to go the RCurl route I end up with commands/errors like this:
x <- getURL("https://xxx.xx.xxx.xxx:443/db/_authenticate", userpwd="xxxx:xxxxx")
Error in function (type, msg, asError = TRUE) : couldn't connect to host
and when trying to use RMongo I'm even more clueless...
> mongo <- mongoDbConnect("xxx.xx.xxx.xxx")
username = "xxxx"
password="xxxxxxxxxxxxx"
authenticated <- dbAuthenticate(mongo, username, password)
Feb 25, 2013 4:00:09 PM com.mongodb.DBTCPConnector fetchMaxBsonObjectSize
WARNING: Exception determining maxBSON size using0
java.io.IOException: couldn't connect to [/127.0.0.1:27017] bc:java.net.ConnectException: Connection refused: connect
at com.mongodb.DBPort.open(DBPort.java:224)
at com.mongodb.DBPort.go(DBPort.java:101)
at com.mongodb.DBPort.go(DBPort.java:82)
at com.mongodb.DBPort.findOne(DBPort.java:142)
at com.mongodb.DBPort.runCommand(DBPort.java:151)
at com.mongodb.DBTCPConnector.fetchMaxBsonObjectSize(DBTCPConnector.java:429)
at com.mongodb.DBTCPConnector.checkMaster(DBTCPConnector.java:416)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:193)
at com.mongodb.DBApiLayer$MyCollection._find(DBApiLayer.java:303)
at com.mongodb.DB.command(DB.java:159)
at com.mongodb.DB.command(DB.java:144)
at com.mongodb.DB._doauth(DB.java:503)
at com.mongodb.DB.authenticate(DB.java:440)
at rmongo.RMongo.dbAuthenticate(RMongo.scala:24)
Error in .jcall(rmongo.object@javaMongo, "Z", "dbAuthenticate", username, :
com.mongodb.MongoException$Network: can't call something
Feb 25, 2013 4:00:10 PM com.mongodb.DBPortPool gotError
WARNING: emptying DBPortPool to 127.0.0.1:27017 b/c of error
java.io.IOException: couldn't connect to [/127.0.0.1:27017] bc:java.net.ConnectException: Connection refused: connect
at com.mongodb.DBPort._open(DBPort.java:224)
at com.mongodb.DBPort.go(DBPort.java:101)
at com.mongodb.DBPort.go(DBPort.java:82)
at com.mongodb.DBPort.call(DBPort.java:72)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:202)
at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:303)
at com.mongodb.DB.command(DB.java:159)
at com.mongodb.DB.command(DB.java:144)
at com.mongodb.DB._doauth(DB.java:503)
at com.mongodb.DB.authenticate(DB.java:440)
at rmongo.RMongo.dbAuthenticate(RMongo.scala:24)
Any help would be greatly appreciated!
I had an issue in the past with RCurl where I needed to explicitly point it toward the security certificates to get it to work okay. I ended up needing something like this:
out <- postForm("https://url.org/api/",
                token="IMATOKEN",
                .opts=curlOptions(cainfo="C:/path/aaa.crt"))
I had manually exported the certificate I needed to get that working.
Also, it kind of looks like you should be doing a POST request given that URI, not a GET. Try the postForm() command, maybe?
EDITED TO ADD:
Okay, I think things might be a little more clear if we stepped back a second. Is your goal to get some file from a specific URL (basically, doing a wget but from within R)? Or is your goal to submit a form that subsequently returns the data you need?
If you are just trying to get something that is behind basic (and fairly insecure) HTTP authentication, you should do two things:
Tell your data provider to use a more secure option
Use the getURL() option as shown (using the www.omegahat.org example you posted about):
Code:
getURL("http://www.omegahat.org/RCurl/testPassword/",.opts=list(userpwd="bob:welcome"))
OR
getURL("http://bob:welcome#www.omegahat.org/RCurl/testPassword/")
Now, if you need to submit a form to get the data, you would generally pass authentication tokens, etc., as parameters (so, in the example above, token="IMATOKEN").
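Putting the two pieces together for the original HTTPS endpoint, a sketch might look like this (the URL, credentials, and certificate path are placeholders from the question):

library(RCurl)
library(rjson)

# Point RCurl at the exported self-signed certificate, pass the basic-auth
# credentials, and parse the JSON the endpoint returns.
raw <- getURL("https://xxx.xx.xxx.xxx:443/db/_authenticate",
              userpwd = "xxxx:xxxxx",
              .opts   = curlOptions(cainfo = "C:/path/aaa.crt"))
docs <- fromJSON(raw)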

APPFabric client communication error?

I have configured two AppFabric instances and am trying to connect to the cache from a test client.
At first, I had trouble getting the cache using DataCacheFactory, but after opening ports 22233-22235 in the firewall I managed to get it.
As soon as I try to use the cache for a very small object (using a simple get), I get the following with a null InnerException:
ErrorCode:SubStatus:The connection was terminated, possibly due to server or network problems or serialized Object size is greater than MaxBufferSize on server. Result of the request is unknown.
I don't believe it's the MaxBufferSize issue (I also modified the transportProperties in the config just to make sure), but on the other hand, I'm able to get the cache, which I believe indicates that the client can communicate with the server. So what is it? How can I get more details on this issue?
Thanks in advance,
Nir.
Got this to work!
All I needed to do was add the host names, as they appear in the ClusterConfig file, to the client's hosts file, and that's it!
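For example (the host names must match ClusterConfig.xml exactly; the names and addresses below are placeholders):

# %SystemRoot%\System32\drivers\etc\hosts on the client machine
10.0.0.11   cachehost1
10.0.0.12   cachehost2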
Hope that helps anyone,
Nir.