How to automatically upload a .json to Firebase on a daily basis

Our company generates a .json file on a daily basis containing the data for our mobile app, whose database is on Firebase. We upload the data manually, but we've been doing it for a couple of months now and it is a pain in the butt.
Our supplier tried to create an uploader which works with this command: gcsupload-windows.exe /key:"C:\Data\myapp-test-sdk.json" /bucket:"myapp-test.appspot.com" /dst:"Import" "C:\Data\json\*.json". They created it based on https://github.com/googleapis/google-cloud-go/tree/master/storage, but this is not my area of expertise, so I can only tell you what it does.
DevOps created a Windows Server Core machine for me and said it would be enough, so I am reliant on the command line only...
When I go to CMD and run the command outside of the domain, it does upload the .json to the server, so I am sure that the uploader and the command are correct and working properly. BUT when I am inside the domain it goes haywire and the command replies "Failed to get bucket metadata" and so on...
CMD Input: PS D:\Uploader> .\gcsupload-windows.exe /key:"D:\Firebase_Key\myapp-test.json" /bucket:"myapp-test.appspot.com" /dst:"Import-Test" /src:"D:\myapp_json"
CMD Output: Failed to get bucket metadata: Get "https://storage.googleapis.com/storage/v1/b/myapp-test.appspot.com?alt=json&prettyPrint=false&projection=full" oauth2: cannot fetch token: Post "https://oauth2.googleapis.com/token" dial tcp 216.58.201.74:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
The proxy is set correctly on my machine and all the traffic is accepted on the proxy server.
The supplier said that it might be something with gRPC, but again, this is not my area of expertise, so please, wise stackoverflowers, ask me anything, just please help me with this. Thank you.

For anyone still interested in this...
We stumbled across the solution (more like a workaround) by accident. I tried everything I could find on the internet; I had to add this
$proxyString = "http://proxy:8080"
$proxyUri = new-object System.Uri($proxyString)
[System.Net.WebRequest]::DefaultWebProxy = new-object System.Net.WebProxy ($proxyUri, $true)
[System.Net.WebRequest]::DefaultWebProxy.Credentials=[System.Net.CredentialCache]::DefaultCredentials
just to pick up the IE proxy settings, which kind of helped, but it was not enough. Some suggested modifying the uploader to take credentials for authenticating against the proxy, but that looked unsafe.
I was helpless. I tried invoke-webrequest http://google.com just to be sure I was connecting through the proxy, then tried the uploader command again, and voila - it worked!
It looks like invoke-webrequest primes the proxy for everything that runs after it; in any case, it works. So my whole script looks like this:
# Point .NET's default web proxy at the corporate proxy, using the current user's credentials
$proxyString = "http://proxy:8080"
$proxyUri = new-object System.Uri($proxyString)
[System.Net.WebRequest]::DefaultWebProxy = new-object System.Net.WebProxy ($proxyUri, $true)
[System.Net.WebRequest]::DefaultWebProxy.Credentials=[System.Net.CredentialCache]::DefaultCredentials
# A throwaway request that seems to "warm up" the proxy connection - without it the upload fails
invoke-webrequest http://google.com
# Expose the proxy to the Go-based uploader via the HTTP_PROXY environment variable
$Env:HTTP_PROXY = "http://proxy:8080"
D:\Uploader\gcsuploader.exe /key:"D:\Firebase_key\myapp-prod-firebase-adminsdk.json" /bucket:"myapp-prod.appspot.com" /dst:"Import" "D:\Myapp_json\*.json" >> "c:\Uploader_logs\uploader $(get-date -f yyyy-MM-dd).log" 2>&1
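To actually run this on a daily basis, the script can be scheduled with the Task Scheduler cmdlets that ship with Server Core. A minimal sketch, assuming the script above is saved as D:\Uploader\upload.ps1 (a hypothetical path) and that the SYSTEM account can reach the proxy:
# Hypothetical script path - adjust to your layout
$action = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-ExecutionPolicy Bypass -File D:\Uploader\upload.ps1'
# Fire once a day at 02:00
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName 'Firebase-Json-Upload' -Action $action -Trigger $trigger -User 'SYSTEM'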

Cannot send commands to remote machine using ssh2-python package

Problem
Hello, my problem is that I want to use the ssh2-python package to remotely read a bunch of files, but I can't seem to send commands to the remote host machine.
Originally I started with the paramiko package, and I did get that to work, but I am dealing with a lot of large files (which is why I can't bring them to the local machine) and it is a bit too slow. I am currently running Python 3.6.3 & ssh2-python 0.18.0.post1 and have tried changing versions of ssh2-python, but it didn't help.
Code
import socket
from ssh2.session import Session
host_ip=socket.gethostbyname('hostname')
sock=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((host_ip,22))
session=Session()
session.handshake(sock)
print(session.userauth_list('username'))
session.userauth_password('username','password')
channel=session.open_session()
channel.execute('echo Hello')
Code Prints the Following
0
['publickey', 'gssapi-keyex', 'gssapi-with-mic', 'password']
0
0
Expectation/Thoughts
I expected the code to print Hello, but instead it just printed 0. It also printed 0 after the handshake and after the call to the authentication method, and I have no idea why. It seems like I am in contact with the remote machine, as it did print out which authentication methods it would accept, but it doesn't appear that I am actually logged in and able to do anything. I would really like to use this package, as from what I read online it is significantly faster than paramiko (alternatives would be good too), but I can't seem to figure out what is going on here.
Please help and thanks in advance!
You may in fact be connected and executing commands; channel.execute('ls') returns 0, which is the call's status code, not your command's output.
If you want to read your response from the server:
channel.execute('echo Hello')
size, data = channel.read()
while size > 0:
    size, dt = channel.read()
    data += dt
print(data.decode())
The API documentation for ssh2-python is rather sparse, but the examples should get you through some of the basics: https://github.com/ParallelSSH/ssh2-python/tree/master/examples
A complete version of the above is in example_echo.py
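The 0s you saw are just those per-call status codes. If you also want the command's real exit status once the output has been read, something along these lines should work (a sketch against the same blocking session as above):
# Let the remote side finish, close the channel, then ask for the exit status
channel.wait_eof()
channel.close()
channel.wait_closed()
print('Exit status: %d' % channel.get_exit_status())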

boto3 cache session token not working

Either there's something borked in my environment or this functionality is broken. It appears it worked at one point, according to the blog post I followed.
What I'd like to do is run my script and enter the MFA code once, then be able to run it again without entering the MFA code, making use of the cached session token.
The samples I've seen are:
session = boto3.Session(profile_name='w2-cf3')
ec2_client = session.client('ec2',region_name='us-west-2')
I'm then prompted for my mfa:
Enter MFA code:
I enter it and my code runs. At this point my session token should be cached; that's how it works in awscli. However, on the second run, instead of reading in my cached session for this profile, boto3 disregards it and prompts me again for my MFA:
Enter MFA code:
Here's what my ~/.aws/config file looks like:
[profile default]
region = us-west-2
output = json
[profile w2-cf3]
region = us-west-2
source_profile = default
role_arn = arn:aws:iam::<accountid>:role/<role>
mfa_serial = arn:aws:iam::<accountid>:mfa/<user>
Here's what my ~/.aws/credentials file looks like:
[default]
aws_access_key_id=<access key>
aws_secret_access_key=<secret key>
Expected: I expected that the second time I run my script it would make use of the cached session token, like it does in awscli. The session token provided by AWS lasts 1 hour.
This is discussed in the GitHub repo for botocore here, and a pull request has been submitted and is being discussed.
You're correct: this seems to have been working back in 2014 but has since been removed. From the discussion on the thread mentioned above, it should be re-implemented soon; follow the pull request thread and make sure to upgrade when it is released.
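In the meantime, a workaround in the spirit of what awscli does is to attach a file-backed cache to botocore's assume-role credential provider. A minimal sketch, assuming a reasonably recent botocore; it reaches into internals, so it may break between releases, and it points at the awscli's own cache directory so tokens are shared with the CLI:
import os
import boto3
import botocore.session
from botocore.credentials import JSONFileCache

# Reuse the awscli's cache directory so tokens cached by the CLI are picked up too
cli_cache = os.path.join(os.path.expanduser('~'), '.aws', 'cli', 'cache')

# Attach a JSON file cache to the assume-role provider (botocore internals)
bc_session = botocore.session.Session(profile='w2-cf3')
cred_chain = bc_session.get_component('credential_provider')
cred_chain.get_provider('assume-role').cache = JSONFileCache(cli_cache)

# Build the boto3 session on top of the patched botocore session
session = boto3.Session(botocore_session=bc_session)
ec2_client = session.client('ec2', region_name='us-west-2')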

How to ping from Zabbix agent?

Is it possible to ping from the Zabbix agent and pass that data to the Zabbix server? I would like to be able to get the response time from the agent.
I read that it is possible using fping; it would be great if someone could guide me down the correct path.
Thank you,
Rijath Mohammed
While that is not currently available out of the box, you can implement such functionality using a feature called "user parameters". This forum thread has a simple example:
UserParameter=myping[*],/etc/zabbix/fping -q $1;echo $?
Although in your case the path to fping is more likely to be /usr/sbin/fping or /usr/bin/fping.
You can read more about user parameters in the official manual: https://www.zabbix.com/documentation/3.0/manual/config/items/userparameters .
While I haven't ever configured that, it would be similar on Windows - see this forum thread for some inspiration.
And if you would like to see this feature implemented out of the box, make sure to vote on this feature request.
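Note that the example above only returns fping's exit code (0 if the host answered). Since you are after the response time itself, a user parameter along these lines might work better (untested; ping.time is a made-up key, the fping path may differ, and the doubled $$ escapes a literal dollar sign inside a UserParameter):
UserParameter=ping.time[*],/usr/sbin/fping -C1 -q $1 2>&1 | awk '{print $$NF}'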
Got it working using the PowerShell script below :)
$Test = test-connection google.com -count 1
$Test.responsetime
This just returns the response time for google.com, and that value is passed to Zabbix using the user parameter below:
UnsafeUserParameters=1
UserParameter=ping.google,C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe C:\zabbix\pinggoogle.ps1
I am calling this parameter from Zabbix using the key "ping.google".
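To check the key end to end before wiring it into the frontend, you can query the agent directly with zabbix_get from the server (assuming it is installed there; the IP below is a placeholder for your agent's address):
zabbix_get -s 192.0.2.10 -k ping.google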

Need to be able to Insert/Delete New Groups in openfire via HTTP or MySQL

I know how to insert a new group via MySQL, and it works, to a degree. The problem is that the database changes are not loaded into memory if you insert the group manually. Sending a HUP signal to the process does work, but it is kludgy and a hack. I desire elegance :)
What I am looking to do, if possible, is to make changes (additions/deletions/modifications) to a group via MySQL and then send an HTTP request to the Openfire server to read the new changes. Or, alternatively, to add/delete/modify groups similar to how the User Service works.
If anyone can help I would appreciate it.
It seems to me that if sending a HUP signal works for you, then that's actually quite a simple, elegant and efficient way to get Openfire to read your new group, particularly if you do it with the following command on the Openfire server (and assuming it's running a Linux/Unix OS):
pkill -f -HUP openfire
If you still want to send an HTTP request to prompt Openfire to re-read the groups, the following Python script should do the job. It is targeted at Openfire 3.8.2, and depends on Python's mechanize library, which in Ubuntu is installed with the python-mechanize package. The script logs into the Openfire server, pulls up the Cache Summary page, selects the Group and Group Metadata Cache options, enables the submit button and then submits the form to clear those two caches.
#!/usr/bin/python
import mechanize
import cookielib
# Customize to suit your setup
of_host = 'http://openfire.server:9090'
of_user = 'admin_username'
of_pass = 'admin_password'
# Initialize browser and cookie jar
br = mechanize.Browser()
br.set_cookiejar(cookielib.LWPCookieJar())
# Log into Openfire server
br.open(of_host + '/login.jsp')
br.select_form('loginForm')
br.form['username'] = of_user
br.form['password'] = of_pass
br.submit()
# Select which cache items to clear in the Cache Summary page
# On my server, 13 is Group and 14 is Group Metadata Cache
br.open(of_host + '/system-cache.jsp')
br.select_form('cacheForm')
br.form['cacheID'] = ['13','14']
# Activate the submit button and submit the form
c = br.form.find_control('clear')
c.readonly = False
c.disabled = False
r = br.submit()
# Uncomment the following line if you want to view results
#print r.read()

How to Use RCurl or RMongo via HTTP with Authentication and Self Signed SSL to Read in JSON Data

I am using R to write a program and perform some analyses. The data is being captured by an outside vendor with MongoDB in JSON format. They are providing it to me via a URI on port 443, which they want me to query using cURL. They have authentication in place and self signed SSL.
I can authenticate and dump the data via curl in Windows; however, to create a long-term sustainable solution, it all needs to be done within R.
The vendor says that RCurl "should" work but they aren't providing any support and they basically just don't like the idea of using RMongo and have no comment on it (but if we could make it work that would be awesome, in my opinion).
I have the following packages loaded
- ggplot2
- DBI
- rjson
- RJSONIO (I sometimes don't load this one if I'm using rjson, or vice versa)
- RMongo
- rstudio
- RCurl
The self signed certificate caused issues even with curl, but those were resolved by editing settings in Ruby and then launching a cmd shell with Ruby and using curl that way. I'm not sure if the problems in R are related.
When trying to go the RCurl route I end up with commands/errors like this:
x <- getURL("https://xxx.xx.xxx.xxx:443/db/_authenticate", userpwd="xxxx:xxxxx")
Error in function (type, msg, asError = TRUE) : couldn't connect to host
and when trying to use RMongo I'm even more clueless...
> mongo <- mongoDbConnect("xxx.xx.xxx.xxx")
username = "xxxx"
password="xxxxxxxxxxxxx"
authenticated <- dbAuthenticate(mongo, username, password)
Feb 25, 2013 4:00:09 PM com.mongodb.DBTCPConnector fetchMaxBsonObjectSize
WARNING: Exception determining maxBSON size using0
java.io.IOException: couldn't connect to [/127.0.0.1:27017] bc:java.net.ConnectException: Connection refused: connect
at com.mongodb.DBPort.open(DBPort.java:224)
at com.mongodb.DBPort.go(DBPort.java:101)
at com.mongodb.DBPort.go(DBPort.java:82)
at com.mongodb.DBPort.findOne(DBPort.java:142)
at com.mongodb.DBPort.runCommand(DBPort.java:151)
at com.mongodb.DBTCPConnector.fetchMaxBsonObjectSize(DBTCPConnector.java:429)
at com.mongodb.DBTCPConnector.checkMaster(DBTCPConnector.java:416)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:193)
at com.mongodb.DBApiLayer$MyCollection._find(DBApiLayer.java:303)
at com.mongodb.DB.command(DB.java:159)
at com.mongodb.DB.command(DB.java:144)
at com.mongodb.DB._doauth(DB.java:503)
at com.mongodb.DB.authenticate(DB.java:440)
at rmongo.RMongo.dbAuthenticate(RMongo.scala:24)
Error in .jcall(rmongo.object#javaMongo, "Z", "dbAuthenticate", username, :
com.mongodb.MongoException$Network: can't call something
Feb 25, 2013 4:00:10 PM com.mongodb.DBPortPool gotError
WARNING: emptying DBPortPool to 127.0.0.1:27017 b/c of error
java.io.IOException: couldn't connect to [/127.0.0.1:27017] bc:java.net.ConnectException: Connection refused: connect
at com.mongodb.DBPort._open(DBPort.java:224)
at com.mongodb.DBPort.go(DBPort.java:101)
at com.mongodb.DBPort.go(DBPort.java:82)
at com.mongodb.DBPort.call(DBPort.java:72)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:202)
at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:303)
at com.mongodb.DB.command(DB.java:159)
at com.mongodb.DB.command(DB.java:144)
at com.mongodb.DB._doauth(DB.java:503)
at com.mongodb.DB.authenticate(DB.java:440)
at rmongo.RMongo.dbAuthenticate(RMongo.scala:24)
Any help would be greatly appreciated!
I had an issue in the past with RCurl where I needed to explicitly point it toward the security certificates to get it to work okay. I ended up needing something like this:
out <- postForm("https://url.org/api/",
                token="IMATOKEN",
                .opts=curlOptions(cainfo="C:/path/aaa.crt"))
I had manually exported the certificate I needed to get that working.
Also, it kind of looks like you should be doing a POST request given that URI, not a GET. Try the postForm() command, maybe?
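For the original endpoint, combining the certificate hint with basic authentication might look like this (a sketch; the certificate path is a placeholder, and ssl.verifypeer=FALSE is insecure and only for ruling the certificate out):
# Point RCurl at the exported self-signed certificate and pass basic auth
x <- getURL("https://xxx.xx.xxx.xxx:443/db/_authenticate",
            userpwd = "xxxx:xxxxx",
            .opts = curlOptions(cainfo = "C:/path/selfsigned.crt"))
# Or, as a quick insecure test, skip peer verification entirely:
x <- getURL("https://xxx.xx.xxx.xxx:443/db/_authenticate",
            userpwd = "xxxx:xxxxx",
            .opts = curlOptions(ssl.verifypeer = FALSE))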
EDITED TO ADD:
Okay, I think things might be a little more clear if we stepped back a second. Is your goal to get some file from a specific URL (basically, doing a wget but from within R)? Or is your goal to submit a form that subsequently returns the data you need?
IF you are just trying to get something that is behind basic (and also fairly INSECURE) HTTP authentication, you should do two things:
Tell your data provider to use a more secure option
Use the getURL() option as shown (using the www.omegahat.org example you posted about):
Code:
getURL("http://www.omegahat.org/RCurl/testPassword/",.opts=list(userpwd="bob:welcome"))
OR
getURL("http://bob:welcome#www.omegahat.org/RCurl/testPassword/")
Now, if you need to submit a form to get the data, you would generally pass authentication tokens, etc., as parameters (so, in the example above, token="IMATOKEN").