How do I add client_secrets.json to my Heroku app? [duplicate]

I'm building a Rails app that pulls data from Google Analytics using the Google API Client Library for Ruby.
I'm using OAuth2 and can get everything working in development on my local machine. My issue is that the library uses a downloaded file, client_secrets.json, to store two secret keys.
Problem: I'm using Heroku and need a way to get this file onto their production servers.
I don't want to add this file to my GitHub repo, as the project is public.
If there were a way to temporarily add the file to git, push to Heroku, and then remove it from git, that would be fine. My sense, though, is that the keys would end up in the commit history and be very hard to keep from showing on GitHub.
Tried:
As far as I can tell, you cannot SCP a file to Heroku via a Bash console. I believe that when you do this you get a new dyno, and anything you add would only be temporary. I tried this but couldn't get SCP to work properly, so I'm not 100% sure about this.
Tried:
I looked at storing the JSON file in an environment or config var, but couldn't get it to work. This seems like the best way to go, if anyone has a thought. I often run into trouble when Ruby converts JSON into a string or hash, so possibly I just need guidance there.
Tried:
Additionally, I've tried to figure out a way to just pull the keys out of the JSON file, put them into config vars, and add the JSON file to git. I can't figure out a way to put ENV["KEY"] in a JSON file, though.
Example Code
The Google library has a method that loads the JSON file to create an authorization client. The client then fetches a token (or returns an authorization URL).
client_secrets = Google::APIClient::ClientSecrets.load('client_secrets.json')
auth_client = client_secrets.to_authorization
** Note that the example on Google's page doesn't show a filename because it uses a default ENV var that's been set to a path.
I figure this would all be a lot easier if the ClientSecrets.load() method would just take JSON, a string, or a hash, which could go into a config var.
Unfortunately it always seems to want a file path; when I feed it JSON, a string, or a hash, it blows up. I've seen someone get around the issue with a p12 key here, but I'm not sure how to replicate that in my situation.
Haven't Tried:
My only other thought (aside from moving to AWS) is to put the JSON file on AWS and have Rails pull it when needed. I'm not sure if this can be done on the fly or if the file would need to be pulled down when the Rails server boots. It seems like too much work, but at this point I've spent a few hours on it, so I'm ready to attempt it.
This is the specific controller I am working on:
https://github.com/dladowitz/slapafy/blob/master/app/controllers/welcome_controller.rb

By searching GitHub I found that someone had used a different method that takes a JSON string as an argument rather than a file path: Google::APIClient::ClientSecrets.new(JSON.parse(ENV['GOOGLE_CLIENT_SECRETS']))
This lets me wrap the JSON up in an ENV var. The world makes sense again.
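For anyone else hitting this, a minimal sketch of how it fits together, assuming the legacy google-api-client gem (the require path is my assumption) and that the entire contents of client_secrets.json were pasted into a Heroku config var named GOOGLE_CLIENT_SECRETS:
require 'json'
require 'google/api_client/client_secrets'

# On Heroku, something like: heroku config:set GOOGLE_CLIENT_SECRETS="$(cat client_secrets.json)"
client_secrets = Google::APIClient::ClientSecrets.new(JSON.parse(ENV['GOOGLE_CLIENT_SECRETS']))
auth_client    = client_secrets.to_authorization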

As discussed in this thread, rather than supplying a path to a JSON key file, you can instead set three ENV variables:
GOOGLE_ACCOUNT_TYPE=service_account
GOOGLE_PRIVATE_KEY=XXX
GOOGLE_CLIENT_EMAIL=XXX
Source here.
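A hedged sketch of how those variables get picked up, assuming the googleauth gem (which builds service-account credentials from GOOGLE_ACCOUNT_TYPE, GOOGLE_PRIVATE_KEY and GOOGLE_CLIENT_EMAIL when no key file is present); the Analytics scope is just an example:
require 'googleauth'

# With the three vars above set (e.g. via heroku config:set), application-default
# credentials can be built without shipping any key file with the app.
scopes     = ['https://www.googleapis.com/auth/analytics.readonly']
authorizer = Google::Auth.get_application_default(scopes)
authorizer.fetch_access_token!  # exchanges the service-account key for an access token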

I ran into this same problem using the Google API. I ended up using openssl to assign a new, very secret passphrase to the p12 file, storing that new file in the repo, and then putting the passphrase into the app secrets and into Heroku env variables.
This way the file is in the repo, but it can't be accessed/read without the passphrase.
This post was helpful in changing the default Google p12 passphrase from 'notasecret' to something secure.
def authorize!
  @client.authorization = Signet::OAuth2::Client.new(
    # ...
    :signing_key => key
  )
end

def key
  Google::APIClient::KeyUtils.load_from_pkcs12(key_path, ENV.fetch('P12_PASSPHRASE'))
end

def key_path
  "#{Rails.root}/config/google_key.p12"
end

Using Rails 7, I encrypted the JSON credentials like so
I first ran bin/rails credentials:edit -e development
Then added my credentials:
omniauth:
  google_oauth2:
    client_secrets: {"web":{"client_id":"my-client-id","project_id":"my-project-id","auth_uri":"https://accounts.google.com/o/oauth2/auth","token_uri":"https://oauth2.googleapis.com/token","auth_provider_x509_cert_url":"https://www.googleapis.com/oauth2/v1/certs","client_secret":"my-client-secret","redirect_uris":["http://localhost:3000/users/auth/google_oauth2/callback","http://localhost:3000/contacts/gmail/callback"]}}
Then I used it with ClientSecrets like so:
def client_secrets
  @client_secrets ||= Google::APIClient::ClientSecrets.new(client_secrets_json)
end

def client_secrets_json
  Rails.application.credentials.dig(:omniauth, :google_oauth2, :client_secrets)
end

Related

Passing a config file to Google Cloud Function using GitHub Actions and GitHub Secrets

I have a config.py file on my development machine that stores multiple dictionaries for my config settings.
config.py:
SERVICE_ONE_CREDENTIALS = {
    "CLIENT_ID": 000,
    "CLIENT_SECRET": "abc123"
}

SERVICE_TWO_CREDENTIALS = {
    "VERIFY_TOKEN": "abc123"
}
...
I recently set up this GitHub Action to automatically deploy changes pushed to the repository to a Google Cloud Function, and I ran into a problem when trying to copy this configuration file over, since the file is ignored by git because it stores sensitive credentials.
I've been trying to find a way to copy this file over to the Cloud Function but haven't been successful. I would prefer to stay away from environment variables because of the number of keys involved. I did look into using a key management service, but I first wanted to see if it would be possible to store the file in GitHub Secrets and pass it along to the function.
As a backup, I did consider encrypting the config file, adding it to the git repo, and storing the decryption key in GitHub Secrets. With that, I could decrypt the file in the Cloud Function before starting the app workflow. This doesn't seem like a great idea, though, but I'd be interested to hear if anyone has done this or what your thoughts on it are.
Is something like this possible?
If you encrypt the file and put it in the repo, at least it's not clear text and someone can't get to the secret without the private key (which of course you don't check in). I do something similar in my dotfiles repo, where I check in dat files with my secrets and the private key isn't checked in. The key would have to be a secret in Actions and written to disk to be used. It's a bit of machinery, but possible.
Using GitHub Secrets is a secure path because you don't check in anything; it's securely stored and we pass it JIT when it's referenced. Disclosure: I work on Actions.
One consideration with that is we redact secrets on the fly from the logs, but it's done one line at a time, so multiline secrets are not good.
So a couple of options ...
You can manage the actual secret (abc123) as a GitHub secret and echo the config file out to a file with the secret substituted in. As you noted, you would have to manage each secret separately. IMHO that's not a big deal, since abc123 is actually the secret. I would probably lean toward that path.
Another option is to base64-encode the config file, store that as a secret in GitHub Actions, and echo it base64-decoded to a file. Don't worry, base64 isn't a security mechanism here; it's a transport to get the file onto a single line, and if it accidentally leaks into the logs (via the command line you run), the base64 version (which could easily be decoded) will be redacted from the logs.
There are likely other options, but I hope that helped.
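To make the base64 option concrete, here is the round trip sketched in Ruby purely as an illustration (in a real workflow the encode/decode would more likely be a shell step, and CONFIG_B64 is a made-up secret name):
require 'base64'

if ENV['CONFIG_B64']
  # Deploy side: decode the secret back into the config file before the app runs.
  File.write('config.py', Base64.decode64(ENV['CONFIG_B64']))
else
  # Local side: encode the file once and paste the output into a GitHub secret.
  puts Base64.strict_encode64(File.read('config.py'))
end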

Custom dynamic inventory scripts/plugins in Ansible

Ansible allows devs to write programs (in any language) that will return JSON describing the dynamic “snapshot” of current hosts. I'm using vSphere, which is currently not supported by Ansible OSS, and so I need to write such a "custom inventory plugin".
I can handle the querying of vSphere for a list of hosts, as well as constructing the JSON that is compatible with what Ansible is expecting.
Where the documentation completely (seemingly) falls flat is:
How do I “connect” Ansible with my inventory app? That is, say my inventory app is a simple bash script (inventory.sh): how do I configure Ansible to call bash inventory.sh and obtain JSON from it? In reality the app will likely be a Java executable (inventory.jar), but I figure that if I can get it working with bash, I can extrapolate to Java; and
How does Ansible actually capture/fetch the JSON back from the app? STDOUT? Is this all supposed to happen over an HTTP connection? Examples? How does inventory.sh or inventory.jar communicate that JSON back to Ansible?
The inventory script has to be located on the same machine where Ansible runs. It does not communicate over HTTP; Ansible simply parses the STDOUT of your program. The location does not matter at all; you just pass the path to Ansible when you invoke it:
ansible-playbook ... -i /path/to/your/inventory.sh
To avoid passing the inventory location every time, you could add this to your ansible.cfg:
inventory = /path/to/your/inventory.sh
You could also copy the script to /etc/ansible/hosts, which is the default location Ansible will look for inventory files/scripts, but I prefer to keep things together, so I suggest placing it close to your playbooks/roles etc.
And (3): Is any of this documented anywhere? I don't see anything in the Ansible docs...
It is not mentioned on the page Developing Dynamic Inventory Sources, but it can be seen in some examples on the page Dynamic Inventory. The docs are community managed and at times a little unstructured and lacking important information.
BTW, there is a VMware inventory script included. Looking at the source, I can see it imports some vSphere stuff. I have little experience with VMware, so I can't judge whether it is actually what you need, which would save you from having to write your own.
This is completely user defined. Typically you would write your dynamic inventory in Python and use a JSON dump of the output to create the inventory.
Here is an example for the use case you mentioned (vSphere): https://github.com/RaymiiOrg/ansible-vmware/blob/master/query.py
In a nutshell, you create it like a normal Python script, define the options (as he does in main), and selectively execute functions based on which options are passed. These make REST calls and return their output as a JSON dump, which Ansible can parse and use as inventory.
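The protocol itself is language-agnostic (the question mentions bash and Java), so purely as an illustration here is a minimal inventory script sketched in Ruby; the group, hosts, and vars are made up, and a real script would query vSphere instead:
#!/usr/bin/env ruby
require 'json'

# Hypothetical static data; a real script would build this from an API query.
INVENTORY = {
  'webservers' => {
    'hosts' => ['web1.example.com', 'web2.example.com'],
    'vars'  => { 'ansible_user' => 'deploy' }
  },
  '_meta' => { 'hostvars' => { 'web1.example.com' => { 'rack' => 'a1' } } }
}

case ARGV[0]
when '--list'   # full inventory as JSON on STDOUT
  puts JSON.pretty_generate(INVENTORY)
when '--host'   # variables for a single host
  puts JSON.generate(INVENTORY['_meta']['hostvars'].fetch(ARGV[1], {}))
else
  warn 'Usage: inventory.rb --list | --host <hostname>'
  exit 1
end
You would then point Ansible at it the same way as above, e.g. ansible-playbook -i /path/to/inventory.rb your_playbook.yml; as long as the inventory path is executable, Ansible runs it as a script and parses its STDOUT.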

Getting a path of accessed script in Dart

The aim is to create a config file for server-side Dart application, which can be imported as needed into scripts like so:
import 'Config/config.dart';
What is crucial for this config script, however, is that it knows its own location when being accessed via an import (not the location of the file accessing it). Currently it uses the line below to find the path:
final String ROOT_DIR = dirname(Platform.script.toFilePath());
However, this returns the file path of the file importing it and not the file that is being imported. This needs to be known purely for working out a relative path to the root folder, which will allow other absolute paths to be known in the config file (such as routes, controllers, and things), like so:
final String PUBLIC_DIR = join(ROOT_DIR, 'Public');
final String VIEWS_DIR = join(ROOT_DIR, 'Views');
What would be the best way of approaching this? I have seen this post: Get script path in Dart (analog __DIR__ constant in PHP), which is the same sort of situation; however, I can't see a clean way of using relative paths to find the root folder.
Probably missing something really obvious, but can't see it at the moment. Any help would be much appreciated, thank you for reading.
This is not supported in Dart.
Maybe Mezoni found some kind of trick to get this information, which he packed into his caller_info package: https://stackoverflow.com/a/24880092/217408.
You can't rely on the path where the files are stored during development.
Currently it is only experimental but when you run dart2dart on your code, all or many parts of the code are inlined.

How to store response of rails app to filesystem

I would like to generate a bunch of static pages with my Rails app, which will then be stored on the filesystem to be part of a build step in a Yeoman webapp.
The file types would be JSON and HTML.
Therefore I would like to know the best solution for this problem: fetching the site with Nokogiri or something similar, transforming it to a string, and putting it into a file? Or maybe writing a rake task that runs curl and then puts the output into a file?
Or is there something built-in that can handle this type of problem?
Update:
I guess I have to make my goal clearer: I would like to build a website generator that can export webpages and JSON to the local file system. In order to get fast response times and to use my existing build process, I would like to generate those files rather than serve them via Rails.
Not sure what you're trying to do, but you might consider something like Middleman. It's meant to generate static files, but you can still do Ruby/Rails-like magic, etc.
http://middlemanapp.com/
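For what it's worth, a minimal sketch of the rake-task idea from the question, assuming the Rails app is already serving at localhost:3000; the page list, output directory and task name are illustrative:
# lib/tasks/export.rake
require 'net/http'
require 'fileutils'

namespace :export do
  desc 'Fetch rendered pages and store them as static files'
  task :pages do
    pages = {
      'index.html'      => 'http://localhost:3000/',
      'data/items.json' => 'http://localhost:3000/items.json'
    }
    pages.each do |relative_path, url|
      target = File.join('build', relative_path)
      FileUtils.mkdir_p(File.dirname(target))          # create build/ subfolders as needed
      File.write(target, Net::HTTP.get(URI(url)))      # fetch the rendered response body
    end
  end
end
Run it with bin/rake export:pages while the server is up; the resulting build/ directory can then feed the existing Yeoman build step.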

How can I add file locations to a database after they are uploaded using a Perl CGI script?

I have a CGI program I have written in Perl. One of its functions is to upload pics to the server.
All of it is working well, including adding all kinds of info to a MySQL db. My question is: how can I get the uploaded pic files' locations and names added to the db?
I would rather do that than change the script to actually upload the pics into the db; I have heard horror stories about storing binary files in databases.
Since I am new to all of this, I am at a loss. I have tried doing research and web searches for 3 weeks now with no luck. Any suggestions or answers would be greatly appreciated. I would really hate to have to manually add all the locations/names to the db.
I am using a Perl CGI script, a MySQL db, and a Linux server, and the files are being uploaded to the server. I AM NOT looking to add the actual files to the db, just their location(s).
It sounds like your method is complete up to the point where you take the upload, make it a string, and toss it into MySQL, much like reading a file in as a string. However, since CGI gives you a filehandle rather than a filename to read from, you are wondering where that file actually is.
If you're using CGI.pm, then upload, uploadInfo, the param for the upload, and the private upload files will help you deal with the uploaded file sources. Where they are stashed after the remote client and the CGI script are done usually isn't permanent, and at a minimum is volatile.
You've got a bunch of uploaded files that need to be added to the db? It should be trivial to dash off a one-off script that loops through all the files and inserts the details into the DB. If they're all in one spot, a simple opendir()/readdir() loop would catch them all; otherwise you can make a list of file paths and loop over that.
If you're talking about recording new uploads on the server, then it would be something along these lines (see the sketch below):
user uploads file to server
script extracts any wanted/needed info from the file (name, size, mime-type, checksums, etc...)
start database transaction
insert file info into database
retrieve ID of new record
move uploaded file to final resting place, using the ID as its filename
if everything goes fine, commit the transaction
Using the ID as the filename solves the worries of filename collisions and new uploads overwriting previous ones. And if you store the uploads somewhere outside of the site's webroot, then the only access to the files will be via your scripts, providing you with complete control over downloads.
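The question is Perl/MySQL, but the flow above is language-agnostic; purely as an illustration, here is the same "insert first, then use the new row's ID as the filename" idea sketched in Ruby with SQLite (gem, table and column names are hypothetical):
require 'sqlite3'
require 'fileutils'

db = SQLite3::Database.new('uploads.db')
db.execute <<~SQL
  CREATE TABLE IF NOT EXISTS uploads (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    original_name TEXT, mime_type TEXT, size INTEGER, stored_path TEXT
  )
SQL

def record_upload(db, tmp_path, original_name, mime_type)
  stored_path = nil
  db.transaction do                                   # rolls back on any failure
    db.execute('INSERT INTO uploads (original_name, mime_type, size) VALUES (?, ?, ?)',
               [original_name, mime_type, File.size(tmp_path)])
    id          = db.last_insert_row_id
    stored_path = File.join('uploads', id.to_s)       # ID as filename: no collisions
    FileUtils.mkdir_p('uploads')
    FileUtils.mv(tmp_path, stored_path)               # move out of the webroot
    db.execute('UPDATE uploads SET stored_path = ? WHERE id = ?', [stored_path, id])
  end
  stored_path
end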