Passing a config file to a Google Cloud Function using GitHub Actions and GitHub Secrets

I have a config.py file on my development machine that stores multiple dictionaries for my config settings.
config.py:
SERVICE_ONE_CREDENTIALS = {
    "CLIENT_ID": 000,
    "CLIENT_SECRET": "abc123"
}
SERVICE_TWO_CREDENTIALS = {
    "VERIFY_TOKEN": "abc123"
}
...
I recently set up this GitHub Action to automatically deploy changes pushed to the repository to a Google Cloud Function, and ran into a problem when trying to copy this configuration file over, since the file is ignored by git because it stores sensitive credentials.
I've been trying to find a way to copy this file over to the Cloud Function but haven't been successful. I'd prefer to stay away from individual environment variables because of the number of keys involved. I did look into key management services, but I first wanted to see whether it's possible to store the file in GitHub Secrets and pass it along to the function.
As a backup, I did consider encrypting the config file, adding it to the git repo, and storing the decryption key in GitHub Secrets. With that, I could decrypt the file in the Cloud Function before starting the app workflow. This doesn't seem like a great idea, though; I'd be interested to hear if anyone has done this, or what your thoughts on it are.
Is something like this possible?

If you encrypt the file and put it in a repo, at least it's not clear text, and someone can't get to the secret without the private key (which of course you don't check in). I do something similar in my dotfiles repo, where I check in encrypted .dat files with my secrets and the private key isn't checked in. That key would have to be a secret in Actions and written to disk to be used. It's a bit of machinery, but possible.
Using GitHub Secrets is a secure path because you don't check in anything; it's securely stored, and we pass it just-in-time when it's referenced. Disclosure: I work on Actions.
One consideration with that: we redact secrets from the logs on the fly, but it's done one line at a time, so multiline secrets are not handled well.
So a couple of options ...
You can manage the actual secret (abc123) as a GitHub secret and echo the config file, with the secret filled in, out to a file during the workflow. As you noted, you have to manage each secret separately. IMHO that's not a big deal, since abc123 is the actual secret. I would probably lean toward that path; see the sketch below.
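For instance, a workflow step along these lines (a sketch; the secret names are assumptions, so register whichever names suit you in the repo's settings):

# writes config.py at deploy time from individually managed secrets
- name: Write config.py
  run: |
    cat > config.py <<EOF
    SERVICE_ONE_CREDENTIALS = {
        "CLIENT_ID": ${{ secrets.SERVICE_ONE_CLIENT_ID }},
        "CLIENT_SECRET": "${{ secrets.SERVICE_ONE_CLIENT_SECRET }}"
    }
    SERVICE_TWO_CREDENTIALS = {
        "VERIFY_TOKEN": "${{ secrets.SERVICE_TWO_VERIFY_TOKEN }}"
    }
    EOF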
Another option is to base64-encode the config file, store that as a secret in GitHub Actions, and echo the base64-decoded content to a file, as sketched below. Don't worry, base64 isn't a security mechanism here. It's a transport encoding that gets the file onto a single line, and if it accidentally leaks into the logs (via the command line you run), the base64 version of it (which could easily be decoded) will be redacted from the logs.
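A sketch of that flow, assuming you name the secret CONFIG_PY_B64:

# locally: emit a single-line base64 string and paste it into the CONFIG_PY_B64 secret
# (-w0 disables line wrapping in GNU base64; macOS base64 emits one line by default)
base64 -w0 config.py

# in the workflow, before the deploy step:
- name: Restore config.py
  run: echo "${{ secrets.CONFIG_PY_B64 }}" | base64 --decode > config.py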
There are likely other options, but I hope that helped.


How do I manage updating "static" data in an Angular app without rebuilding the entire app again?

I have an Angular app where I am loading some static data from a file under assets.
urlDataFile = './assets/data.json';
data: any = null;

constructor(
  private _http: HttpClient
) {
  this.loadData().subscribe((res) => {
    this.data = res;
  });
}

private loadData() {
  return this._http.get(this.urlDataFile);
}
This works absolutely fine for me for truly static data.
When I build my app for distribution, the data gets packaged into the app.
However, once deployed, I want to be able to publish an updated data file, just (manually) dropping a single updated file into the deployment location.
In an ideal world, I would like to develop with a dummy or sample data file (to be held in source control etc.), to exclude that file from deployment, and once deployed, to push a new data file into the deployed app.
What is the standard convention for accomplishing this?
What you have there should work just fine.
There are two ways of doing JSON stuff: one is what you're doing, dynamically requesting the file; the other is to literally import dataJson from './assets/data.json' directly in your code.
The second option, I was surprised to find out, actually gets compiled into your code, so the JSON values become a literal part of your compiled app files (e.g. main.js).
So yours is good for not being part of your app bundle, and it will request that file on every app load (or whenever you tell it to).
That means it will load your local debug file in development, because that's what's there, and the prod file when deployed; it's just requesting a file, after all.
What I foresee you needing to contend with are two things:
Live updates
Unless your app keeps requesting the file periodically, it won't magically get new data from any new file that you push. Until someone hits F5 or freshly browses to the site, they won't see that new data.
Caching
Even if you periodically check that file for new data, you need to handle the fact that browsers try to be nice and cache files for you to make things quicker. I guess this would be handled with various cache headers and things that I know exist but have never had to touch in detail myself.
Otherwise, the browser will just return the old, cached data.json instead of actually going and retrieving the new one.
After that, though, I can't see anything wrong with doing what you're doing.
Slap your file request in an interval, put no-cache headers on the file itself, and... good enough, probably? Something like the sketch below.
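A minimal sketch of that idea, inside the component from the question (the 60-second interval and header value are arbitrary choices):

import { HttpClient } from '@angular/common/http';
import { interval } from 'rxjs';
import { startWith, switchMap } from 'rxjs/operators';

// Re-request data.json every 60s (starting immediately) and hint to caches
// along the way that we want a fresh copy.
interval(60_000).pipe(
  startWith(0),
  switchMap(() => this._http.get(this.urlDataFile, {
    headers: { 'Cache-Control': 'no-cache' }
  }))
).subscribe((res) => { this.data = res; });

The request header is only a hint; the dependable half of this is serving the file itself with no-cache response headers.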
You are already following the right convention, which is to fetch the data with the HTTP client rather than importing the file.
Now you can just gitignore the file, and replace it in a deployment step or whatever suits you.
Just watch out for caching. You might want to add a dummy query string with a time-based value, so that the server sends a fresh file, tuned to how often you expect to update it.
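For instance (a sketch; the v parameter name is arbitrary, and the changing value is what defeats the cache):

private loadData() {
  return this._http.get(`${this.urlDataFile}?v=${Date.now()}`);
}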

How do I add client-secret.json to my Heroku app? [duplicate]

I'm building a Rails app that pulls data from Google Analytics using the Google API Client Library for Ruby.
I'm using OAuth2 and can get everything working in development on a local machine. My issue is that the library uses a downloaded file, client_secrets.json, to store two secret keys.
Problem: I'm using Heroku and need a way to get the file to their production servers.
I don't want to add this file to my github repo as the project is public.
If there were a way to temporarily add the file to git, push to Heroku, and then remove it from git, that would be fine. My sense, though, is that the keys would remain in the commit history and be very hard to keep from showing on GitHub.
Tried:
As far as I can tell, you cannot SCP a file to Heroku via a Bash console. I believe doing this gives you a new dyno, and anything you add would only be temporary. I tried this but couldn't get SCP to work properly, so I'm not 100% sure about it.
Tried:
I looked at storing the JSON file in an environment/config var, but couldn't get it to work. This seems like the best way to go, if anyone has a thought. I often run into trouble when Ruby converts JSON into a string or hash, so possibly I just need guidance there.
Tried:
Additionally, I've tried to figure out a way to just pull the keys out of the JSON file, put them into config vars, and add the JSON file to git. I can't figure out a way to put ENV["KEY"] in a JSON file, though.
Example Code
The Google library has a method that loads the JSON file to create an authorization client. The client then fetches a token (or gives a authorization url).
client_secrets = Google::APIClient::ClientSecrets.load('client_secrets.json')
auth_client = client_secrets.to_authorization
** Note that the example on Google's page doesn't show a filename, because it uses a default ENV var that's been set to a path.
I figure this would all be a lot easier if the ClientSecrets.load() method would just take JSON, a string or a hash which could go into a Config Var.
Unfortunately it always seems to want a file path. When I feed it JSON, a string or hash, it blows up. I've seen someone get around the issue with a p12 key here, but I'm not sure how to replicate that in my situation.
Haven't Tried:
My only other thought (aside from moving to AWS) is to put the JSON file on AWS and have Rails pull it when needed. I'm not sure if this can be done on the fly or if the file would need to be pulled down when the Rails server boots. It seems like too much work, but at this point I've spent a few hours on it, so I'm ready to attempt it.
This is the specific controller I am working on:
https://github.com/dladowitz/slapafy/blob/master/app/controllers/welcome_controller.rb
By searching GitHub, I found that someone had used a different method that takes a JSON string as an argument rather than a file path: Google::APIClient::ClientSecrets.new(JSON.parse(ENV['GOOGLE_CLIENT_SECRETS']))
This lets me wrap up the JSON into an ENV VAR. The world makes sense again.
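For anyone replicating this, the config var can be set straight from the file with heroku config:set GOOGLE_CLIENT_SECRETS="$(cat client_secrets.json)" (assuming the var name matches whatever your code reads).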
As discussed in this thread, rather than supplying a path to a JSON key file, you can set three ENV variables instead:
GOOGLE_ACCOUNT_TYPE=service_account
GOOGLE_PRIVATE_KEY=XXX
GOOGLE_CLIENT_EMAIL=XXX
Source here.
I ran into this same problem using the Google API. I ended up using openssl to assign a new, very secret passphrase to the p12 file, storing that new file in the repo, and then putting the passphrase into the app secrets and Heroku env variables.
This way, the file is in the repo, but it can't be accessed/read without the passphrase.
This post was helpful in changing the default Google p12 passphrase from 'notasecret' to something secure.
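The gist of the openssl step (a sketch; the filenames are illustrative, and you'll be prompted for 'notasecret' by the first command and for your new passphrase by the second):

# unpack the Google-issued key, then re-export it under a new passphrase
openssl pkcs12 -in google-original.p12 -nodes -out key.pem
openssl pkcs12 -export -in key.pem -out google_key.p12
rm key.pem  # don't leave the unencrypted key lying around

The key is then loaded with the new passphrase: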
def authorize!
  @client.authorization = Signet::OAuth2::Client.new(
    # ... other OAuth client options ...
    :signing_key => key
  )
end

def key
  Google::APIClient::KeyUtils.load_from_pkcs12(key_path, ENV.fetch('P12_PASSPHRASE'))
end

def key_path
  "#{Rails.root}/config/google_key.p12"
end
Using Rails 7, I encrypted the JSON credentials like so
I first ran bin/rails credentials:edit -e development
Then added my credentials:
omniauth:
  google_oauth2:
    client_secrets: {"web":{"client_id":"my-client-id","project_id":"my-project-id","auth_uri":"https://accounts.google.com/o/oauth2/auth","token_uri":"https://oauth2.googleapis.com/token","auth_provider_x509_cert_url":"https://www.googleapis.com/oauth2/v1/certs","client_secret":"my-client-secret","redirect_uris":["http://localhost:3000/users/auth/google_oauth2/callback","http://localhost:3000/contacts/gmail/callback"]}}
Then I used it with ClientSecrets like so:
def client_secrets
  @client_secrets ||= Google::APIClient::ClientSecrets.new(client_secrets_json)
end

def client_secrets_json
  Rails.application.credentials.dig(:omniauth, :google_oauth2, :client_secrets)
end

Does this JSON configuration workflow have a name?

I have a system where we collect a lot of JSON configuration from different parties to configure our overall service.
The repository looks like a directory of formatted JSON files. For example, foo.json:
{
  "id": "3bd0e397-d8cc-46ff-9e0d-26fa078a37f3",
  "name": "Example",
  "logo": "https://example/foo.png"
}
We have a pipeline whereby the owner of foo.json can overwrite this file by committing a new file at any time, since fast updates are required.
However, we unfortunately need to skip whole files or override some values, for various $reasons.
Hence we commit something like touch foo.json.skip when we want the file to be skipped before publishing. Similarly, we might add a foo.json.d/override.json to override, say, the logo, because it's poorly formatted or something.
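Concretely, the publish step behaves roughly like this (a Python sketch; the shallow-merge semantics are just how our pipeline happens to work):

import json
from pathlib import Path

def effective_config(path: Path):
    """Resolve one config file, honouring .skip markers and .d drop-in overrides."""
    # A sibling marker file suppresses the whole config.
    if path.with_name(path.name + ".skip").exists():
        return None
    config = json.loads(path.read_text())
    # systemd-style drop-in directory: later files shallowly override earlier keys.
    dropin = path.with_name(path.name + ".d")
    if dropin.is_dir():
        for override in sorted(dropin.glob("*.json")):
            config.update(json.loads(override.read_text()))
    return config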
Is there a name for this sort of JSON pipeline that we have? It's inspired by systemd configuration, but maybe systemd configuration was inspired by something else?

Upload files under same CID to IPFS

I'm looking to create an NFT project with 10k pieces, where each piece should become available as soon as its token is minted. I therefore want to upload each JSON object to IPFS under the same hash, as I've seen in other projects.
This means that when the first item is minted, a new file will be uploaded to:
ipfs://<CID>/1
The second minting will create token 2, and then a new file will be uploaded to:
ipfs://<CID>/2
How can this be done with the IPFS or Pinata API?
Wrap it into a .car file; see the Web3.Storage guide "How to Work With Car Files".
Update: I just reread the last part of the question.
I found this in the Pinata docs (https://docs.pinata.cloud/api-pinning/pin-file):
wrapWithDirectory - Wrap your content inside of a directory when adding to IPFS. This allows users to retrieve content via a filename instead of just a hash. For a more detailed explanation, see this informative blogpost. Valid options are: true or false
I'm pretty sure that you can do this with ipfs add /PATH/TO/CONTENT/* -w
I'm still exploring with IPFS, but this sounds like what you are looking for.

How can I get the path of a file?

:image => StorageRoom::Image.new_with_filename(path)
I have to get the path of the image. So far I have specified the path manually and it worked, but now that I have put it on Heroku it shows LoadError - no such file present.
How can I get the file's path on the local system from the browse button?
Your problem may not be related to path names, but to the fact that Heroku has a read-only file system. If you try to write files onto disk in a Heroku app, it simply doesn't work: the file will not be saved.
The exception is the "temp" directory. You can save files there, but they are not guaranteed to persist for longer than the duration of a single request.
Is the file you are trying to open actually saved in your Git repo? If so, it will be on the disk in your Heroku app, and you should be able to open it.
To see what the filesystem layout looks like on your Heroku instance, you can create a controller method like:
render :inline => Dir['**/*'].inspect
Use File.expand_path to resolve a file's absolute path.
Reference: http://saaridev.blogspot.com/2006/11/ruby-finding-absolute-path-of-running.html
You don't need the full path. As far as the file path on the client machine is concerned, it's irrelevant for file uploads, and exposing it poses security risks for the user.
Most modern browsers don't send the file path for file uploads. You could get the path using JavaScript or Flash, but I still don't see the logic behind doing this.
When the user clicks the submit button, the browser sends at least the file name along with the file data, together with a bunch of other information like the MIME type. Your web server would either write the file to disk or process it in memory (assuming you have near-infinite memory resources). Look at RFC 1867 on form-based file upload for more on this.