How do I disable the creation of the artifacts-[build-no].json file?

When I build a Maven project from GitHub using Cloud Build (producing jar files in a bucket), an extra file is uploaded to my bucket that lists what was built (artifacts-[build-no].json). The file has a unique name for every build, so the bucket fills up with loads of unwanted files. Is there a way to disable the creation of that file?

I think the JSON is only generated when using the artifacts field, such as:
artifacts:
  objects:
    location: 'gs://$PROJECT_ID/'
    paths: ['hello']
You could manually push to a bucket in a step with the gsutil cloud builder, without the helper syntax. This would avoid creating the JSON file.
https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/gsutil
# Upload it into a GCS bucket.
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', 'gopath/bin/hello', 'gs://$PROJECT_ID/hello']
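For reference, a minimal cloudbuild.yaml along those lines might look like this (a sketch only; the mvn package step and the target/*.jar path are assumptions about this particular project):

steps:
# Build the jar(s) with the Maven builder.
- name: 'gcr.io/cloud-builders/mvn'
  args: ['package']
# Copy the jar(s) to the bucket in a plain step instead of declaring
# an artifacts: section, so no artifacts-[build-no].json is written.
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', 'target/*.jar', 'gs://$PROJECT_ID/']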

Related

How do I inject a large configuration file into AWS Elastic Beanstalk?

You can add files into .ebextensions/ alongside the .config file, but I want to keep them outside the build itself, and saved configurations don't allow extra files. Should I upload the config to S3 separately?
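Pulling the file from S3 at deploy time is one common pattern; here is a sketch of an .ebextensions config that does this (the bucket URL and target path are hypothetical, and a private bucket additionally needs an S3 authentication resource declared):

files:
  # Fetched onto each instance at deploy time, keeping the file
  # out of the application bundle itself.
  "/etc/myapp/large-config.json":
    mode: "000644"
    owner: root
    group: root
    source: https://my-bucket.s3.amazonaws.com/large-config.json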

How can I get read-the-docs to ignore paths to local files that are accessed within a repo?

I have built a Python module to access internal data files that can be accessed on multiple systems, as we have mirrors of our data release. I use a config.py file to help identify all the paths. Many of the scripts access this path info, but I don't see a reason why readthedocs needs to build it. How can I get it to ignore these paths?
There are many other modules that do other things with the data, and I have found read-the-docs to be a nice reference for new users. Unfortunately, my readthedocs builds have been failing for ages as a result of trying to find some of the local files.
https://readthedocs.org/projects/hetdex-api/builds/18207723/
FileNotFoundError: [Errno 2] No such file or directory: '/home/docs/checkouts/readthedocs.org/user_builds/hetdex-api/checkouts/latest/docs/hdr3/survey/amp_flag.fits'
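One common workaround (a sketch, assuming the paths are resolved at import time in config.py): Read the Docs sets the READTHEDOCS environment variable in its builds, so the file access can be skipped there and a placeholder used instead. The variable name below is real; AMP_FLAG_PATH and the example path are hypothetical stand-ins for the module's own names.

# config.py (sketch)
import os

# Read the Docs sets READTHEDOCS=True in its build environment.
ON_RTD = os.environ.get('READTHEDOCS') == 'True'

if ON_RTD:
    # Placeholder so the module still imports during doc builds.
    AMP_FLAG_PATH = None
else:
    # Hypothetical example; the real config resolves mirror paths here.
    AMP_FLAG_PATH = '/data/hdr3/survey/amp_flag.fits'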

##[error]Error: NO JSON file matched with specific pattern: **/appsettings.json

I'm trying to deploy an Azure App Service using Azure DevOps.
I'm using the Azure App Service deploy task, version 4.*.
I recently started noticing the following error in the log, with the deployment failing (first seen on 24 September):
Applying JSON variable substitution for **/appsettings.json
##[error]Error: NO JSON file matched with specific pattern: **/appsettings.json.
In the pipeline I use the Extract files task to extract *.zip, then use the result to search for **/appsettings.json.
The same task was running fine until a few days ago.
I tried redeploying an old release that had succeeded earlier, but it now failed with the same error.
I double-checked: no changes were made to the pipeline recently that could break this.
How can I fix this?
Turns out my issue was not with the Azure App Service deploy task, but with the Extract files task.
A rough outline of my pipeline is below.
Before the fix:
Extract files
Deploy Azure App Service
The JSON variable substitution failed because the Extract files task could not find any *.zip files in the root folder and therefore extracted nothing. So there was no appsettings.json file in the folder structure at all.
The fix:
Update the Extract files task search pattern to **/*.zip.
Now my pipeline looks like this:
Extract files
Deploy Azure App Service
It now works fine for me.
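In YAML form, the corrected pair of tasks would look something like this (a sketch; the service connection, app name, and folder names are placeholders):

steps:
- task: ExtractFiles@1
  inputs:
    # Search recursively instead of only in the root folder.
    archiveFilePatterns: '**/*.zip'
    destinationFolder: '$(System.DefaultWorkingDirectory)/extracted'
- task: AzureRmWebAppDeployment@4
  inputs:
    azureSubscription: 'my-service-connection'  # placeholder
    WebAppName: 'my-app'                        # placeholder
    packageForLinux: '$(System.DefaultWorkingDirectory)/extracted'
    # JSON variable substitution now finds the extracted file.
    JSONFiles: '**/appsettings.json'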

JSON file to create a release pipeline with an Agent job and tasks to deploy a Web App via Azure App Service Deploy

How can I write a JSON file that creates a release pipeline with an Agent job containing the following tasks:
1) Download Build Artifacts task
2) Azure App Service Deploy task
3) File Transform Task
4) Azure SQL SqlTask
1) You can create a release with such a configuration via the UI, then export it. This generates and downloads a file with the JSON code to your local machine, and you can then examine its contents yourself.
2) Alternatively, switch to the History tab after you create the release via the UI. In History, you can also view its JSON code.
The configuration structure you want isn't really suitable to share here directly, so I'd recommend the above steps to build the JSON file yourself.
If the above doesn't satisfy you or isn't convenient, I may consider sharing the full JSON code here. :-)
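For orientation only, an exported definition roughly has this shape (heavily abbreviated and from memory, so treat the field names as assumptions; the taskId GUID that identifies each task type must come from your own export):

{
  "name": "My release pipeline",
  "environments": [
    {
      "name": "Stage 1",
      "deployPhases": [
        {
          "phaseType": "agentBasedDeployment",
          "name": "Agent job",
          "workflowTasks": [
            {
              "taskId": "00000000-0000-0000-0000-000000000000",
              "version": "4.*",
              "name": "Azure App Service Deploy",
              "enabled": true,
              "inputs": {}
            }
          ]
        }
      ]
    }
  ]
}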

Open a CSV file from S3 using Roo on Heroku

I'm putting together a rake task to grab a CSV file stored on S3 and do some processing on it. I use Roo to process the files. The product runs on Heroku, so I have no durable local storage for physical files.
The CSV processing I am running works perfectly from the website if I upload the file.
The rake task looks like this if I pass in just the contents of the file:
desc "Checks uploads bucket on S3 and processes for imports"
task s3_import: [:environment] do
# depends on the environemtn task to load dev/prod env first
# get an s3 connection
s3 = AWS::S3.new(access_key_id: ENV['AWS_ACCESS_KEY_ID'], secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'])
# get the uploads bucket
bucket = s3.buckets[ENV['UPLOAD_BUCKET']]
# check each object in the bucket
bucket.objects.each do |obj|
#uri = obj.url_for(:read)
#puts uri
puts obj.key
file = obj.read
User.import(file)
end
end
I have also tried passing just the URL of the object, but this does not work either.
Basically, when uploading through the page, this works to open the files:
Roo::Csv.new(file.path)
So I can get the URL of the file, and I can also get its contents into a variable, but I can't work out how to get Roo to open it without a physical file on disk.
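One way to bridge that gap (a sketch, assuming Heroku's filesystem is writable for the lifetime of the task, and that the import ultimately needs a file path the way the upload flow does): write each object's contents to a Tempfile and hand its path to Roo. Tempfile.create deletes the file when the block exits, so nothing accumulates on the dyno.

desc "Checks uploads bucket on S3 and processes for imports"
task s3_import: [:environment] do
  require 'tempfile'

  s3 = AWS::S3.new(access_key_id: ENV['AWS_ACCESS_KEY_ID'],
                   secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'])
  bucket = s3.buckets[ENV['UPLOAD_BUCKET']]

  bucket.objects.each do |obj|
    # Dyno storage is ephemeral but writable, so a short-lived temp
    # file is fine for the duration of the rake task.
    Tempfile.create(['s3_import', '.csv']) do |tmp|
      tmp.binmode
      tmp.write(obj.read)           # dump the S3 object's contents to disk
      tmp.flush
      csv = Roo::Csv.new(tmp.path)  # Roo now has a physical file to open
      User.import(csv)              # assumption: User.import can accept the parsed sheet
    end
  end
end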