AWS CLI Command - json

I'm trying to execute the following command using AWS CLI command -
aws s3 cp s3://my_bucket/folder/file_1234.txt - | sed ... | jq ... | aws s3 cp - s3://my_bucket/new_folder/final_file.txt
The above command works fine: it pulls data from S3, does some operations, and pushes the result back to S3.
Now, I have some files in S3 that follow a pattern, for instance file_771.txt, file_772.txt, file_773.txt, and so on.
To process all the files that match the pattern, I'm running the following command, which is not working as expected; it generates an empty output file in S3.
aws s3 cp --include "file_77*" s3://my_bucket/folder/ - | sed ... | jq ... | aws s3 cp - s3://my_bucket/new_folder/final_file.txt
This command produces an empty final_file.txt. Any reason? Am I missing something?

To copy multiple files at once you would have to use --recursive, in your case together with --exclude "*" --include "file_77*". However, as the aws s3 cp documentation notes:
Downloading as a stream is not currently compatible with the --recursive parameter.
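A possible workaround (a sketch, not part of the original answer): list the matching keys with aws s3 ls and stream each object individually, since streaming to stdout cannot be combined with --recursive. The sed and jq stages below are placeholders, and the loop assumes key names without spaces.
# Process each matching object and pipe the combined output into one upload.
for key in $(aws s3 ls s3://my_bucket/folder/ | awk '{print $4}' | grep '^file_77'); do
  aws s3 cp "s3://my_bucket/folder/$key" - | sed 's/foo/bar/' | jq '.'
done | aws s3 cp - s3://my_bucket/new_folder/final_file.txt
The whole loop's output is piped into a single upload, matching the original final_file.txt target.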

Related

Is it possible to specify a file name inside a .zip archive when compressing standard input to standard output using CLI tools?

I'm trying to find a way to specify a file name in a compressed .zip archive while compressing from standard input to standard output. I want to achieve this without creating a temporary file in the process.
Currently I have an example script which creates a mysqldump, passes the result as input to the zip command, and pipes the stream to the aws s3 command to save the result to S3.
Here is the example:
mysqldump ... | zip | aws s3 cp - s3://[bucket_name]/output.sql.zip
The problem is that zip, by default, saves the file inside the zip archive with the name "-".
Maybe there is a way to set a specific file name inside the archive using the zip command or some other zip tool?
Recent Linux distributions have a streamzip script, which I wrote to handle this use case. In the example below, the -member command-line option sets the member name within the streamed zip file to data.sql:
mysqldump ... | streamzip -member data.sql | aws s3 cp - s3://[bucket_name]/output.sql.zip
If the mysqldump output is larger than 4 GB, include the -zip64 option with streamzip.
If you don't have streamzip available, download it from https://github.com/pmqs/IO-Compress/blob/master/bin/streamzip

How to expand output in a GitLab CI/CD job?

I've set up a job that runs some PowerShell commands. One of them returns a JSON object.
However, when I open the job log I see only part of the object. How can I see the full object?
{#{productNo=1; onTarget=f944fb79-b39f-4936-b0b6-8eef3c802014; name=asdffgh-as…
Write the output to a file, then store the file as an artifact:
script:
  - your_command | Out-File -FilePath output.json
artifacts:
  paths:
    - output.json
See Using Out-File and Job artifacts.
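As a follow-up note (an assumption on my part, not from the original answer): if your_command actually returns a PowerShell object rather than a JSON string, which the #{...} rendering in the log suggests, piping it through ConvertTo-Json -Depth 10 before Out-File should write the full structure to the artifact instead of PowerShell's truncated table view.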

How to download logs created by an ECS container and make them look the old-fashioned way, without the JSON?

My good old application writes logs that are captured by AWS CloudWatch Logs.
However, it is ugly to read them trapped inside JSON. Can I get them in raw form?
Install jq (a C application without dependency hell) from your favourite package repository (or from GitHub).
Then download the logs and extract the raw messages with:
# Set these first:
# profile=...
# lgn=...   (log group name)
# lsn=...   (log stream name)
aws --profile "$profile" logs get-log-events \
  --log-stream-name "$lsn" --log-group-name "/$lgn" \
  | jq --raw-output '.events[] | .message'
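If you don't know the log stream name, a sketch like this (same profile and log group assumptions as above) prints the most recently active stream:
# List the single most recently active stream in the group.
aws --profile "$profile" logs describe-log-streams --log-group-name "/$lgn" \
  --order-by LastEventTime --descending --max-items 1 \
  | jq --raw-output '.logStreams[].logStreamName'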

Script file to change property names in AWS S3

I have more than 1000 files in an AWS S3 bucket, in different folders; all of them are JSON files. These JSON files have 30 properties, and now I have to change the names of 2 properties (e.g. code to httpCode and time to responseTime). Can we write a script that changes these property names in all files?
Note: you should run the sed command without the -i switch first, just to verify that you are getting the desired results; -i edits the files in place. Only add the -i switch once the output looks right.
# Get the files from the S3 bucket
aws s3 sync s3://mybucket .
# Matching the quoted key names avoids touching "code"/"time" inside values or other keys
find . -iname "*.json" -type f -exec sed -i 's/"code":/"httpCode":/g;s/"time":/"responseTime":/g' {} \;
# Sync the files back to S3 from the current local directory
aws s3 sync . s3://mybucket
PS: this is untested.
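Since the files are JSON, a jq-based variant may be safer than plain sed, because it only renames top-level keys and cannot touch occurrences of "code" or "time" inside values. This is just a sketch and assumes flat JSON objects:
aws s3 sync s3://mybucket .
find . -iname "*.json" -type f -print0 | while IFS= read -r -d '' f; do
  # Rename the two top-level keys; everything else passes through unchanged.
  jq 'with_entries(if .key == "code" then .key = "httpCode"
      elif .key == "time" then .key = "responseTime" else . end)' "$f" > "$f.tmp" \
    && mv "$f.tmp" "$f"
done
aws s3 sync . s3://mybucket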

How to move from a GitLab source-based install to GitLab Omnibus?

I am trying to move gitlab-ce 8.5 installed from source to gitlab-ce 8.15 Omnibus. We were using MySQL in the source-based install, but now we have to use psql with gitlab-ce Omnibus. When I was trying to take a backup, it kept failing due to some empty repos.
Question: Is there any alternative way to move from the source-based install to Omnibus with a full backup?
I have moved GitLab from a source-based install to Omnibus. You can use the link below to convert the DB dump from MySQL to psql:
https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/update/mysql_to_postgresql.md
I created a zip file of the repos manually, copied it to the GitLab Omnibus server, and restored it under /var/opt/gitlab/git-data/repositories/.
After these steps, copy the script below to /var/opt/gitlab/git-data/xyz.sh and execute it to update the hooks.
#!/bin/bash
# Run from /var/opt/gitlab/git-data; repoints each repo's hooks to the
# gitlab-shell hooks shipped with Omnibus.
for i in repositories/* ; do
  if [ -d "$i" ]; then
    for o in "$i"/* ; do
      if [ -d "$o" ]; then
        rm "$o/hooks"
        # change the paths if required
        ln -s "/opt/gitlab/embedded/service/gitlab-shell/hooks" /var/opt/gitlab/git-data/"$o"/hooks
        echo "HOOKS CHANGED ($o)"
      fi
    done
  fi
done
Note: the repos' ownership should be git:git.
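For example, something like this on the Omnibus server (the path is the one used in the steps above) sets that ownership recursively:
sudo chown -R git:git /var/opt/gitlab/git-data/repositories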
Some useful commands during the migration:
sudo gitlab-ctl start postgres (starts only the Postgres service)
sudo gitlab-psql (opens a psql session against the bundled Postgres)
Feel free to comment if you face 5xx error codes on the GitLab page.