How to deserialize a Riak backup into JSON?

I have just dumped a Riak DB (backup), but the backup file is binary.
Is there a library that deserializes it into a human-readable format (JSON or whatever)?
I haven't found anything on Google or on Stack Overflow.

Found a solution for my current problem:
Connect to the environment and then run the following commands:
wget https://s3-us-west-2.amazonaws.com/ps-tools/riak-data-migrator-0.2.9-bin.tar.gz
tar -xvzf riak-data-migrator-0.2.9-bin.tar.gz
cd riak-data-migrator-0.2.9
java -jar riak-data-migrator-0.2.9.jar -d -r /var/riak_export -a -h 127.0.0.1 -p 8087 -H 8098
(source: https://github.com/basho-labs/riak-data-migrator)
EDIT
Another way to export a Riak DB: https://www.npmjs.com/package/riak-bucket-exporter
#!/bin/bash
# Export every bucket via riak-bucket-exporter, writing one <bucket>.json file per bucket.
for bucket in $(curl http://localhost:8098/riak?buckets=true | sed -e 's/[{}:"]//gi' -e 's/buckets\[//' -e 's/\]//' -e 's/,/ /g')
do
  echo "Exporting bucket $bucket"
  rm -f "$bucket.json"
  riak-bucket-exporter -H localhost -p 8098 "$bucket"
done
echo "Export done"

As all the suggestions listed so far appear to be broken in one way or another (at least for me and riak-kv 2.x), I ultimately resorted to rolling my own bash script that leverages riak-kv's HTTP API, with no prerequisites other than curl and jq, to accomplish an export of sorts.
It can be found in this gist: https://gist.github.com/cueedee/0b26ec746c4ef578cd98e93c93d2b6e8. Hopefully someone will find it useful.
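For reference, here is a minimal sketch of the same approach, assuming a riak-kv node listening on localhost:8098, buckets holding JSON values, and curl and jq on the path; the linked gist is more robust, and the output layout (one <bucket>.json file, one document per line) is just an illustration:
#!/bin/bash
# Export every bucket/key/value reachable via riak-kv's HTTP API.
RIAK=http://localhost:8098
for bucket in $(curl -s "$RIAK/buckets?buckets=true" | jq -r '.buckets[]')
do
  echo "Exporting bucket $bucket"
  : > "$bucket.json"
  curl -s "$RIAK/buckets/$bucket/keys?keys=true" | jq -r '.keys[]' | while read -r key
  do
    curl -s "$RIAK/buckets/$bucket/keys/$key" >> "$bucket.json"
    echo >> "$bucket.json"
  done
done
Note that listing keys with ?keys=true is expensive on a busy cluster, so treat this strictly as an offline export helper.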

Related

Pass flags to the Sphinx runner?

So I've got the following project, OpenFHE-development, and when I run the build process there are lots of warnings. However, most of these warnings are fine to ignore (we vet them before pushing to the main branch).
Specifically, is there a way to take
pth/python -m sphinx -T -E -b readthedocssinglehtmllocalmedia -d _build/doctrees -D language=en . _build/localmedia
and convert it to
pth/python -m sphinx -T -E -b readthedocssinglehtmllocalmedia -d _build/doctrees -D language=en . _build/localmedia 2> errors.txt
(i.e. redirect stderr to a file instead of having it displayed in the terminal)?
This does not seem to be possible at the moment. See the GitHub discussion.

Switch user on Google Compute Engine Startup Script

I pass the following as my GCE startup script, but it always runs as the root user and never as demo-user. How do I fix it?
let startupScript = `#!/bin/bash
su demo-user
WHO_AM_I=$(whoami)
echo WHO_AM_I: $WHO_AM_I &>> debug.txt
cd..`
I think it should work like this:
#! /bin/bash
sudo -u demo-user bash -c 'WHO_AM_I=$(whoami);
echo WHO_AM_I: $WHO_AM_I &>> debug.txt;'
Use "sudo -u" to specify the user, then bash -c '...' with all the commands between those single quotes and separated by ";".
For example: bash -c 'command1; command2;'
You can try an easier test (it worked for me), for example:
#! /bin/bash
sudo -u demo-user bash -c 'touch test.txt'
And then check with ls -l test.txt that demo-user is the owner of the new file.
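Applied back to the original startup script, a corrected version might look like the sketch below (the /tmp/debug.txt path is an assumption; startup scripts run as root with no particular working directory, so an absolute path is safer than a bare debug.txt):
#!/bin/bash
# Everything inside the quoted block runs as demo-user instead of root;
# a bare `su demo-user` only opens a new shell and does not apply to the lines after it.
sudo -u demo-user bash -c '
  WHO_AM_I=$(whoami)
  echo "WHO_AM_I: $WHO_AM_I" >> /tmp/debug.txt
'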

The system cannot find the file specified while importing JSON data into a Couchbase bucket

I apologize if my question does not meet the standards for asking here.
I have two files(products-data.json, orders-data.json) inside the following directory:
G:\kb\Couchbase\CB121
and I imported the products-data.json successfully using the following command:
G:\kb\Couchbase\CB121>cbimport.exe json -c couchbase://127.0.0.1 -u sattar -p 156271 -b sampleDB -f lines -d file://products-data.json -t 4 -g %type%::%variety%::#MONO_INCR#
But when I try to import orders-data.json in the same way as follows:
G:\kb\Couchbase\CB121>cbimport.exe json -c couchbase://127.0.0.1 -u sattar -p 156271 -b sampleDB -f lines -d file://orders-data.json​ -t 4 -g ​%type%::%order_id%
I am getting the following error:
2018-01-21T12:01:31.211+06:00 ERRO: open orders-data.json: The system cannot find the file specified. -- jsondata.(*Parallelizer).Execute() at source.go:198
2018-01-21T12:01:31.212+06:00 ERRO: open orders-data.json: The system cannot find the file specified. -- plan.(*data).execute() at data.go:89
Json import failed: open orders-data.json: The system cannot find the file specified.
This is killing my day. Any help is appreciated. Thanks.
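One thing worth checking (a guess based on the pasted command rather than a confirmed diagnosis): the error repeats the filename with what looks like an invisible character right after orders-data.json, so a non-printing character such as a zero-width space may have been pasted into the command. Assuming a GNU userland is available (e.g. Git Bash on Windows), saving the failing command line to a file and scanning it for non-ASCII bytes will reveal it:
# A zero-width space is the UTF-8 byte sequence e2 80 8b.
grep -nP '[^\x00-\x7F]' cbimport-command.txt
# Or inspect the raw bytes directly (the sequence may straddle a line boundary):
xxd -g1 cbimport-command.txt | grep 'e2 80 8b'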

MySQL Auto Backup on Ubuntu Server

After months of trying to get this to happen, I found a shell script that will get the job done.
Here's the code I'm working with:
#!/bin/bash
### MySQL Server Login Info ###
MUSER="root"
MPASS="MYSQL-ROOT-PASSWORD"
MHOST="localhost"
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"
BAK="/backup/mysql"
GZIP="$(which gzip)"
### FTP SERVER Login info ###
FTPU="FTP-SERVER-USER-NAME"
FTPP="FTP-SERVER-PASSWORD"
FTPS="FTP-SERVER-IP-ADDRESS"
NOW=$(date +"%d-%m-%Y")
### See comments below ###
### [ ! -d $BAK ] && mkdir -p $BAK || /bin/rm -f $BAK/* ###
[ ! -d "$BAK" ] && mkdir -p "$BAK"
DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
for db in $DBS
do
FILE=$BAK/$db.$NOW-$(date +"%T").gz
$MYSQLDUMP -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
done
lftp -u $FTPU,$FTPP -e "mkdir /mysql/$NOW;cd /mysql/$NOW; mput /backup/mysql/*; quit" $FTPS
Everything is running great; however, there are a few things I'd like to fix but am clueless about when it comes to shell scripts. I'm not asking anyone to write it, just some pointers. First of all, the /backup/mysql directory on my server stacks up files every time it backs up. Not too big of a deal, but after a year of nightly backups it might get a little full, so I'd like it to clear that directory after uploading. Also, I don't want to overload my hosting service with files, so I'd like it to clear the remote server's directory before uploading. Lastly, I would like it to upload to a subdirectory on the remote server, such as /mysql.
Why reinvent the wheel? You can just use Debian's automysqlbackup package (it should be available on Ubuntu as well).
As for cleaning old files, the following command might be of help:
find /mysql -type f -mtime +16 -delete
Uploading to the remote server can be done using the scp(1) command; to avoid the password prompt, read about SSH public key authentication.
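As a rough sketch of how those two suggestions could slot into the script above (the remote host, user, and 16-day retention are placeholders, and key-based SSH authentication is assumed to be set up already):
# Prune local dumps older than 16 days so /backup/mysql does not grow forever.
find /backup/mysql -type f -mtime +16 -delete
# Create the dated directory on the remote side, then copy the dumps over SSH instead of FTP.
ssh backupuser@backup.example.com "mkdir -p /mysql/$NOW"
scp /backup/mysql/*.gz backupuser@backup.example.com:/mysql/"$NOW"/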
Take a look at Backup; it allows you to model your backup jobs using a Ruby DSL and is very powerful.
It supports multiple DBs and most popular online storage services, and has lots of cool features.

Tshark - Export packet info from pcap to csv

I am trying to programmatically capture a stream of packets by using Tshark. The simplified terminal command I am using is:
tshark -i 2 -w output.pcap
This is pretty straightforward, but I then need to get a .csv file in order to easily analyze the information captured.
By opening the .pcap file in Wireshark and exporting it as .csv, what I get is a file structured as follows:
"No.","Time","Source","Destination","Protocol","Length","Info"
but, again, I need to do this in an automatic way, so I tried using the command:
tshark -r output.pcap -T fields -e frame.number -e ip.src -e ip.dst -e frame.len -e frame.time -e frame.time_relative -E header=y -E separator=, > output.csv
but I cannot find anywhere the name of the "Info" field that I get when manually exporting the .csv.
Any ideas? Thanks!
Yes, you can if you use the latest Development Release.
See Wireshark Bug 2892.
Download the Development Release Version 1.9.0.
Use the following command:
$ tshark -i 2 -T fields -e frame.time -e col.Info
Output
Feb 28, 2013 20:58:24.604635000 Who has 10.10.128.203? Tell 10.10.128.1
Feb 28, 2013 20:58:24.678963000 Who has 10.10.128.163? Tell 10.10.128.1
Note: in -e col.Info, use a capital I.
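Putting that together with the fields from the question, a full export to CSV might look like the command below (this assumes a tshark build new enough to expose col.Info, as discussed above; -E quote=d wraps each field in double quotes so commas inside the Info column do not break the CSV):
tshark -r output.pcap -T fields \
  -e frame.number -e frame.time -e ip.src -e ip.dst -e frame.len -e col.Info \
  -E header=y -E separator=, -E quote=d > output.csv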
How about directly exporting the packets to a CSV file?
sudo tshark > fileName.csv