"The system cannot find the file specified" while importing JSON data into a Couchbase bucket

I apologize if my question does not meet the standards for asking here.
I have two files(products-data.json, orders-data.json) inside the following directory:
G:\kb\Couchbase\CB121
and I imported the products-data.json successfully using the following command:
G:\kb\Couchbase\CB121>cbimport.exe json -c couchbase://127.0.0.1 -u sattar -p 156271 -b sampleDB -f lines -d file://products-data.json -t 4 -g %type%::%variety%::#MONO_INCR#
But when I try to import orders-data.json in the same way as follows:
G:\kb\Couchbase\CB121>cbimport.exe json -c couchbase://127.0.0.1 -u sattar -p 156271 -b sampleDB -f lines -d file://orders-data.json -t 4 -g %type%::%order_id%
I am getting the following error:
2018-01-21T12:01:31.211+06:00 ERRO: open orders-data.json: The system cannot find the file specified. -- jsondata.(*Parallelizer).Execute() at source.go:198
2018-01-21T12:01:31.212+06:00 ERRO: open orders-data.json: The system cannot find the file specified. -- plan.(*data).execute() at data.go:89
Json import failed: open orders-data.json: The system cannot find the file specified.
It is killing my day. Any help is appreciated. Thanks.
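One thing worth ruling out, as a sketch rather than a confirmed fix: an invisible character (such as a zero-width space) picked up when copy-pasting the file name, since the first import used the identical syntax and worked. Retyping the failing command by hand, with an absolute path to the file, would rule that out; this assumes cbimport accepts an absolute path in the file:// URI the same way it accepts a relative one:
G:\kb\Couchbase\CB121>cbimport.exe json -c couchbase://127.0.0.1 -u sattar -p 156271 -b sampleDB -f lines -d file://G:\kb\Couchbase\CB121\orders-data.json -t 4 -g %type%::%order_id%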

Related

Exporting data from MySQL docker container

I use the official MySQL Docker image, and I am having difficulty exporting data from the instance without errors. I run my export like this:
docker run -it --link containername:mysql --rm mysql sh -c 'exec mysqldump -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD" dbname' | gzip > output.sql.gz
However, this results in the warning:
"mysqldump: [Warning] Using a password on the command line interface can be insecure."
This appears as the first line of the output file. Obviously this later causes problems for any other MySQL processes that consume the dump.
Is there any way to suppress this warning from the mysqldump client?
A little late to answer but this command saved my day.
docker exec CONTAINER /usr/bin/mysqldump -u root --password=root DATABASE > backup.sql
I realise that this is an old question, but for those stumbling across it now, I put together a post about exporting and importing from MySQL Docker containers: https://medium.com/@tomsowerby/mysql-backup-and-restore-in-docker-fcc07137c757
It covers the "Using a password on the command line interface..." warning and how to bypass it.
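For reference, a minimal sketch of the config-file approach (the file name and path here are assumptions): mysqldump can read credentials from a file passed with --defaults-extra-file, which keeps the password off the command line entirely. With a file such as /etc/mysql/backup.cnf inside the container containing:
[client]
user=root
password=yourpassword
the dump can then run without -p and without triggering the warning:
docker exec CONTAINER sh -c 'exec mysqldump --defaults-extra-file=/etc/mysql/backup.cnf dbname' > backup.sql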
Run the following command in a terminal:
docker exec CONTAINER_id /usr/bin/mysqldump -uusername --password=yourpassword databasename > backup.sql
Replace CONTAINER_id, username, and yourpassword with the values specific to your configuration.
To get the container ID:
docker container ls
To eliminate this exact warning you can pass the password in the MYSQL_PWD environment variable, or use another connection method; see http://dev.mysql.com/doc/refman/5.7/en/password-security-user.html
docker run -it --link containername:mysql --rm mysql sh -c 'export MYSQL_PWD="$MYSQL_ENV_MYSQL_ROOT_PASSWORD"; exec mysqldump -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot dbname' | gzip > output.sql.gz
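To confirm the warning line is gone from the dump, a quick check of the first few lines of the compressed output (assuming the file name above):
zcat output.sql.gz | head -n 3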
Here's how I solved this to dump a mysql db into a file.
I created a dump-db.sh file with the content:
#!/bin/bash
# dump db from docker container
(docker exec -i CONTAINER_ID mysqldump -u DB_USER -pDB_PASS DB_NAME) > FILENAME.sql
To get the CONTAINER_ID, list the running containers: docker container list
Make the script executable:
chmod +x dump-db.sh
Run it:
./dump-db.sh
Remember to replace the placeholder CONSTANTS above with your own values.
I always create bash "tools" in my repo root with which I can repeat common tasks, such as database dumps. With bash you can also load your .env file, so your credentials live only in your .env file rather than in a script committed to the repo.
#!/bin/bash
# load .env
set -o allexport; . ./.env; set +o allexport
# setup
TIMESTAMP=$(date +%Y-%m-%d__%H.%M)
BACKUP_DIR="dockerfiles/db"
CONTAINER_NAME="cp-db"
# dump
docker exec "$CONTAINER_NAME" /usr/bin/mysqldump -u"$DB_USER" --password="$DB_PASSWORD" "$DB_NAME" > "$BACKUP_DIR/dump__$TIMESTAMP.sql"
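For illustration, a matching .env might look like this (the variable names come from the script above; the values are placeholders):
# .env -- kept out of version control
DB_USER=root
DB_PASSWORD=secret
DB_NAME=mydb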

How to deserialize a Riak backup into JSON?

I have just dumped (backed up) a Riak DB, but the backup file is a binary file.
Is there a library that deserializes it into a human-readable format (JSON or the like)?
I haven't found anything on Google or on Stack Overflow.
Found a solution for my current problem:
Connect to the environment and run the following commands:
wget https://s3-us-west-2.amazonaws.com/ps-tools/riak-data-migrator-0.2.9-bin.tar.gz
tar -xvzf riak-data-migrator-0.2.9-bin.tar.gz
cd riak-data-migrator-0.2.9
java -jar riak-data-migrator-0.2.9.jar -d -r /var/riak_export -a -h 127.0.0.1 -p 8087 -H 8098
(source: https://github.com/basho-labs/riak-data-migrator)
EDIT
Another way to export a Riak DB is https://www.npmjs.com/package/riak-bucket-exporter:
#!/bin/bash
for bucket in $(curl -s http://localhost:8098/riak?buckets=true | sed -e 's/[{}:"]//gi' -e 's/buckets\[//' -e 's/\]//' -e 's/,/ /g')
do
  echo "Exporting bucket $bucket"
  rm -f "$bucket.json"
  riak-bucket-exporter -H localhost -p 8098 "$bucket"
done
echo "Export done"
echo "Export done"
As all the suggestions listed so far appear to be broken in one way or another (at least for me, on riak-kv 2.x), I ultimately resorted to writing a homegrown bash script that leverages riak-kv's HTTP API, with no prerequisites other than curl and jq, to accomplish an export of sorts.
It can be found in this gist: https://gist.github.com/cueedee/0b26ec746c4ef578cd98e93c93d2b6e8. I hope someone will find it useful.
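For a rough idea of the approach, a minimal sketch along the same lines (this is not the linked gist; the host, port, and output layout are assumptions, and listing keys this way is expensive on large clusters):
#!/bin/bash
RIAK=http://localhost:8098
# walk every bucket, then every key, writing one file per object
for bucket in $(curl -s "$RIAK/buckets?buckets=true" | jq -r '.buckets[]'); do
  mkdir -p "export/$bucket"
  for key in $(curl -s "$RIAK/buckets/$bucket/keys?keys=true" | jq -r '.keys[]'); do
    curl -s "$RIAK/buckets/$bucket/keys/$key" > "export/$bucket/$key.json"
  done
done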

Mongoimport on Mac fails with variable in file path

I'm trying to import a json file into mongo. When I import the file with this line, it works:
mongoimport -d reps_development -c users --jsonArray --file ~/reps/scripts/mockUserData.json
The script uses an environment variable, $REPS_ROOT, which is set in my .bash_profile. This line fails:
mongoimport -d reps_development -c users --jsonArray --file $REPS_ROOT/scripts/mockUserData.json
I set $REPS_ROOT with the following command:
export REPS_ROOT="~/reps"
Any thoughts on why this isn't working? The error I get is:
file doesn't exist: ~/reps/scripts/mockUserData.json
Because the tilde is quoted in the assignment, it is stored literally: the value of REPS_ROOT is the five characters ~/reps. When Bash later expands $REPS_ROOT, it does not perform tilde expansion on the result, so mongoimport receives a path beginning with a literal ~. If the value of the variable contains a tilde or a relative path, you need to make sure it is expanded. For example:
mongoimport -d reps_development -c users --jsonArray --file $(cd $REPS_ROOT; pwd)/scripts/mockUserData.json
If you are on Linux, you can use $(readlink -f $REPS_ROOT) instead. Alternatively you can use $HOME instead of ~:
export REPS_ROOT="$HOME/reps"
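A quick shell demonstration of the difference (the echoed paths assume a home directory of /Users/you):
export REPS_ROOT="~/reps"
echo "$REPS_ROOT"    # prints: ~/reps  (tilde stored literally, never expanded)
export REPS_ROOT="$HOME/reps"
echo "$REPS_ROOT"    # prints: /Users/you/reps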

Creating custom DVD for RHEL 7 with kickstart

I am trying to create a custom CD/DVD to deploy RHEL 7 with a kickstart file. Here is what I did:
Edited isolinux.cfg (in the isolinux folder) and grub.cfg (in the EFI\BOOT folder).
Created the ISO using mkisofs.
But it is not working. Am I using the correct files and method?
Edit the ISO image and add the ks.cfg file that you have created.
Preferably, put the ks.cfg file inside a ks directory. More information can be found here.
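As a sketch of what the matching isolinux.cfg boot entry might look like with the kickstart file at ks/ks.cfg on the disc (the label, volume name, and paths here are assumptions; match them to your ISO):
label kickstart
  menu label Install RHEL 7 with kickstart
  kernel vmlinuz
  append initrd=initrd.img inst.stage2=hd:LABEL=RHEL-7.1\x20Server.x86_64 inst.ks=cdrom:/ks/ks.cfg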
You need to use the genisoimage command. Here is an example of what will work:
Add the kickstart file to your downloaded and exploded ISO.
Run this command from the directory containing the ISO contents and the kickstart file, and point the output at another location:
genisoimage -r -v -V "OEL6 with KS for OVM Manager" -cache-inodes -J -l -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o OEL6U6_OVM_Manager.iso /var/www/html/Template/ISO/
I found the way to create a custom DVD on the RHEL 7 documentation page.
Mount the downloaded image
mount -t iso9660 -o loop path/to/image.iso /mnt/iso
Create a working directory - a directory where you want to place the contents of the ISO image.
mkdir /tmp/ISO
Copy all contents of the mounted image to your new working directory. Make sure to use the -p option to preserve file and directory permissions and ownership.
cp -pRf /mnt/iso /tmp/ISO
Unmount the image.
umount /mnt/iso
Make sure your current working directory is the top-level directory of the extracted ISO image - e.g. /tmp/ISO/iso. Create the new ISO image using genisoimage:
genisoimage -U -r -v -T -J -joliet-long -V "RHEL-7.1 Server.x86_64" -Volset "RHEL-7.1 Server.x86_64" -A "RHEL-7.1 Server.x86_64" -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot -o ../NEWISO.iso .
Hope the answer is helpful.
I am editing my answer due to the comments posted. Here is a more comprehensive solution:
(A) You need to create the ISO properly. I found helpful information in this URL.
Here is the line that I actually ended up with, for my MBR/UEFI ISO creation:
mkisofs -U -A "<Volume Header>" -V "RHEL-7.1 x86_64" -volset "RHEL-7.1 x86_64" -J -joliet-long -r -v -T -x ./lost+found -o ${OUTPUT}/${HOST}.iso -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot -boot-load-size 18755 /dir/where/sources/for/ISO/are/located
Be careful with the -V parameter, as it has to match what the kernel has defined for inst.stage2. In the default grub.conf included in the boot disk, it is configured to be "hd:LABEL=RHEL-7.1\x20x86_64" which matches with the settings above.
(B) You need the correct EFI setup for RHEL 7. For some reason this has changed from RHEL 6, where you could just use /EFI/BOOT/BOOTX64.conf; it now uses /EFI/BOOT/grub.cfg. Common wisdom from the Red Hat manuals is to add the inst.ks= parameter to the kernel line. The grub.cfg that comes in the /EFI/BOOT directory of the RHEL 7 boot ISO actually uses the linuxefi parameter instead of the kernel one; I would guess they work the same. If you are including the KS file on the CD, this should get you there.
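For illustration, a sketch of what the modified /EFI/BOOT/grub.cfg entry might look like, assuming the kickstart file sits at the root of the disc as ks.cfg and the volume label matches the -V value above:
menuentry 'Install Red Hat Enterprise Linux 7.1 with kickstart' {
  linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-7.1\x20x86_64 inst.ks=cdrom:/ks.cfg quiet
  initrdefi /images/pxeboot/initrd.img
}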
Good Luck!

Tshark - Export packet info from pcap to csv

I am trying to programmatically capture a stream of packets by using Tshark. The simplified terminal command I am using is:
tshark -i 2 -w output.pcap
This is pretty straightforward, but I then need to get a .csv file in order to easily analyze the information captured.
By opening the .pcap file in Wireshark and exporting it as .csv, what I get is a file structured as follows:
"No.","Time","Source","Destination","Protocol","Length","Info"
but, again, I need to do this in an automated way. So I tried using the command:
tshark -r output.pcap -T fields -e frame.number -e ip.src -e ip.dst -e frame.len -e frame.time -e frame.time_relative -E header=y -E separator=, > output.csv
but I cannot find anywhere the field name for the "Info" column I get when exporting the .csv manually.
Any ideas? Thanks!
Yes, you can if you use the latest Development Release.
See Wireshark Bug 2892.
Download the Development Release Version 1.9.0.
Use the following command:
$ tshark -i 2 -T fields -e frame.time -e col.Info
Output
Feb 28, 2013 20:58:24.604635000 Who has 10.10.128.203? Tell 10.10.128.1
Feb 28, 2013 20:58:24.678963000 Who has 10.10.128.163? Tell 10.10.128.1
Note: use a capital I in -e col.Info.
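Putting this together with the command from the question, a sketch that should yield a CSV including the Info column (assuming a tshark build that supports col.Info; -E quote=d wraps each field in double quotes so commas inside Info do not break the CSV):
tshark -r output.pcap -T fields -e frame.number -e frame.time -e ip.src -e ip.dst -e frame.len -e col.Info -E header=y -E separator=, -E quote=d > output.csv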
How about directly exporting the packet summaries to a CSV file:
sudo tshark > fileName.csv