How to create a cblite2 file with a Python script - Couchbase

I was trying to use Couchbase in one of my Android applications. The app will have static data scraped from the web, so I want to generate a cblite2 file with my Python script, insert that data, and then use the cblite2 file in Android. I can load data from an existing file according to this. But how can I generate my initial cblite2 file?

You could use the cblite command line tool to create the database. There are a couple of ways to do the import. I'll describe what seems to me like the simplest way.
Have your script save the JSON documents to a directory. For this example, let's call the directory json-documents. Use the desired document ID as the base name, and .json as the extension. For example, if you want a document's ID to be "foo", the filename would be foo.json.
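For instance, a minimal Python sketch of that step (the documents dictionary here is hypothetical placeholder data):

import json
import os

# Hypothetical scraped data: maps the desired document ID to its body.
documents = {
    "foo": {"title": "First item", "rating": 5},
    "bar": {"title": "Second item", "rating": 3},
}

os.makedirs("json-documents", exist_ok=True)
for doc_id, body in documents.items():
    # The base name of each file becomes the document ID on import.
    with open(os.path.join("json-documents", doc_id + ".json"), "w") as f:
        json.dump(body, f)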
Then use the cblite cp command to import the documents into a new database:
cblite cp json-documents/ myNewDatabase.cblite2
You can use the other cblite subcommands to verify the import was successful.
List the document IDs:
cblite ls myNewDatabase.cblite2
Display the contents of a document:
cblite cat myNewDatabase.cblite2 someDocumentId

Related

Can I use input (file) to select a video AND load its corresponding JSON (with the same root name)?

I have a project with two files:
example.mp4
example.json
These files are in the same path. I need to select only the example.mp4 file and automatically load the corresponding example.json file.
I'm not sure if window.URL.createObjectURL will identify the final directory, etc.
Many Thanks.

Converting directory contents into a JSON file

I have some files with a .txt extension in directory A, and I need to generate a JSON file for that specific directory (A) from a different directory (B); the JSON file should include the name and location of all the files.
Do you know how I can do it with a bash script? (Then I want to use this json file for my data analysis).
I'm unsure how to do this entirely via a bash script, but if you have knowledge of Python, you could use the included os and json modules to list all the files in the directory and add them to a dictionary, then create the JSON string using json.dumps(). Once you have written the script, you can invoke it from the bash shell.
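For example, a minimal sketch along those lines (the directory path and output file name are assumptions):

import json
import os

directory = "A"  # hypothetical path to the directory whose files you want listed

files = {}
for name in os.listdir(directory):
    if name.endswith(".txt"):
        # Map each file name to its absolute location.
        files[name] = os.path.abspath(os.path.join(directory, name))

# Serialize the dictionary and write the JSON file (run this from directory B,
# or give an explicit output path).
with open("listing.json", "w") as f:
    f.write(json.dumps(files, indent=2))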

How can I pass multiple CSV files in a directory with the same column headers to a single REST API in JMeter and test with 1000 users

Test scenario: a folder contains multiple CSVs, and the columns are the same in all of them. I have to pass the CSV files one after another to a single REST API (GET call).
Each user (1000 users in total) should be assigned a set of records/rows from the CSV file currently in use.
I am new to JMeter and tried to find a solution using the CSV Data Set Config, but I realized I could not pass multiple CSV files using it.
I also saw the __CSVRead() function, but I could not pass the CSV file name dynamically using BeanShell scripting.
Can someone please help me with this?
The CSV file names in the folder can be read one by one using the Directory Listing Config plugin.
Depending on the nature of the CSV files, you might want to use either the __CSVRead() or __StringFromFile() function directly in your HTTP Request sampler; you don't need to go for any scripting.
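As a rough illustration, assuming the Directory Listing Config plugin is set to store the current file name in a variable named csvFile (the variable name and column index are assumptions for this sketch), the request parameters could reference:

${__CSVRead(${csvFile},0)}     reads column 0 of the current row
${__CSVRead(${csvFile},next)}     advances to the next row of the file

With this approach each thread picks up values from whichever file the plugin is currently serving, with no BeanShell required.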

ArangoDB: How to export collection to CSV?

I have noticed there is a feature in the web interface of ArangoDB which allows users to download or upload data as a JSON file. However, I find nothing similar for CSV export. How can an existing ArangoDB collection be exported to a .csv file?
If you want to export data from ArangoDB to CSV, you should use arangoexport. It is included in the full packages as well as the client-only packages; you will find it next to the arangod server executable.
Basic usage:
https://docs.arangodb.com/3.4/Manual/Programs/Arangoexport/Examples.html#export-csv
Also see the CSV example with AQL query:
https://docs.arangodb.com/3.4/Manual/Programs/Arangoexport/Examples.html#export-via-aql-query
Using an AQL query for a CSV export allows you to transform the data if desired, e.g. to concatenate an array to a string or unpack nested objects. If you don't do that, then the JSON serialization of arrays/objects will be exported (which may or may not be what you want).
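For reference, a basic collection export might look like this (the collection name, field list, database, and output directory are placeholders; note that a field list is required for CSV output):

arangoexport --type csv --collection myCollection --fields "_key,name,description" --output-directory "export" --server.database myDatabase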
The default Arango install includes the following file:
/usr/share/arangodb3/js/contrib/CSV_export/CSVexport.js
It includes this comment:
// This is a generic CSV exporter for collections.
//
// Usage: Run with arangosh like this:
// arangosh --javascript.execute <CollName> [ <Field1> <Field2> ... ]
Unfortunately, at least in my experience, that usage tip is incorrect. Arango team, if you are reading this, please correct the file or correct my understanding.
Here's how I got it to work:
arangosh --javascript.execute "/usr/share/arangodb3/js/contrib/CSV_export/CSVexport.js" "<CollectionName>"
Please specify a password:
Then it sends the CSV data to stdout. (If you wish to send it to a file, you have to deal with the password prompt in some way.)
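One way around the prompt, assuming your setup allows passing the password on the command line (an empty password is used here as a placeholder), is:

arangosh --server.password "" --javascript.execute "/usr/share/arangodb3/js/contrib/CSV_export/CSVexport.js" "<CollectionName>" > export.csv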

Make a searchable volume in MySQL

I need to put the contents of a Volume into a MySQL database, to be searchable via a web interface.
To get all the files/folders, I can do:
$ cd /Volumes/myVolume
$ find ./
Which will give me all I need to know.
If my MySQL table only has one column, called path, what would be the most efficient way to write all the paths to the table, given there are 1M+ paths?
Pipe the output of the command above to a file and then import the file.
Run the following:
find ./ > directorylisting.txt
Open the file and see how to import it into MySQL using one of the many import options available. The link daniph mentioned in the comment on your question has some pointers. You can use mysqlimport or the LOAD DATA INFILE statement to load this file into the table. Index it properly and you should be well away.
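A minimal sketch of that approach (the table name, column length, and index prefix length are assumptions; LOAD DATA LOCAL INFILE also requires local_infile to be enabled on the server):

CREATE TABLE paths (
  path VARCHAR(1024) NOT NULL,
  -- Prefix index: MySQL limits index key length, so index the first 255 chars.
  KEY idx_path (path(255))
);

LOAD DATA LOCAL INFILE 'directorylisting.txt'
INTO TABLE paths
LINES TERMINATED BY '\n'
(path);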