How to convert .db .sqlite3 file to hash/json using Ruby/RubyOnRails - json

Does anyone know of any way of reading/converting a .db/.sqlite3 file into a hash, JSON, or CSV on the fly in Ruby/Rails?
I know I could open it with the sqlite3 gem or with pg, but ideally I want to do it on the fly without fiddling with the db connections. The file will be provided by the end user, so I want to open it (convert it to something I can work with, such as a hash), read the bits of data I need from it, then map it and save it to our db.
Thanks in advance
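For reference, the usual approach is roughly the sketch below, assuming the sqlite3 gem: opening the file directly does not touch the Rails connection config, and with results_as_hash each row comes back as a hash that can be mapped or serialized to JSON. The file path, table, and columns are placeholders.

require "sqlite3"
require "json"

# Open the user-supplied file directly; no Rails database connection involved.
db = SQLite3::Database.new("/tmp/user_upload.sqlite3", readonly: true)
db.results_as_hash = true

# Discover the tables in the file, then pull the rows you need as hashes.
tables = db.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
           .map { |row| row["name"] }

rows = db.execute("SELECT * FROM #{tables.first}")  # => [{"id" => 1, ...}, ...]
json = rows.to_json                                 # hand this (or the hashes) to your mapping code

db.close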

Related

How to upload XML to MySQL in react with axios and nodejs

I am trying to upload an XML file to a MySQL server.
I have a React web app, and I am using Axios and NodeJS.
I was using the following statement to import the XML file into the product table directly from the workbench:
LOAD XML INFILE "C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/products.xml" INTO TABLE product ROWS IDENTIFIED BY '<Product>';
It worked fine.
Now I want to have a button that will upload a new XML file and replace the existing data in the table.
What I have tried so far is using the HTML file input element, grabbing the file from event.target.files[0], and sending the file object to the server with a POST request.
I am not really sure how to go on from here; I can't find a statement that can read the data out of the file object and import it into the SQL table.
Any ideas? What is the best way to go about it?
I figured out my problem: my site was deployed to Heroku.
Apparently ClearDB, Heroku's add-on SQL database, does not allow the use of LOAD XML INFILE / LOAD DATA INFILE, as stated here - https://getsatisfaction.com/cleardb/topics/load-data-local-infile.
What I ended up doing was converting the XML file to a JS object.
That solution presented a new problem: my XML file was around 3 MB, which summed up to over 12,000 rows to insert into the database.
MySQL does not allow inserting more than 1,000 rows in a single query.
I had to split the object into several chunks and loop through them, uploading each one by itself.
This process takes some time to execute, and I am sure there are better ways of doing it.
If anyone can shed some light on how best to go about it or provide an alternative, I would appreciate it.
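For anyone hitting the same limits, here is a rough sketch of the chunked-insert approach described above, assuming xml2js for the parsing and mysql2 for the inserts; the XML structure, table, and column names are invented for illustration.

const fs = require('fs').promises;
const xml2js = require('xml2js');
const mysql = require('mysql2/promise');

const CHUNK_SIZE = 1000; // stay under the per-query row limit mentioned above

async function importProducts(xmlPath) {
  const xml = await fs.readFile(xmlPath, 'utf8');
  const parsed = await xml2js.parseStringPromise(xml);
  const products = parsed.Products.Product; // adjust to match your XML structure

  const conn = await mysql.createConnection({ host: 'localhost', user: 'root', database: 'shop' });
  await conn.query('DELETE FROM product'); // replace the existing data

  for (let i = 0; i < products.length; i += CHUNK_SIZE) {
    const chunk = products.slice(i, i + CHUNK_SIZE);
    const rows = chunk.map(p => [p.name[0], p.price[0]]); // placeholder columns
    // Bulk insert: with mysql2, VALUES ? expands a nested array into one row per element.
    await conn.query('INSERT INTO product (name, price) VALUES ?', [rows]);
  }
  await conn.end();
}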

Export from Couchbase to CSV file

I have a Couchbase Cluster with only one node (let's call it localhost) and I need to export all the data from a very big bucket (let's call it XXX) into a CSV file.
Now this seems to be a pretty easy task but I can't find the way to make it work.
According to the (really bad) documentation on the cbtransfer tool from Couchbase, http://docs.couchbase.com/admin/admin/CLI/cbtransfer_tool.html, they say this is possible, but they don't explain it clearly. They just mention a flag if you want the transfer to occur in CSV format (?), but it is not working. Maybe someone who has already done this can give me a hand?
Using the documentation I've been able to get close to the result I want to obtain (a clean CSV file with all the documents in the XXX bucket) using this command:
/opt/couchbase/bin/cbtransfer http://localhost:8091 /path/to/export/output.csv -b XXX
But what I get is that /path/to/export/output.csv is actually a folder with a lot of folders inside and it is storing some kind of json metadata that can be used to restore the XXX bucket in another instance of Couchbase.
Has anyone been able to export data from a Couchbase bucket (Json documents) into a CSV file?
From looking at the documentation, you have to use a slightly different syntax to export to a CSV: http://docs.couchbase.com/admin/admin/CLI/cbtransfer_tool.html
It needs to look like so:
cbtransfer http://[localhost]:8091 csv:./data.csv -b default -u Administrator -p password
Notice the "csv:" before the name of the csv file.
I tested this and it does export a CSV. Just be forewarned that you need a relatively flat document structure for this to work really well, since JSON can obviously represent far more complex data structures than CSV (arrays, sub-documents, etc.) and cbtransfer will not unravel those. For example, if there is a sub-document, cbtransfer will represent it as a JSON doc on that row of the CSV.
So depending on what your document structure is, CSV may not be an ideal export format; it is a step backwards.

How to save JSON data to SD card in PhoneGap

We have an SQLite table. We have converted the db data into a JSON object. We need to save the JSON data to the mobile's SD card. Any file type, like .txt, Excel, or Word - any format is not a problem.
We need to save the JSON data to local phone memory or the SD card. Please guide me on how to save it. First, tell me whether it's possible or not.
Well, all you are asking is whether it is possible, so that's easy: yes. You said "phone memory" or SD card. I assume you mean persistent memory, like with WebSQL. Since JSON is just a string, you can insert it into a WebSQL table just fine. You could also, even more easily, store it in LocalStorage. Finally, if you want to use the file system, that's simple too; just be sure to add the File plugin.
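A rough sketch of two of those options, assuming cordova-plugin-file is installed and that data is the object already built from the SQLite rows (the file name and target directory are placeholders):

// Simplest: keep the JSON string in LocalStorage.
localStorage.setItem('exportedData', JSON.stringify(data));

// Or write it to external storage / the SD card with the File plugin.
document.addEventListener('deviceready', function () {
  var json = JSON.stringify(data);
  window.resolveLocalFileSystemURL(cordova.file.externalRootDirectory, function (dirEntry) {
    dirEntry.getFile('export.txt', { create: true }, function (fileEntry) {
      fileEntry.createWriter(function (writer) {
        writer.onwriteend = function () { console.log('Saved to ' + fileEntry.nativeURL); };
        writer.onerror = function (err) { console.error('Write failed', err); };
        writer.write(new Blob([json], { type: 'text/plain' }));
      });
    });
  });
});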

How can I add file locations to a database after they are uploaded using a Perl CGI script?

I have a CGI program I have written using Perl. One of its functions is to upload pics to the server.
All of it is working well, including adding all kinds of info to a MySQL db. My question is: How can I get the uploaded pic files location and names added to the db?
I would rather not change the script to actually upload the pics into the db; I have heard horror stories of storing binary files in databases.
Since I am new to all of this, I am at a loss. I have tried doing research and web searches for 3 weeks now with no luck. Any suggestions or answers would be greatly appreciated. I would really hate to have to manually add all the locations/names to the db.
I am using: a Perl CGI script, MySQL db, Linux server and the files are being uploaded to the server. I AM NOT looking to add the actual files to the db. Just their location(s).
It sounds like you have the part where you take the upload, make it a string, and toss it into MySQL, similar to reading a file in as a string. However, since CGI gives you a filehandle rather than a filename to read from, you are wondering where that file actually is.
If you're using CGI.pm, then upload, uploadInfo, the param for the upload field, and the upload's private temp files will help you deal with the uploaded file's source. Where the files are stashed after the remote client and the CGI are done is usually not permanent, and at a minimum it is volatile.
You've got a bunch of already-uploaded files that need to be added to the db? It should be trivial to dash off a one-off script that loops through all the files and inserts the details into the DB. If they're all in one spot, then a simple opendir()/readdir() type loop would catch them all; otherwise you can build a list of file paths and loop over that.
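For that backfill case, something along these lines would do it (a sketch assuming DBI/DBD::mysql and a single upload directory; the table, columns, and paths are made up):

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# One-off script: record every file already sitting in the upload directory.
my $dir = '/var/www/uploads';
my $dbh = DBI->connect('DBI:mysql:database=mysite', 'dbuser', 'dbpass',
                       { RaiseError => 1 });
my $sth = $dbh->prepare('INSERT INTO pictures (filename, path) VALUES (?, ?)');

opendir my $dh, $dir or die "Cannot open $dir: $!";
while (my $file = readdir $dh) {
    next if $file =~ /^\./;            # skip . and .. (and dotfiles)
    $sth->execute($file, "$dir/$file");
}
closedir $dh;
$dbh->disconnect;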
If you're talking about recording new uploads on the server, then it would be something along these lines:
user uploads file to server
script extracts any wanted/needed info from the file (name, size, mime-type, checksums, etc...)
start database transaction
insert file info into database
retrieve ID of new record
move uploaded file to final resting place, using the ID as its filename
if everything goes fine, commit the transaction
Using the ID as the filename solves the worries of filename collisions and of new uploads overwriting previous ones. And if you store the uploads somewhere outside of the site's webroot, then the only access to the files will be via your scripts, giving you complete control over downloads.
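A rough Perl sketch of the steps above, assuming CGI.pm for the upload and DBI for MySQL; the form field, table, columns, and target directory are placeholders:

use strict;
use warnings;
use CGI;
use DBI;
use File::Copy qw(move);

my $q    = CGI->new;
my $fh   = $q->upload('picture');                  # filehandle for the uploaded file
my $name = $q->param('picture');                   # client-side filename
my $type = $q->uploadInfo($fh)->{'Content-Type'};
my $tmp  = $q->tmpFileName($fh);                   # where CGI.pm stashed the upload

my $dbh = DBI->connect('DBI:mysql:database=mysite', 'dbuser', 'dbpass',
                       { RaiseError => 1, AutoCommit => 0 });
eval {
    $dbh->do('INSERT INTO pictures (orig_name, mime_type) VALUES (?, ?)',
             undef, $name, $type);
    my $id = $dbh->last_insert_id(undef, undef, 'pictures', 'id');

    # Final resting place outside the webroot, named after the row ID.
    move($tmp, "/var/uploads/$id") or die "move failed: $!";
    $dbh->commit;
};
if ($@) {
    $dbh->rollback;
    die "upload failed: $@";
}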

mysql db image convert to file

Hi, I am writing a converter from Oracle to MySQL.
In Oracle the images are stored in the db.
I want to read the content of each image and save it to the file system.
I suppose that I have to read the BLOB entry and create the file using PHP file commands (am I right?).
What about the image type? Should I save it as JPG (and what if the stored image is not a JPG)?
Any suggestions are welcome.
You can write the BLOB directly to a file on disk. You can leave the file extension off the name if you don't have that information somewhere in the db or the app. You could also deduce the content type by using the Unix file command if you really need to assign an extension.
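A rough PHP sketch of that idea, using PHP's finfo to sniff the content type from the bytes rather than shelling out to the Unix file command; the Oracle connection details, table, and output directory are placeholders:

<?php
// Fetch each BLOB from Oracle and write it straight to disk.
$conn = oci_connect('user', 'password', '//oracle-host/XE');
$stmt = oci_parse($conn, 'SELECT id, image FROM photos');
oci_execute($stmt);

$finfo   = new finfo(FILEINFO_MIME_TYPE);
$ext_for = ['image/jpeg' => 'jpg', 'image/png' => 'png', 'image/gif' => 'gif'];

while ($row = oci_fetch_assoc($stmt)) {
    $bytes = $row['IMAGE']->load();              // BLOB column comes back as an OCI-Lob object
    $mime  = $finfo->buffer($bytes);             // e.g. "image/jpeg"
    $ext   = $ext_for[$mime] ?? 'bin';
    file_put_contents("/var/exported_images/{$row['ID']}.{$ext}", $bytes);
}

oci_free_statement($stmt);
oci_close($conn);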