couchdb file to json

I have a CouchDB file and I'd like to convert it to JSON data.
So far everything I've found on the internet has been about dumping a running server's database into a JSON file.
I tried putting the file in CouchDB\var\lib\couchdb after installing the server, but I couldn't find a guide on how to do this properly.

To load a .couch file, you need to make sure you are loading it into the same version of CouchDB that the file was created with. I'm not sure how to determine that from the .couch file itself, but perhaps you can figure it out from wherever the file came from. Then copy it to /var/lib/couchdb and make sure it has the same owner and permissions as the other .couch files in there. Probably just doing
chown couchdb:couchdb *.couch
will be enough. Then restart CouchDB, probably with:
sudo /etc/init.d/couchdb restart
That should load it into your CouchDB instance, and at that point you can use one of the dump-to-JSON approaches you mentioned.
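Putting those steps together as a minimal sketch (the database and file names are placeholders; the dump assumes CouchDB's default port and its standard _all_docs endpoint):
sudo cp mydb.couch /var/lib/couchdb/
sudo chown couchdb:couchdb /var/lib/couchdb/mydb.couch
sudo /etc/init.d/couchdb restart
# dump every document, bodies included, as JSON over the HTTP API
curl 'http://127.0.0.1:5984/mydb/_all_docs?include_docs=true' > mydb.json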

Related

How do I modify the default graphdb.home directory?

I have installed GraphDB Free v9.3 on Linux Mint 19.3.
The workbench is running fine, though I haven't created any repositories yet. That's because I have noticed that although the application is installed at /opt/graphdb-free, the data, conf and log files live in a hidden folder under my home folder: /home/ianpiper/.graphdb/conf (etc.).
I would prefer to store these folders on a separate volume, mounted at /mnt/bigdata. The documentation suggests that I can set graphdb.home using the graphdb.properties file (though I don't seem to have such a file in my installation) or in the startup script. I think this script might be /opt/graphdb-free/app/bin/setvars.in.sh, and that I could use it to change
-Dgraphdb.home=""
to
-Dgraphdb.home="/mnt/bigdata"
Could a knowledgeable person advise as to whether my understanding is correct, and if so what the best way is to change the location of graphdb.home?
Thanks,
Ian.
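For what it's worth, a sketch of the change described above, assuming setvars.in.sh really is where the JVM flags are set (the mount point and username are the ones named in the question):
sudo mkdir -p /mnt/bigdata
sudo chown ianpiper: /mnt/bigdata    # or whichever user runs GraphDB
# then in /opt/graphdb-free/app/bin/setvars.in.sh change
#   -Dgraphdb.home=""
# to
#   -Dgraphdb.home="/mnt/bigdata"
# and restart GraphDB so it recreates data, conf and logs under the new home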

Apache Cassandra: unable to load a CQL file

I'm just starting out with Apache Cassandra. I have some CQL files that define my data. I have Cassandra installed on my machine and I started it as per the Apache Cassandra wiki. Nothing suspicious!
I'm using the CLI to create the keyspaces and tables, for which I have some CQL files in a specific directory:
create_tables.cql
load_tables.cql
I was able to run create_tables.cql successfully, but when I try to run load_tables.cql, I always end up seeing:
/Users/myUser/data/load-test-data.cql:7:Can't open 'test_data.csv' for reading: [Errno 2] No such file or directory: 'test_data.csv'
load_tables.cql refers to another CSV file containing the test data I want to populate my database with:
COPY test_table (id, name) FROM 'test_data.csv';
I tried all sorts of permissions on the data folder where the CQL files are, but I keep getting this message. Any hints as to how I could get this solved?
OK, I got this one sorted! It had to do with absolute versus relative paths: I ended up using an absolute path to where my CSV is located, and that solved the issue.
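The underlying cause: cqlsh resolves a relative path in COPY against its own working directory, not against the location of the .cql file. A sketch of both fixes (paths taken from the error message above; the keyspace is omitted for brevity):
# either run cqlsh from the directory that holds the CSV...
cd /Users/myUser/data && cqlsh -f load_tables.cql
# ...or put an absolute path in the COPY statement itself
cqlsh -e "COPY test_table (id, name) FROM '/Users/myUser/data/test_data.csv';"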

How do you open a remote sqlite database over http?

Is it possible to open an SQLite file over HTTP? I only need to read the db, and was hoping I could do something like:
var dbFile:File = new File("http://10.1.1.50/project/db.sqlite");
sqlConnection.open(dbFile);
Instead, the open fails with:
Error #3125: 'Unable to open the database file.', details:'Connection closed.', operation:'open', detailID:'1001'
My situation calls for several apps compiled for various devices to share this file, which is served locally via wamp.
Zip your SQLite file from db.sqlite to db.zip, load the zip file in Flex using URLLoader, and unzip it back in Flex.
Alternatively, rename the file's extension to .xml, load it using HTTPService or URLLoader, and once you get the result, rename it back to .sqlite; you can then query the file and it will work just fine.
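The server-side half of the zip suggestion is a one-liner (the directory is hypothetical; use whatever folder wamp serves /project from):
cd /path/to/wamp/www/project
zip db.zip db.sqlite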
There is no way you can achieve this over HTTP.
SQLite is a file, not a service/process that listens on a port.
The best-case scenario is when you have network access to the computer where the sqlite file is stored, like:
\\myserver\databases\mysqlitefile.db
...but this may only work on Windows :(
You can adapt your code to use modsqlite, an Apache module that allows remote SQLite access via HTTP:
http://modsqlite.sourceforge.net/#using

Correct PHP file upload permissions

I have developed a download/upload manager script.
When I upload a file via the POST method, it is stored in a folder called files; the files folder sits inside another folder called download-manager.
It seems that for files uploaded via POST, chmod 0666 is enough for renaming and deleting them, but the download-manager folder and the files folder need to be chmod 0777 for this to work. Can someone tell me if this is dangerous?
1) I got a deny all in .htaccess so nobody can access the files directory via a browser
2) the upload script is protected by a username and password, which the person who uses the script will obviously change, so basically only admins can upload, rename, edit and delete files and the records in the MySQL database.
When a file is uploaded, a record is added to the database with information like file type, file name and file size, and the unique id (auto-incremented by MySQL) is appended to the URL for process.php, which fetches the file from the directory without revealing its mime type and so on. process.php basically checks whether the record and file exist and, if so, forces the download of that file.
So the download URL looks like www.mydomain.com/process.php?file=57, and a check is done to make sure that id exists in the database and that a file exists with the file name stored against that id.
All this works fine when the file is uploaded via a form using the POST method. But I also added a manual upload: people who want to upload a file larger than their webhost allows can upload it via an FTP program, for example, and then add the filename and file details themselves via a form in the admin area to link the record with the file. The problem then is permissions: if the file was uploaded via FTP (or any other way), the PHP script cannot rename or delete it later, because the script does not have the right privileges. From what I gather, the only option is to tell the people who use the script to chmod the file to 0777; I think that would make it work?
But then I have the problem of 0777 also being executable. The script allows any file type to be uploaded, since it is a download/upload manager, but I am slightly confused by all this permissions lark and what I should actually be doing. Since PHP is limited by the max upload size set by the host, I want to keep the manual upload so users can upload by another method and assign the file to the database record, but then, as stated, I get a problem when the PHP script needs to rename or delete the file.
I have built the script to detect such problems and notify the user, but I would like the script to do all (or nearly all) of the leg work, without the manual having to state that the admin must chmod the file to 0777 whenever they want the script to rename or delete it. I also don't know whether just chmodding the file to 0777 would actually let the PHP script rename and delete it, and security is a further concern.
UPDATED
OK, thanks. So chown the file before chmodding it on upload?
Do I just call chown() on the file and nothing else, and that will make it owned by the server process and make it private? I see you have
chown apache:apache '/path/to/files' ;
Do I need to add the apache:apache bit?
I did think of a simpler solution: if an admin does a manual upload, tell them they will have to rename/delete the file manually if needed in the future, because the script won't have the right permissions to do so. The manual-upload script can just update the db record to keep it linked to the file, so there are no file permission worries.
Simply put: the user renames the file manually (via FTP, for example) from myfile.zip to somefile.zip, then edits the db record for that file and changes the filename from myfile.zip to somefile.zip. That way everything stays linked, with no permission issues to worry about. I have also been reading that chown() does not always work, or cannot be relied on, for whatever reason.
1) I got a deny all in .htaccess so nobody can access the files directory via a browser
Store your files in a separate folder, away from the directory structure that houses your PHP files.
As far as the permissions on the directory are concerned, there are three ways to go about setting up the permissions on the folder:
Make it world-writable (chmod 0777 '/path/to/files/')
This is not recommended, as it has major security implications, especially on a non-dedicated server; anyone who has an account on the server, or who can get a process on it to write to that folder, will be able to change its contents.
Make it temporary (chmod 1777 '/path/to/files/')
This also carries a security concern, but less so than option 1: users can only delete or rename the files they own, not everyone else's.
Make it owned by the server process and make it private (chown apache:apache '/path/to/files' ; chmod 0700 '/path/to/files')
This is arguably the best solution.
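Spelled out (apache:apache is an assumption; substitute whatever user and group your web server or PHP process actually runs as, e.g. www-data:www-data on Debian/Ubuntu):
chown apache:apache /path/to/files
chmod 0700 /path/to/files
# files uploaded over FTP are owned by the FTP user, not the web server user,
# which is exactly why the PHP script can't rename or delete them afterwards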
Just relax & enjoy; on many shared hostings it's the only possible solution anyway.
There is another option: ask the user for their FTP password and use FTP to copy the files from tmp, like WordPress does. But I think that's even less secure.

How to copy the contents of an FTP directory to a shared network path?

I need to copy the entire contents of a directory at an FTP location to a shared network location. The FTP Task has you specify an exact file name (not a directory), and the File System Task does not allow accessing an FTP location.
EDIT: I ended up writing a script task.
Nothing like reviving a really old thread... but there is a solution to this.
To copy all the files from a directory, specify your remote path as /[directory name]/*
Or, for just the files and not subdirectories, /[directory name]/.
Or, for specific file types, /[directory name]/*.csv
I've had some similar issues with the FTP task before. In my case, the file names changed based on the date and some other criteria. I ended up using a Script Task to perform the FTP operation.
It looks like this is what you ended up doing as well. I'd be curious if anyone else can come up with a better way to use the FTP task. It's nice to have...but VERY limited.
When I need to do this sort of thing, I use a batch file that calls FTP on the command line and uses the mget command, then call that batch file from the DTS/DTSX package.
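A sketch of that batch approach (host, credentials, paths and the drive mapping are all placeholders):
rem get_files.bat -- called from the DTS/DTSX package
rem -i turns off per-file prompting for mget, -n suppresses auto-login,
rem -s:ftpcmd.txt runs the commands listed in that file
ftp -i -n -s:ftpcmd.txt ftp.example.com

rem ftpcmd.txt contains:
rem   user myuser mypass
rem   binary
rem   lcd Z:\dest        (Z: mapped to the shared network location)
rem   cd /remote/dir
rem   mget *
rem   bye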