How can I use the '--symlink' option in OpenGrok?

I'm not sure how to use the --symlink option in OpenGrok, so I'm asking.
OpenGrok's source root folder is '/opengrok/src'.
In this folder, I created a symbolic link with the following command:
ln -s /home/A/workspace/tmp tmp
Then I ran indexing with the following command:
java -Djava.util.logging.config.file=/opengrok/etc/logging.properties -jar /opengrok/dist/lib/opengrok.jar -c /usr/local/bin/ctags -s /opengrok/src -d /opengrok/data -P -S -W /opengrok/etc/configuration.xml --symlink /opengrok/src/tmp -U http://localhost:8080/source
When I connect to localhost/source, the tmp directory is listed, but when I click it, the files in tmp are not displayed; instead, the following error message appears:
Error: File not found!
The requested resource is not available.
Resource lacks history info. Was remote SCM side up when indexing occurred? Cleanup history cache dir(or just the .gz for the file or db record) and rerun indexer making sure remote side will respond during indexing.
How can I access and view the files in tmp using OpenGrok?
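What the indexer's own error hint suggests is to clear the stale history cache and re-run the indexer while the symlink target is reachable. A minimal sketch, assuming the history cache lives under the data root at /opengrok/data/historycache (that location is an assumption; adjust to your layout):
# Remove the stale history cache entries for the symlinked directory (path is an assumption).
rm -rf /opengrok/data/historycache/tmp
# Re-run the same indexer command so the symlinked directory is indexed again
# while /home/A/workspace/tmp is present and readable.
java -Djava.util.logging.config.file=/opengrok/etc/logging.properties -jar /opengrok/dist/lib/opengrok.jar -c /usr/local/bin/ctags -s /opengrok/src -d /opengrok/data -P -S -W /opengrok/etc/configuration.xml --symlink /opengrok/src/tmp -U http://localhost:8080/source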

Related

Iterating through multiple subdirectories to load large batch data

I'm a newbie to coding and am making a legislative database for use in academic work. I have downloaded the California legislative information into a directory on a partitioned portion of my HD. I loaded the schema into the MySQL DB with no issues, downloaded the data, and am having problems getting it uploaded. Let's call my workspace home directory home; within that directory are my modules (I have node in there, but I would love to avoid using it until I make an app), my JSON package and settings files, and a subdirectory called pubinfo. This is all set up.
Within the pubinfo directory are my SQL table files and shell commands for loading the data into MySQL (where I have a DB with tables ready for data insertion), as well as subdirectories for legislative sessions labeled from 2001 to 2019 (2001, 2003, and so on, by 2 years). The loadData.sh file is below; the instructions from the California data website said to download these files, unzip them, and then run them in my pubinfo directory...
if [ $# -gt 0 ]; then
  echo Usage: .loadData.sh
  exit 1
fi
if [ -z "$MYSQL_PWD" ]; then
  read -p "Please enter root password:" MYSQL_PWD
  export MYSQL_PWD=${MYSQL_PWD}
fi
# read table names, one per line, from tables_lc.lst
while read lcTable
do
  if [ -e ${lcTable}.dat ]; then
    echo Processing table: ${lcTable}
    if [ -z "$MYSQL_PWD" ]; then
      mysql -uroot -p -Dcapublic -v -v -f < ${lcTable}.sql 2>&1 > ${lcTable}.log
    else
      mysql -uroot -Dcapublic -v -v -f < ${lcTable}.sql 2>&1 > ${lcTable}.log
    fi
  fi
done < "tables_lc.lst"
When run, the output on my zsh terminal is '/usr/local/bin/loadData.sh: line 29: location_code_tbl.sql: No such file or directory'. I should also add that I symlinked the shell file into my PATH so that it could be called globally; I plan to eliminate that once this is all uploaded. I suppose I could symlink all the SQL tables as well, but I know there has to be an easier way to iterate through subdirectories while using the SQL tables and files in my main directory. I'm just not familiar with zsh or bash; I had to take a Udemy course just to set up the MySQL DB. Anyway, I was hoping someone would be able to help; if you have any questions that I did not address here, I can answer them. Oh, and if there is any question about my machine: it is a newer MacBook Pro running the most current MySQL version, and my editor is Visual Studio Code in addition to the good old terminal.
Thanks!
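One possible approach (a sketch only, with assumed paths): run the loader from inside each session subdirectory, where the .dat files live, while pointing mysql at the .sql files and tables_lc.lst kept in pubinfo via an absolute path. The PUBINFO location below is a placeholder, MYSQL_PWD is assumed to be exported as in the original script, and the .sql loaders are assumed to reference the .dat files relative to the current directory.
#!/bin/bash
# Sketch: iterate the session subdirectories (2001, 2003, ...) under pubinfo
# and load each table's .dat file using the .sql loader stored in pubinfo.
PUBINFO=/path/to/home/pubinfo   # placeholder: adjust to the real pubinfo location
for session in "$PUBINFO"/20*; do
  [ -d "$session" ] || continue
  echo "Loading session: $session"
  (
    cd "$session" || exit 1
    while read -r lcTable; do
      if [ -e "${lcTable}.dat" ]; then
        echo "Processing table: ${lcTable}"
        mysql -uroot -Dcapublic -v -v -f < "$PUBINFO/${lcTable}.sql" > "${lcTable}.log" 2>&1
      fi
    done < "$PUBINFO/tables_lc.lst"
  )
done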

OwnCloud: How to synchronize the file system with the DB

I have to "insert" a lot of files into an owncloud server (8.2).
A user gave me a USB key with the files and told me to copy all of them into his ownCloud data file repository.
Do you know if it is possible?
Is it possible to synchronize the ownCloud data file system with the ownCloud database?
My environment is Linux CentOS7 (Apache 2.4, mySQL 5.6, php 5.6)
Thanks,
ownCloud ships with a command line utility that allows you to manually trigger some tasks. Among those is the files:scan function, which re-scans a user's file system.
So you can import those files by following these steps:
1. you copy the files into the physical file system of the user(s) inside ownCloud's data folder
2. you run the command line utility to re-scan the files. That takes care of updating the database according to the files found.
This is an example for the manual trigger:
sudo -u www-data php occ files:scan <user name>
Here <user name> obviously has to be replaced. Also, the account name the sudo command switches to depends on the Linux distribution and its setup. The command has to be started inside ownCloud's base folder. The command can be called in a loop with different user names; that can be done by means of standard scripting, as in the sketch below.
Here is the documentation of the utility: https://doc.owncloud.org/server/8.0/admin_manual/configuration_server/occ_command.html
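A minimal sketch of such a loop, assuming a Debian-style setup where the web server account is www-data and with placeholder user names; run it from ownCloud's base folder:
# Re-scan the files of several ownCloud users in one go.
# Replace alice/bob/carol with real account names.
for user in alice bob carol; do
  sudo -u www-data php occ files:scan "$user"
done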
I just tried this myself on an ownCloud 8.2 installation and succeeded.
Before I could successfully scan my files again as arkascha explained, I needed to change the owner and the group of the new folder to www-data (for Debian OS - others see OC-Docu 1) and set the rights of the new directory to 755.
Change owner:
sudo chown -R www-data:www-data <path>
Change rights:
sudo chmod 755 <path>
where <path> is the path to the newly added directory and could, for example, look like this: /media/hdd/owncloud/data/<username>/files/<newFolderName>
OC-Docu:
https://doc.owncloud.org/server/9.0/admin_manual/configuration_server/occ_command.html

Cannot delete files on docker host

I'm using the following shell script as the entrypoint to extract my databases and start up the container.
#!/bin/bash
if [ ! -d "/var/lib/mysql/assetmanager" ]; then
tar -zxvf mysql.tar.gz
fi
exec /usr/bin/mysqld_safe
On startup I mount a local directory to the /var/lib/mysql directory with the -v parameter and then extract the files with the above script.
But now I can't delete the extracted files on my host because of a permission denied error.
Can someone help me with this problem?
Thx
You cannot delete them because, by default, processes in the container are executed by the root user, so the extracted files belong to root. If you don't need these files in the mapped dir, use a different location for them, e.g. -v ...:/myassets, and in the script:
if [ ! -d "/var/lib/mysql/assetmanager" ]; then
tar -zxvf /myassets/mysql.tar.gz
fi
You could also map a single file instead of the whole directory if you only need that file.
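For illustration, a docker run invocation with such a mapping might look like this (image name and host paths are placeholders):
# Host directory holding mysql.tar.gz is mounted read-only at /myassets;
# the data directory is still mounted at /var/lib/mysql.
docker run -d \
  -v /srv/assets:/myassets:ro \
  -v /srv/mysql-data:/var/lib/mysql \
  my-mysql-image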
There are many other solutions, depending on what you need:
you could delete these files as root: sudo rm ...
you could delete them in a container (see the sketch below)
you could create a user in the container and create the files as that user
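As an illustration of deleting them from inside a container (so no sudo is needed on the host), a throwaway container can remove the mapped files; the host path and the directory to remove are placeholders:
# Delete the extracted directory from the mapped host folder as root inside a container.
docker run --rm -v /srv/mysql-data:/data alpine rm -rf /data/assetmanager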

How to make automatic backups of MySQL DBs on GoDaddy with Apache servers

I'm trying to automate a daily backup of a MySQL database on shared hosting with GoDaddy.com using Apache servers.
For this I researched and found out about bash scripts.
GoDaddy hosting also lets me do cron jobs, so I did the following:
My bash script looks something like this (I masked the sensitive data only):
<br>
#/bin/sh<p></p>
<p>mysqldump -h myhost-u myuser -pMypassword databasename > dbbackup.sql<br>
gzip dbbackup.sql<br>
mv dbbackup.sql.gz _db_backups/`date +mysql-BACKUP.sql-%y-%m-%d.gz`<br>
</p>
I configured the cron job which points to this file and executes it every 24 hours.
I have the cron job utility configured to send me a log message to my email every time it runs.
And this is the log message:
/var/chroot/home/content/01/3196601/html/_db_backups/backup.sh: line 1: br: No such file or directory
/var/chroot/home/content/01/3196601/html/_db_backups/backup.sh: line 3: p: No such file or directory
/var/chroot/home/content/01/3196601/html/_db_backups/backup.sh: line 4: br: No such file or directory
/var/chroot/home/content/01/3196601/html/_db_backups/backup.sh: line 5: br: No such file or directory
/var/chroot/home/content/01/3196601/html/_db_backups/backup.sh: line 6: /p: No such file or directory
It's like it doesn't understand the language. Should I edit my .htaccess file for this?
Any ideas?
Remove those HTML tags from the bash script; the error messages are all related to them. Your script should be as follows.
#!/bin/sh
# dump, compress and archive the database with a date-stamped name
mysqldump -h myhost -u myuser -pMypassword databasename > dbbackup.sql
rm -rf dbbackup.sql.gz
gzip dbbackup.sql
mv dbbackup.sql.gz _db_backups/`date +mysql-BACKUP.sql-%y-%m-%d.gz`
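For reference, a matching crontab entry could look like the following; the schedule is just an example, and the script path is the one from the log output above:
# run the backup script once a day at 03:00
0 3 * * * /bin/sh /var/chroot/home/content/01/3196601/html/_db_backups/backup.sh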

How can I get around MySQL Errcode 13 with SELECT INTO OUTFILE?

I am trying to dump the contents of a table to a csv file using a MySQL SELECT INTO OUTFILE statement. If I do:
SELECT column1, column2
INTO OUTFILE 'outfile.csv'
FIELDS TERMINATED BY ','
FROM table_name;
outfile.csv will be created on the server in the same directory this database's files are stored in.
However, when I change my query to:
SELECT column1, column2
INTO OUTFILE '/data/outfile.csv'
FIELDS TERMINATED BY ','
FROM table_name;
I get:
ERROR 1 (HY000): Can't create/write to file '/data/outfile.csv' (Errcode: 13)
Errcode 13 is a permissions error, but I get it even if I change ownership of /data to mysql:mysql and give it 777 permissions. MySQL is running as user "mysql".
Strangely I can create the file in /tmp, just not in any other directory I've tried, even with permissions set such that user mysql should be able to write to the directory.
This is MySQL 5.0.75 running on Ubuntu.
Which particular version of Ubuntu is this and is this Ubuntu Server Edition?
Recent Ubuntu Server Editions (such as 10.04) ship with AppArmor and MySQL's profile might be in enforcing mode by default. You can check this by executing sudo aa-status like so:
# sudo aa-status
5 profiles are loaded.
5 profiles are in enforce mode.
/usr/lib/connman/scripts/dhclient-script
/sbin/dhclient3
/usr/sbin/tcpdump
/usr/lib/NetworkManager/nm-dhcp-client.action
/usr/sbin/mysqld
0 profiles are in complain mode.
1 processes have profiles defined.
1 processes are in enforce mode :
/usr/sbin/mysqld (1089)
0 processes are in complain mode.
If mysqld is included in enforce mode, then it is probably the one denying the write. Entries would also be written to /var/log/messages when AppArmor blocks the writes/accesses. What you can do is edit /etc/apparmor.d/usr.sbin.mysqld and add /data/ and /data/* near the bottom like so:
...
/usr/sbin/mysqld {
    ...
    /var/log/mysql/ r,
    /var/log/mysql/* rw,
    /var/run/mysqld/mysqld.pid w,
    /var/run/mysqld/mysqld.sock w,
    /data/ r,
    /data/* rw,
}
And then make AppArmor reload the profiles.
# sudo /etc/init.d/apparmor reload
WARNING: the change above will allow MySQL to read and write to the /data directory. We hope you've already considered the security implications of this.
Ubuntu uses AppArmor and that is what's preventing you from accessing /data/. Fedora uses SELinux, which would prevent this on a RHEL/Fedora/CentOS machine.
To modify AppArmor to allow MySQL to access /data/, do the following:
sudo gedit /etc/apparmor.d/usr.sbin.mysqld
add this line anywhere in the list of directories:
/data/ rw,
then do a:
sudo /etc/init.d/apparmor restart
Another option is to disable AppArmor for MySQL altogether; this is NOT RECOMMENDED:
sudo mv /etc/apparmor.d/usr.sbin.mysqld /etc/apparmor.d/disable/
Don't forget to restart apparmor:
sudo /etc/init.d/apparmor restart
I know you said that you already tried setting permissions to 777, but since I have evidence that for me it was a permission issue, I'm posting exactly what I ran, hoping it can help. Here is my experience:
tmp $ pwd
/Users/username/tmp
tmp $ mkdir bkptest
tmp $ mysqldump -u root -T bkptest bkptest
mysqldump: Got error: 1: Can't create/write to file '/Users/username/tmp/bkptest/people.txt' (Errcode: 13) when executing 'SELECT INTO OUTFILE'
tmp $ chmod a+rwx bkptest/
tmp $ mysqldump -u root -T bkptest bkptest
tmp $ ls bkptest/
people.sql people.txt
tmp $
MySQL is getting stupid here. It tries to create files under /tmp/data/.... So what you can do is the following:
mkdir /tmp/data
mount --bind /data /tmp/data
Then try your query. This worked for me after hours of debugging the issue.
You can do this:
mysql -u USERNAME --password=PASSWORD --database=DATABASE --execute='SELECT `FIELD`, `FIELD` FROM `TABLE` LIMIT 0, 10000 ' -X > file.xml
This problem has been bothering me for a long time. I noticed that this discussion does not point out the solution on RHEL/Fedora. I am using RHEL and I could not find the configuration files corresponding to AppArmor on Ubuntu, but I solved my problem by making EVERY directory in the directory path readable and accessible by mysql. For example, if you create a directory /tmp, the following two commands make SELECT INTO OUTFILE able to output the .sql and .txt files:
chown mysql:mysql /tmp
chmod a+rx /tmp
If you create a directory in your home directory /home/tom, you must do this for both /home and /home/tom.
Some things to try:
is the secure_file_priv system variable set? If it is, all files must be written to that directory (you can check it as shown below).
ensure that the file does not exist - MySQL will only create new files, not overwrite existing ones.
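A quick way to check that variable from the shell (credentials are placeholders):
# Show where MySQL is allowed to write OUTFILEs; an empty value means no restriction is set.
mysql -u root -p -e "SHOW VARIABLES LIKE 'secure_file_priv';"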
I had the same problem and fixed the issue with the following steps:
Operating system: Ubuntu 12.04
LAMP installed
Suppose your directory for saving the output file is /var/www/csv/.
Execute the following command in a terminal to edit this file with the gedit editor and add your directory for the output file:
sudo gedit /etc/apparmor.d/usr.sbin.mysqld
Now the file is opened in the editor; please add your directory there:
/var/www/csv/* rw,
That is what I added in my file.
Execute the next command to restart the service:
sudo /etc/init.d/apparmor restart
For example, I execute the following query in the phpMyAdmin query builder to output data to a CSV file:
SELECT colName1, colName2,colName3
INTO OUTFILE '/var/www/csv/OUTFILE.csv'
FIELDS TERMINATED BY ','
FROM tableName;
It successfully wrote all rows with the selected columns into the OUTFILE.csv file.
In my case, the solution was to make every directory in the directory path readable and accessible by mysql (chmod a+rx). The directory was still specified by its relative path in the command line.
chmod a+rx /tmp
chmod a+rx /tmp/migration
etc.
I just ran into this same problem. My issue was that the directory I was trying to dump into didn't have write permission for the mysqld process. The initial SQL dump would write out, but the write of the csv/txt file would fail. It looks like the SQL dump runs as the current user while the conversion to csv/txt is run as the user running mysqld, so the directory needs write permissions for both users.
You need to provide an absolute path, not a relative path.
Provide the full path to the /data directory you are trying to write to.
Does Ubuntu use SELinux? Check to see if it's enabled and enforcing. /var/log/audit/audit.log may be helpful (if that's where Ubuntu sticks it - that's the RHEL/Fedora location).
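For reference, a quick way to check this on a RHEL/Fedora-style system (commands assume the SELinux tooling and audit logging are installed):
# Print the current SELinux mode (Enforcing/Permissive/Disabled).
getenforce
# Look for recent denials involving mysqld.
sudo grep mysqld /var/log/audit/audit.log | tail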
I had the same problem on CentOS 6.7.
In my case all permissions were set and still the error occurred. The problem was that SELinux was in "enforcing" mode.
I switched it to "permissive" using the command sudo setenforce 0.
Then everything worked out for me.
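Note that setenforce 0 only lasts until the next reboot; to make it persistent you would typically set SELINUX=permissive in /etc/selinux/config (a sketch; consider the security implications first):
# Temporarily switch SELinux to permissive mode.
sudo setenforce 0
# Persist the change across reboots by editing the SELinux config.
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config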