HDFS access error from Cosmos global instance - fiware

I'm trying to access my HDFS space using the Hadoop fs commands.
I've followed the instructions from:
http://forge.fiware.org/plugins/mediawiki/wiki/fiware/index.php/BigData_Analysis_-_Quick_Start_for_Programmers
But after successfully logging in, the hadoop fs commands fail.
$ ssh tcappellari@cosmos.lab.fiware.org
[tcappellari@cosmosmaster-gi ~]$ hadoop fs -ls /user/tcappellari
ls: Cannot access /user/tcappellari: No such file or directory.
Many thanks!

Your HDFS user space (/user/tcappellari) should be enabled now. This was an error when provisioning the account, possibly because the cluster was in safe mode.
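For reference, provisioning a user space on the master node usually amounts to something like the following, run as the HDFS superuser (the superuser account and the group name here are assumptions):
hadoop fs -mkdir /user/tcappellari
hadoop fs -chown tcappellari:tcappellari /user/tcappellari
Once that is done, the original hadoop fs -ls /user/tcappellari command should succeed, listing nothing until files are uploaded.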

Related

MySQL best security practices for permitting --local-infile to allow LOAD DATA LOCAL INFILE

Trying to use LOAD DATA LOCAL INFILE to import a CSV file produced the "this command is not allowed with this MySQL version" error.
Upon further reading I learned that SET GLOBAL local_infile=1 can only be run by the root MySQL account, or the setting can be placed in the my.cnf file (followed by a MySQL restart). For security purposes, the script that needs to import the CSV file uses a non-root MySQL admin account; in fact, all of my public-facing scripts open their MySQL sessions with non-root admin accounts. So without a root account, it doesn't look like I can set it on the fly and then disable it when the script is done.
The next option is to set it at the server level in my my.cnf file and restart MySQL. But once I enable local-infile=1 on the server, I have exposed it to security issues: after that, all a client needs to do is run mysql -u user -p password dbName --local-infile=1 and that session has access. This definitely does not seem ideal... or am I wrong about this assumption?
The other option is then using LOAD DATA INFILE, which apparently uses the server file system's /tmp directory to save files to and for MySQL to read from. But that requires the /tmp directory being globally writable and/or a system admin having access to that directory. Unless I am root on the Linux box, I can't write to that directory without opening it up globally, and opening up /tmp globally is itself a security issue.
Ideally, using a non-root MySQL account, how can I enable local_infile temporarily to run my script and then disable it when done? Or what other method could I consider that would achieve the same result?
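For illustration, the pieces involved look roughly like this (a sketch; the table and file names are placeholders, and note the server-side variable is local_infile, which has global scope only and so cannot be toggled per session):
# Server side: requires SUPER, so this fails for a non-root admin account:
mysql -u admin -p -e "SET GLOBAL local_infile = 1;"
# Client side: per-session opt-in, only effective once the server allows it:
mysql -u admin -p --local-infile=1 dbName -e "LOAD DATA LOCAL INFILE '/path/to/file.csv' INTO TABLE my_table FIELDS TERMINATED BY ',';"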

Creating PostgreSQL application in Openshift from shell

I need to create a PostgreSQL application in OpenShift 4 using the 'oc' command line.
I can see that by adding the app from the Catalog, the application is correctly created.
On the other hand, using the following shell command, the Pod goes into CrashLoopBackOff:
oc new-app -e POSTGRESQL_USER=postgres -e POSTGRESQL_PASSWORD=postgres -e POSTGRESQL_DATABASE=demodb postgresql
The log file contains the following error:
fixing permissions on existing directory /var/lib/postgresql/data ... initdb: error: could not change permissions of directory "/var/lib/postgresql/data": Operation not permitted
What is the correct command to start up PostgreSQL?
Thanks
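For what it's worth, the /var/lib/postgresql/data path in the error suggests an upstream-style postgres image was resolved rather than the Red Hat postgresql image (which keeps its data under /var/lib/pgsql/data and is built to run as an arbitrary non-root UID). Two sketches, with the image stream name, user names, and the PGDATA trick all being assumptions to verify against your cluster:
# Option 1: explicitly request the postgresql image stream (a user name other
# than postgres is used, since the image reserves postgres for the admin account):
oc new-app --image-stream=postgresql -e POSTGRESQL_USER=demo -e POSTGRESQL_PASSWORD=demo -e POSTGRESQL_DATABASE=demodb
# Option 2: keep the upstream image, use its own variable names, and point PGDATA
# at a subdirectory so initdb does not need to chmod the volume root:
oc new-app postgres -e POSTGRES_USER=demo -e POSTGRES_PASSWORD=demo -e POSTGRES_DB=demodb -e PGDATA=/var/lib/postgresql/data/pgdata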

MySQL LOAD DATA INFILE error 13 on a Pi

I am trying to use the LOAD DATA INFILE MySQL command on a Raspberry Pi running Raspbian. There are a lot of similar questions on here, but none seems to answer my problem exactly.
My code works fine on my Windows dev machine but on the Pi I get this error:
Can't get stat of '/var/www/transfer/categories.csv' (Errcode: 13)
The MySQL statement is:
LOAD DATA INFILE '/var/www/transfer/categories.csv'
IGNORE INTO TABLE category
FIELDS TERMINATED BY ',' ENCLOSED BY '"';
The code is running in PHP and the database is MySQL.
The file and its '/transfer' folder have read permissions for World.
I have read a little about AppArmor but can't see how to check or change how it is configured. There are 2 files in the /etc/apparmor.d folder: one is .usr.sbin.mysqld.swp, but it doesn't seem to contain readable text, and the other refers to lightdm.
The database server and client is on the same server, so the LOCAL keyword doesn't apply.
My MySQL user has global privileges, so it includes the FILE privilege.
I have checked the secure_file_priv setting and there is none.
I am sure this is some sort of permission or privilege issue, but I've run out of ideas. I want the file to live under the www folder because the system user has FTP rights to put it there. Ultimately I want to also create the file on the same machine but for now I'm happy to just read the file created under Windows.
$ errno 13
EACCES 13 Permission denied
Check your permissions, especially folder permissions. You can use su or sudo -u to switch to the MySQL user and run ls -la /var/www/transfer/; if you don't see anything, then you know the issue has to do with the permissions of the folder and/or its contents.
If MySQL is running locally, see which user it runs as with: ps -elf | grep mysql
To switch to the MySQL user and test: sudo -u <mysql> bash
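A quick way to audit the whole path (namei ships with util-linux; mysql is the default service user on Raspbian):
# Show owner and permissions of every component of the path:
namei -l /var/www/transfer/categories.csv
# Or test the read directly as the service user:
sudo -u mysql cat /var/www/transfer/categories.csv > /dev/null && echo readable
Every directory in the chain needs the execute (x) bit for the mysql user, not just the file and its immediate folder.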

Can't open and lock privilege tables: Table 'mysql.user' doesn't exist

I installed MySQL Community Server 5.7.10 using the binary zip. I extracted the zip to c:\mysql and created the data folder at c:\mysql\data. I created the config file as my.ini and placed it in c:\mysql (the root folder of the extracted zip). Below is the content of the my.ini file:
# set basedir to your installation path
basedir=C:\mysql
# set datadir to the location of your data directory
datadir=C:\mysql\data
I'm trying to start MySQL using mysqld --console, but the process is aborted with the below error.
2015-12-29T18:04:01.141930Z 0 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.user' doesn't exist
2015-12-29T18:04:01.141930Z 0 [ERROR] Aborting
Any help on this will be appreciated.
You have to initialize the data directory by running one of the following commands:
mysqld --initialize [with random root password]
mysqld --initialize-insecure [with blank root password]
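For the layout in the question, the full sequence might look like this (a sketch; --defaults-file just makes explicit which my.ini is read):
C:\mysql\bin\mysqld --defaults-file=C:\mysql\my.ini --initialize-insecure --console
C:\mysql\bin\mysqld --defaults-file=C:\mysql\my.ini --console
C:\mysql\bin\mysql -u root
Note that initialization requires the datadir (C:\mysql\data here) to be empty or absent, so remove any half-created files in it first.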
On MariaDB you use the install script mysql_install_db to install and initialize, and it also needs the datadir parameter:
mysql_install_db --user=root --datadir=$db_datapath
In my case I use an environment variable for the data path. Not only does mysqld need to know where the data is (specified via the command line), but so does the install script.
Run mysqld --initialize to initialize the data directory, then start mysqld &.
If you have already launched mysqld & without running mysqld --initialize first, you might have to delete all files in your data directory.
You can also modify /etc/my.cnf to add a custom path to your data directory like this :
[mysqld]
...
datadir=/path/to/directory
As suggested above, I had a similar issue with mysql-5.7.18 and resolved it this way:
1. Executed MYSQL_HOME\bin\mysqld.exe --initialize-insecure
2. Then started MYSQL_HOME\bin\mysqld.exe
3. Connected Workbench to localhost:3306 with username 'root'
4. Then executed the query SET PASSWORD FOR 'root'@'localhost' = 'root';
The password was also updated successfully.
I had the same problem. For some reason --initialize did not work.
After about 5 hours of trial and error with different parameters, configs and commands I found out that the problem was caused by the file system.
I wanted to run a database on a large USB HDD drive. Drives larger than 2 TB are GPT partitioned! Here is a bug report with a solution:
https://bugs.mysql.com/bug.php?id=28913
In short: add the following line to your my.ini:
innodb_flush_method=normal
I had this problem with mysql 5.7 on Windows.
My problem was caused by an incorrect db restore.
When I dumped the db, it also picked up the system mysql tables because I added a space after -p, as mentioned here: mysqldump is dumping undesired system tables
Launching the docker instance would work; then I'd restore (and corrupt) the db, and it would still keep running, but after restarting it would exit with error code 1.
The solution was to dump and restore properly, without the system tables.
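For example, the difference is just where the password argument ends (the database name and password here are placeholders):
mysqldump -u root -pS3cret demodb > demodb.sql
mysql -u root -pS3cret demodb < demodb.sql
With no space after -p, the next token is read as the password and demodb is the only schema dumped; with a space after -p, the client prompts for the password and silently treats the next token as a database name instead, which is how the wrong set of tables can end up in the dump.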
I faced the same issue with MySQL 5.7.33 when the server rebooted. I fixed it by copying the user table files over from another server: scp /var/lib/mysql/mysql/user.* root@dest:/var/lib/mysql/mysql

Adding additional MySQL data folder to server. Ubuntu

Here's the deal: I removed MySQL 5.0.xx and neglected to dump a data folder which is on a mounted drive.
I now have MySQL 5.6.5 installed and running, and the data folder works fine in the default directory. I attempted to switch the data dir in the my.cnf file, but that results in the error "The server quit without updating PID file."
What I would like to do is still have my.cnf point at the default data directory while also adding the external database to MySQL. This is how I had it set up in MySQL 5.0.xx. The only problem is that I created the database via a GUI and specified that the data would actually be stored on the mounted drive. I can't quite figure out how to do this via the command line, and I have found no good sources of documentation or examples.
You probably created a symbolic link to the directory on the mounted drive. This is done with the ln command:
cd /var/lib/mysql
ln -s /mounted_drive/data_directory/db_name db_name
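One caveat worth adding on Ubuntu: AppArmor confines mysqld to its default data path, so reads through a symlink to a mounted drive are often denied until the profile allows the real location. A sketch, assuming your MySQL package ships the usual local override file:
# In /etc/apparmor.d/local/usr.sbin.mysqld:
/mounted_drive/data_directory/db_name/ r,
/mounted_drive/data_directory/db_name/** rwk,
# Then reload the profiles and restart MySQL:
sudo service apparmor reload
sudo service mysql restart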
On Ubuntu the MySQL data folder resides in /var/lib/mysql.
Generally you can set the datadir variable in my.cnf (see http://dev.mysql.com/doc/refman/5.6/en/server-options.html#option_mysqld_datadir) in order to change the default data directory.