Just installed TYPO3 version 6, Government package, on Linux Mint 14. Installed the latest versions of PHP & MySQL. Compiled PHP with:
./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql --with-mysqli --with-openssl --with-zlib --enable-soap --enable-hash --with-pcre-regex --with-curl --with-gd
make
make install
Typo3 installer gives the following error:
There is no connection to the database!
(Username: typo3, Host: localhost, Using Password: YES)
Go to Step 1 and enter a valid username and password!
The username and password that I entered are correct. I am able to connect to mysql using these credentials with
phpMyAdmin web interface
mysql -u typo3 -p typo3db
A Google search turned up other people with this problem; the cause was one of three things:
underscore in the DB name - as you can see from my DB name (typo3db), this doesn't apply to my case
config not allowing persistent connections to MySQL - doesn't apply to my case, since I am currently allowing persistent connections
permissions of the TYPO3 files - the suggested fix was to set all files to 755 or 777 (way too permissive in my opinion), but I tried it to rule out a permission issue; it didn't resolve the problem in my case
I enabled general query logging for MySQL. When I enter the username and password for the MySQL user in the TYPO3 installer, it immediately gives the error that it can't connect, but the MySQL logs show no login attempt. Conversely, when I log in with phpMyAdmin the log does show the success, and when I type in a wrong password on purpose, the log shows the attempt being denied.
This all implies to me that no connection from TYPO3 to MySQL is being initiated, but I don't know why.
Any thoughts on what the issue could be or what I should check next?
I had a similar issue, but found I had to use the server's IP address instead of localhost. Odd, since localhost has always worked in the past without issue.
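A quick way to test that distinction from the shell (a diagnostic sketch, not from the original answer; it reuses the credentials from the question): the MySQL client treats the literal hostname localhost as a request for a Unix-socket connection, while an IP address forces TCP, so comparing the two shows which path is broken.
# Socket connection, which is what "localhost" selects
mysql -h localhost -u typo3 -p typo3db
# TCP connection, which is what an IP address selects
mysql -h 127.0.0.1 -u typo3 -p typo3db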
I had the exact same problem.
Fixed it by enabling required modules in PHP.
They are listed in the INSTALL.txt file in the root TYPO3 folder.
- fileinfo
- filter
- GD2
- JSON
- mysqli
- openssl
- pcre
- session
- SOAP
- SPL
- standard
- xml
- zlib
I believe the crucial module in this case was mysqli.
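A quick way to verify which of these modules a PHP build actually contains (assuming the CLI binary was built with the same configure flags as the Apache module):
# List all compiled-in modules, then filter for the critical one
php -m
php -m | grep -i mysqli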
My solution is this:
The file access rights to /var/lib/mysql must be set in this manner:
[root@localhost phpmyadmin]# cd /var/lib
[root@localhost lib]# chmod 755 mysql
[root@localhost lib]# ls -ld mysql
drwxr-xr-x 16 mysql mysql 4096 Dec 16 20:14 mysql/
The mysql user and mysql group must also be set on all files and folders under /var/lib/mysql.
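For example, ownership could be fixed recursively like this (assuming the service user and group are both named mysql, as in the listing above):
chown -R mysql:mysql /var/lib/mysql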
After this change I could log in to the TYPO3 database using the Install Tool.
I had the same symptom with TYPO3 4.6; its installer didn't connect. I traced it down to this call stack:
class.ux_t3lib_db.php / sql_pconnect / handler_init
The handler_init function has this code:
if (!$cfgArray['config']['database']) {
    // Configuration is incomplete
    return;
}
The database property was empty. I had to set it in localconf.php with this line:
$typo_db = "mydatabasename";
I also created my database with this name in MySQL.
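For reference, creating such a database and granting a TYPO3 user access might look like this (a sketch; the user name and password are placeholders, not from the original answer):
mysql -u root -p -e "CREATE DATABASE mydatabasename;"
mysql -u root -p -e "GRANT ALL ON mydatabasename.* TO 'typo3'@'localhost' IDENTIFIED BY 'secret';"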
marc_s,
Did you check your file permissions?
For TYPO3 I use these commands from the TYPO3 installation folder:
find . -type d ! -name .svn -exec chmod 755 {} \; && find . -type f ! -name .svn -exec chmod 644 {} \;
chmod -R g+wX fileadmin typo3temp typo3conf uploads
You can also read this article:
http://dmitry-dulepov.com/article/migrating-typo3-installation-to-a-different-server.html
I hope this information will help.
I am trying to mount a remote filesystem on Google Container Engine. I am following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-use-sshfs-to-mount-remote-file-systems-over-ssh
Using the following sshfs command:
sudo sshfs -o sshfs_debug,allow_other <instance-name>.<region>.<project_id>:/home/<user_name> /mnt/gce-container
I am getting this error:
SSHFS version 2.5
read: Connection reset by peer
I referred to this link https://cloud.google.com/sdk/gcloud/reference/compute/config-ssh
and could log in using ssh via the following commands:
$ gcloud compute config-ssh
$ ssh <instance-name>.<region>.<project_id>
Any ideas what might be going wrong here? I can't work out which key and username I should use for the sshfs login.
Update (11/5):
I am using the following command:
sshfs -o IdentityFile=~/.ssh/google_compute_engine <user>@<ip>:~/ /mnt/gce
I have chowned the /mnt/gce folder to my user, and I checked that the IP matches the entry in the ~/.ssh/config file. However, I still get the error read: Connection reset by peer.
The problem with the command below is that:
1) unless you have a static IP, it keeps changing on machine reboot
2) you need to use the .pub file
sshfs -o IdentityFile=~/.ssh/google_compute_engine <user>@<ip>:~/ /mnt/gce
I finally got it working with the following commands:
sudo mkdir /mnt/gce
sudo chown <user> /mnt/gce
sshfs -o IdentityFile=~/.ssh/google_compute_engine.pub <user_name>@<instance-name>.<region>.<project_id>:/home/<user_name> /mnt/gce
A few things that might be the cause of the problem:
Don't use sshfs as root. It's a FUSE filesystem and meant to be user mounted.
Don't specify a full path as the remote FS. It's SSH, so by default, the $PWD on the remote side is the login user's $HOME.
If ssh works, sshfs will work. The easiest way is to make sure that ~/.ssh/config has an entry for the remote host with the user, port, etc. provided.
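Such an entry might look like this (a sketch; the host alias, IP, and user name are placeholders, not taken from the answers above):
Host gce-instance
    HostName <ip>
    User <user>
    IdentityFile ~/.ssh/google_compute_engine
With that entry in place, the remote user's $HOME is mounted by default and the mount reduces to:
sshfs gce-instance: /mnt/gce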
If you get this from sshfs:
read: Connection reset by peer
it may help to set the key file to read-only:
chmod 400 /{{path_to_your_key}}/keypair.pem
and connect again.
I have to "insert" a lot of files into an owncloud server (8.2).
A user give me a USB key with the files and tell me to copy of all them into his owncloud data files repository.
Do you know if is it possible ?
Is it possible to synchronyze the ownCloud data fileSystem with the ownCloud database?
My environment is Linux CentOS7 (Apache 2.4, mySQL 5.6, php 5.6)
Thanks,
ownCloud ships with a command line utility that allows you to trigger some tasks manually. Among those is the files:scan function, which re-scans a user's file system.
So you can import those files by following these steps:
1. Copy the files into the physical file system of the user(s) inside ownCloud's data folder.
2. Run the command line utility to re-scan the files. That takes care of updating the database according to the files found.
This is an example for the manual trigger:
sudo -u www-data php occ files:scan <user name>
Here <user name> obviously has to be replaced. Also, the account name the sudo command switches to depends on the Linux distribution and its setup. The command has to be started inside ownCloud's base folder. It can also be called in a loop with different user names by means of standard scripting, as sketched below.
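A minimal sketch of such a loop (the account names here are made-up examples, and the www-data user depends on your distribution):
# Run from the ownCloud base folder; re-scan several accounts in turn
for u in alice bob carol; do
    sudo -u www-data php occ files:scan "$u"
done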
Here is a documentation of the utility: https://doc.owncloud.org/server/8.0/admin_manual/configuration_server/occ_command.html
I just tried it myself on an ownCloud 8.2 installation and succeeded.
Before I could successfully scan my files again as arkascha explained, I needed to change the owner and the group of the new folder to www-data (for Debian-based OSes; for others see the OC docs linked below) and set the rights of the new directory to 755.
Change owner:
sudo chown -R www-data:www-data <path>
Change rights:
sudo chmod 755 <path>
where <path> is the path to the newly added directory and could, for example, look like this: /media/hdd/owncloud/data/<username>/files/<newFolderName>
OC-Docu:
https://doc.owncloud.org/server/9.0/admin_manual/configuration_server/occ_command.html
I'm running Django on DigitalOcean with Gunicorn and nginx: Gunicorn serves Django, and nginx serves the static files.
Upon uploading a file via the website, I can't save to a folder in the /home directory; I get [Errno 13] Permission denied.
Please, how do I give the web server read/write access to any arbitrary folder anywhere under /home?
This all depends on the user that your application is running as.
If you check which user the Gunicorn server is running your app as (ps aux | grep gunicorn), you can change the chmod or chown permissions accordingly.
ls -lash will show you which user currently owns the folder you are trying to write to and what its permissions are:
4.0K drwxrwx--- 4 username username 4.0K Dec 9 14:11 uploads
You can then use this to check for any issues.
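For instance, if ps showed Gunicorn running as www-data (a hypothetical user here, as is the folder path), ownership and permissions could be adjusted like this:
# Hand the upload folder to the user Gunicorn runs as
sudo chown -R www-data:www-data /home/username/uploads
# Owner gets read/write/traverse; group and others read-only
sudo chmod -R u+rwX,go+rX,go-w /home/username/uploads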
Some docs on changing ownership and permissions
http://linux.die.net/man/1/chmod
http://linux.die.net/man/1/chown
I would advise being very careful about which locations on your disk you give the web server read/write access to. This can have massive security implications.
Well, I worked on this issue for more than a week and finally was able to FIGURE IT OUT.
Please follow the guides from DigitalOcean, but note that they do not pinpoint some important issues, which include:
no live upstreams while connecting to upstream
*4 connect() to unix:/myproject.sock failed (13: Permission denied) while connecting to upstream
gunicorn OSError: [Errno 1] Operation not permitted
*1 connect() to unix:/tmp/myproject.sock failed (2: No such file or directory)
etc.
These issues are basically permission issues for the connection between Nginx and Gunicorn.
To make things simple, I recommend giving the same nginx permissions to every file/project/Python program you create.
To solve all the issues, follow this approach:
First:
Log in to the system as the root user
Create the /home/nginx directory.
After doing this, follow the website until "Create an Upstart Script".
Run chown -R nginx:nginx /home/nginx
For the upstart script, make the following change in the last line:
exec gunicorn --workers 3 --bind unix:myproject.sock -u nginx -g nginx wsgi
DON'T ADD the -m option, as it messes up the socket. According to the Gunicorn documentation, when -m is left at its default, Python will figure out the best permission.
Start the upstart script.
Now just go to the /etc/nginx/nginx.conf file.
Go to the server module and append:
location / {
include proxy_params;
proxy_pass http://unix:/home/nginx/myproject.sock;
}
Do not follow the DigitalOcean article from here on.
Now restart the nginx server and you are good to go.
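For example (standard nginx commands; the service invocation assumes a SysV/Upstart-style setup like the one in the guide):
# Validate the configuration before restarting
sudo nginx -t
# Restart nginx so the new location block takes effect
sudo service nginx restart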
Change the owner of /home
See the actual owner: $ ls -l /
f1 f2  f3  f4  f5 f6   f7   f8  f9           f10
d  rwx r-x r-x 1  root root 209 Mar 30 17:41 /home
https://www.garron.me/en/go2linux/ls-file-permissions.html
f2 Owner permissions over the file or directory
f3 Group permissions over the file or directory
f4 Everybody else's permissions over the file or directory
f6 The user that owns the file or directory
Change the folder owner recursively: sudo chown -R ubuntu /home/ (substitute ubuntu with a non-root user).
Good practices
Use a subdirectory such as /home/ubuntu as the server directory; the ubuntu folder has the ubuntu user as its owner.
Set user-owner permissions to all, and group and other users to read-only: sudo chmod -R 744 /home/ubuntu/
I changed the ownership of the folder containing my images:
chown -R www-data: /myproject/media/mainsite/images
Change the path accordingly and also restart the server. In my case it's Apache2, so:
sudo service apache2 restart
In my case it was something very simple that was generating a similar error: I just had to check the user who controlled Gunicorn and the user who controlled NGINX; they had different permissions.
I'm new to Hadoop. I installed Ubuntu 12.10 on my computer and I want to install Hadoop in pseudo-distributed mode on a single node. I searched and found lots of tutorials, but I have a problem with SSH. I did what the tutorials said.
I am sure the problem is with SSH. I installed openssh-server and had done this:
hadoop00@WebsoftStation:~$ ssh-keygen -t dsa -P "" -f ~/.ssh/id_dsa
hadoop00@WebsoftStation:~/.ssh$ cat ~/.ssh/id_dsa.pub >> authorized_keys
Then I can successfully ssh my localhost like this:
hadoop00@WebsoftStation:~$ ssh localhost
It worked.
So I changed the path to hadoop and then:
hadoop00@WebsoftStation:/usr/local/hadoop$ sudo bin/start-all.sh
[sudo] password for hadoop00:
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-namenode-WebsoftStation.out
root@localhost's password:
root@localhost's password: localhost: Permission denied, please try again.
So,what's the problem?
You have set up password-less SSH only for your current account. Since you can use ssh localhost without any problem, the next thing you need to do is give execute permission to your scripts.
Execute the following commands:
chmod +x bin/*.sh ---> assigns execute permission to all the scripts
./start-all.sh ---> executes the script
Note: Hadoop can also be run without a password-less SSH setup by using the hadoop-daemon.sh script, as sketched below. The only advantage of password-less SSH is that the start-all.sh script then takes the trouble of doing that on your behalf on each of the nodes.
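For completeness, here is what the hadoop-daemon.sh route might look like (the daemon names below are the standard Hadoop 1.x ones, an assumption about your version; run from the Hadoop base folder):
# Start each daemon of a pseudo-distributed setup by hand; no SSH involved
bin/hadoop-daemon.sh start namenode
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start secondarynamenode
bin/hadoop-daemon.sh start jobtracker
bin/hadoop-daemon.sh start tasktracker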
You need to change the permissions on your Hadoop folder so that it is owned by the hadoop00 user:
cd /usr/local/
sudo chown -R hadoop00:hadoop00 /usr/local/hadoop
Then you can cd into the sbin folder and run things without sudo. If you use sudo, you're running the scripts as root, which has different environment variables etc., which is why you see different behavior.
Why are you using sudo? This is clearly a permission problem.
Try running this without sudo:
bin/start-all.sh
I need a generic way to install MySQL 5.5 on almost any Linux OS as a non-root user. Hence I thought to build MySQL from source and install it wherever I need.
Is it really possible to install MySQL in a non-root user's home?
Does anybody have any idea about this? Please share your expertise.
The major constraint here is that I need to install MySQL 5.5 as any non-root user, in a generic way, and ideally on almost any Linux OS.
Any suggestion would be appreciated.
Thanks.
CONCLUSION
I tried with Ubuntu 11.10 and was finally able to install MySQL 5.5 as a non-root user, with the constraint that MySQL is not accessible from the console/command prompt. As mysqld is up and running fine, MySQL was easily accessible via any GUI tool that connects to MySQL via JDBC connectors. If you try to access mysql from the command prompt using
mysql -u root -p
it gives a segmentation fault. I also tried on Fedora Linux as a non-root user; there mysqld failed and MySQL could not be accessed at all :(.
You should customize these three variables:
MYSQL_DATADIR
SYSCONFDIR
CMAKE_INSTALL_PREFIX
Example:
$ cd <mysql_src_dir>
$ cmake -i .
Would you like to see advanced options? [No]:Yes
Please wait while cmake processes CMakeLists.txt files....
...
Variable Name: CMAKE_INSTALL_PREFIX
Description: install prefix
Current Value: /usr/local/mysql
New Value (Enter to keep current value): /home/user/mysql
...
Variable Name: MYSQL_DATADIR
Description: default MySQL data directory
Current Value: /usr/local/mysql/data
New Value (Enter to keep current value): /home/user/mysql/data
...
Variable Name: SYSCONFDIR
Description: config directory (for my.cnf)
Current Value: /usr/local/mysql/etc
New Value (Enter to keep current value): /home/user/mysql/etc
$ make
$ make install
Instead of cmake -i . you can use:
cmake -D MYSQL_DATADIR=/home/user/mysql/data -D SYSCONFDIR=/home/user/mysql/etc -D CMAKE_INSTALL_PREFIX=/home/user/mysql .
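After make install, the data directory still has to be initialized and the server started by hand. A sketch using the example prefix above (mysql_install_db ships in scripts/ of the 5.5 source tree; all paths are the example values, so adjust them to your install):
cd /home/user/mysql
# Create the system tables inside the user-writable data directory
scripts/mysql_install_db --basedir=/home/user/mysql --datadir=/home/user/mysql/data
# Start the server on an unprivileged port; no root needed
bin/mysqld_safe --datadir=/home/user/mysql/data --port=3306 --socket=/home/user/mysql/mysql.sock &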
I imagine this should be possible, but not necessarily easy to do. You would need to build from source and change the Makefile so that the install target points to the user's local directory; additionally, I would think you'd need to adjust other default configuration options in MySQL, which can also be changed via configuration files. In the end, you should be able to launch mysqld as long as you don't bind to any port below 1024 (which is the case anyway, since MySQL runs by default on port 3306).