Currently I'm trying to migrate a TYPO3-based web server to a new machine. (It's my first migration, so please don't judge if I did something wrong.)
What I did so far:
transfer files to the new machine via wget
create a DB dump with mysqldump
transfer the dump with wget
create the database and load the dump with mysql (source <dumpfile.sql>)
create user with access to the db
When I try to connect to the server, TYPO3 doesn't respond.
And when I try to install TYPO3 from scratch and replace the new database with the old one, I also run into internal server errors.
Is there a solution on how to migrate the database correctly?
Yours Sincerely,
Sebastian
Mh,
this should not be an issue in general.
We often use the following steps:
[SRC] Backup database: MYSQL_PWD="DBPASS" mysqldump -uDBUSER --opt -e -Q --skip-comments --single-transaction DBNAME | gzip > dump.sql.gz
[SRC] Pack the installation and the used core: tar -czf transfer.tar.gz ./typo-webfolder ./typo3_src-VERSION
Transfer both .gz files to the new server (wget, scp, ftp, etc.)
[NEW] Extract the files: tar -xzf transfer.tar.gz
[NEW] Create an empty database, using your favourite tool
[NEW] Import database: gunzip <dump.sql.gz | MYSQL_PWD="DBPASS" mysql -uDBUSER [-hDBHOST] NEWDBNAME
[NEW] Adjust the database credentials in `typo3conf/LocalConfiguration.php`
[NEW] Recheck symlinks (typo3_src, typo3, index.php)
[NEW] Recheck the .htaccess files - maybe you missed packing and transferring them?
[NEW] Create the flag file: touch typo-webfolder/typo3conf/ENABLE_INSTALL_TOOL
[NEW] Open the install tool in a web browser ( http://newdomain.tld/typo3/install ), check the requirements, fix the folder structure if needed, and clear all caches
If necessary, clear the typo3temp folder (it is repopulated by the system)
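Put together, the transfer might look like the following sketch (DBNAME, NEWDBNAME, hosts, and paths are placeholders, not values from your setup):
# [SRC] dump the database and pack the installation
MYSQL_PWD="DBPASS" mysqldump -uDBUSER --opt -e -Q --skip-comments --single-transaction DBNAME | gzip > dump.sql.gz
tar -czf transfer.tar.gz ./typo-webfolder ./typo3_src-VERSION
# transfer both archives, e.g. via scp
scp dump.sql.gz transfer.tar.gz user@newhost:/var/www/
# [NEW] unpack and import into the freshly created, empty database
tar -xzf transfer.tar.gz
gunzip <dump.sql.gz | MYSQL_PWD="DBPASS" mysql -uDBUSER NEWDBNAME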
In our projects, we set the DB credentials through AdditionalConfiguration.php based on environment variables (read from a .env file).
So in general there should not be any issues, but without more information it is hard to help you further.
Some things to check:
Proxy/TrustedProxy settings
DomainRecord Settings in the Database ( sys_domain )
RealUrl Config With DomainName based settings
.htaccess Canonical rewrite rules based on domain/hostname
Missing required PHP modules, wrong PHP version, the PHP error log (see the quick checks below)
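For that last item, a few quick checks on the new host (a sketch; the error-log path is an assumption and varies by distro, check the error_log setting in your php.ini):
php -v                               # compare against the PHP version the old host ran
php -m | grep -iE 'mysql|gd|xml'     # confirm the modules TYPO3 needs are present
tail -n 50 /var/log/php_errors.log   # path is an assumption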
In general your workflow is usable (don't forget the filesystem: fileadmin/ and typo3conf/ext/), but there are some traps.
Be sure to delete the corresponding caches for every change in the filesystem or database.
If you transfer the database, make sure you always use UTF-8 encoding for everything!
Regarding the filesystem: there may be thumbnails or other resized images (folder __processed__/), but there are also entries in the database for each file and each resized variant.
All extensions and configuration are cached in typo3temp/Code/*; also keep the autoloader files in mind.
In most cases you can do a clean-up in the install tool.
So the first thing should be:
start the install tool, do all checks, and remove all temporary information.
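From a shell, that cleanup can be sketched like this (run from the site's web root; the folder names follow the ones mentioned above):
touch typo3conf/ENABLE_INSTALL_TOOL   # unlocks the install tool in the browser
rm -rf typo3temp/Code/*               # cached extension code and autoloader files, rebuilt on demand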
Related
I was trying to set up a scheduled task (cron job) on Ubuntu Server 12.04 to perform a daily backup of all my MySQL databases at midnight.
I have installed the well-known Webmin (a nice web interface for managing web servers).
So my issue is: whenever the backup is performed, the files get overwritten!
That means the backup from the day before yesterday is LOST; only the "yesterday" backup is kept!
I have tried setting a dynamic file path like:
/var/www/mysqlbackups/%d-%m-%y
but I had no success with that :(
Can anybody help me?
Thanks a lot, guys.
MySQL Database Server > Module Config > choose Yes on "Do strftime substitution of backup destinations?"
It worked for me! :)
You can use a dynamic path, but first you must enable it in the module config:
System > Filesystem Backup > Module Config > Do strftime substitution of backup destinations? > Yes
(If you are not sure about placeholders, just click on the text "Do strftime substitution of backup destinations?" in the config and help will show you.)
I had the same problem and solved it like this:
From Servers, select MySQL Database Server.
Go to Module Config (top left), and set "Do strftime substitution of backup destinations?" to Yes.
I use the db_%d-%m-%Y.sql format for backups (e.g. db_13-03-2013.sql).
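Since the placeholders are plain strftime codes, you can preview what a pattern expands to from a shell with date:
date +"db_%d-%m-%Y.sql"   # prints e.g. db_13-03-2013.sql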
I made a shell script (not for Webmin) and put it in /etc/cron.daily.
The script makes a backup of the database (stores it as .gz), then uploads it via SSH to another server. For authentication I set up SSH keys, so no password is needed.
The backup files have unique names, so you don't overwrite the backup files.
This is how you can create the filename within the script:
now=`date +%Y%m%d_%H%M`
dst_path=/var/local/backups
filename="$dst_path/$database.$now.sql.gz"
Then you should write a small script that removes all backup files that are older than x days; see the sketch after the backup script below.
#!/bin/sh
#
# Creates a backup of a MySQL database and uses ssh (scp) to send it to another server
# This script should be called from the crontab
PATH=/usr/sbin:/usr/bin:/sbin:/bin
# MySQL user and password
mysql_cmd=/opt/bitnami/mysql/bin/mysqldump
mysql_usr=user_name
mysql_pass=password
# destination ssh
dst_user=user_name
dst_hostname=192.168.1.1
# Database to backup
database=test
# create timestamp
now=`date +%Y%m%d_%H%M`
# where we store the files
dst_path=/var/local/backups
# backup filename
filename="$dst_path/$database.$now.sql.gz"
dst_filename="$database.$now.sql.gz"
# run backup
$mysql_cmd -u "$mysql_usr" --password="$mysql_pass" "$database" | gzip > "$filename"
# upload to server (ssh)
scp "$filename" "$dst_user@$dst_hostname:$dst_filename"
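For the cleanup script mentioned above, a minimal sketch using find (the 14-day retention is an assumed value; adjust to taste):
#!/bin/sh
# remove backups older than 14 days; same path as in the backup script above
find /var/local/backups -name '*.sql.gz' -mtime +14 -delete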
You can do it in the following way:
Webmin > MySQL Database Server > select database > backup Database > file path
In "Other backup options" you can use "Command to run after backup".
Enter: mv /file/path/filename.sql /file/path/filename_$(date +"%Y%m%d%H%M%S").sql
I have MySQL set up correctly on my Linux computer; however, I want a better way to input data into the database besides the terminal. For this reason, I downloaded phpMyAdmin. However, when I try to log in to phpMyAdmin from index.php, it doesn't do anything. It seems to just refresh the page without doing anything. I am putting in the correct MySQL username and password. What is the issue?
Here is a screen shot of what it shows after I click "go".
This is a possible issue when the path for saving PHP sessions is not correctly set:
the directory for storing sessions does not exist, or PHP does not have sufficient rights to write to it.
To define the PHP session directory, simply add the following line to php.ini:
session.save_path="/tmp/php_session/"
And give write permissions to the HTTP server.
Usually the HTTP server runs as user daemon in group daemon. If that is the case, the following commands will do it (note that the directory has to exist first):
mkdir -p /tmp/php_session
chown -R :daemon /tmp/php_session
chmod -R g+rw /tmp/php_session
service httpd restart
Login fails if the session folder is not writable. To check that, create a PHP file in your web directory with:
<?php
// determine the session save path the same way PHP does
if (!($sessionPath = ini_get('session.save_path'))) {
    $sessionPath = isset($_ENV['TMP']) ? $_ENV['TMP'] : sys_get_temp_dir();
}
if (!is_writeable($sessionPath)) {
    echo 'Session directory "' . $sessionPath . '" is not writeable';
} else {
    echo 'Session directory "' . $sessionPath . '" is writeable';
}
If the session folder is not writable, do either
sudo setfacl -R -m u:www-data:rwx <session directory> or sudo chmod -R 777 <session directory>
I am late to the game, but on an Amazon Linux AMI I could not log in to phpMyAdmin; it just kept refreshing the login screen with no errors.
I fixed it with the command below:
sudo chmod -R 755 /var/lib/php/session
I fixed my issue on CentOS 7 with MariaDB and phpMyAdmin downloaded from the official phpMyAdmin site by adding
session.save_path = "/var/lib/php/session"
to /etc/php.ini
and running
chown -R :lighttpd /var/lib/php/session
I also restarted php-fpm and lighttpd afterwards.
In my case the solution was to set an Apache setting properly:
ProxyPassReverseCookiePath
This was required because ProxyPass and ProxyPassReverse were in use, but cookie paths are not rewritten automatically.
It would be great if phpMyAdmin showed something like "session not found" when the password is sent via POST.
Do you have a .htaccess file in one of the parent directories that strips index.php off the URL with a 301 redirect?
301 redirects discard the form data and redirect you as if you hadn't submitted anything, so you get returned to the login page.
So you should create a local .htaccess file in the phpmyadmin directory with the single line RewriteEngine On. This overrides the previous rewrite rule with nothing.
You may need to clear the browser cache as Chrome aggressively caches 301 redirects.
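To create that file from the shell, a minimal sketch (the phpMyAdmin path is an assumption; adjust it to your install):
cat > /var/www/html/phpmyadmin/.htaccess <<'EOF'
RewriteEngine On
EOF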
In my case the hard drive was full.
Use df -h to check the space left on your hard drive, and if you want you can free some space by using the command sudo apt-get clean, which removes installation files.
I hope this will help some future users.
I ran these commands and it worked for me:
sudo service httpd restart
sudo service mysqld stop
sudo service mysqld start
Try searching the web for installation or setup guides for phpMyAdmin. Look at two or three of these and make sure you have covered all the required steps. (If you have already done so, please mention in the question which guides you have followed.)
See if it helps to edit config.inc.php (acecoder mentioned this as well).
Check if this guide is of any help.
Which distro are you on? Try searching for the name of the distro you are using together with "phpMyAdmin guide" or "phpMyAdmin setup howto".
If you encounter errors along the way, post the error text here, if it's short (or paste via a pastebin-like site if it's long).
Are you sure that MySQL is running? I had the same issue after a database import filled up the volume containing the MySQL database. After changing various permissions and clearing sessions, I tried to restart MySQL (/etc/init.d/mysql restart) and it failed because the volume was full. After enlarging /var and starting MySQL successfully, I was able to log into phpMyAdmin just fine.
If you get an error like:
Host 'host_name' is blocked because of many connection errors.
Log in to MySQL as root and run the flush hosts command:
1. mysql -u root -p
2. mysql> FLUSH HOSTS;
After this I was able to log in to phpMyAdmin again.
phpMyAdmin will show errors when login fails. If it doesn't, it means that your setup has an error.
The most likely place to check is your php.ini settings. Since there doesn't seem to be an official list of phpMyAdmin-compatible settings, it's mostly trial and error.
Make sure you have enabled the stuff that needs to be enabled. Also check that you did not enable uncommon php.ini settings (like enable_post_data_reading = Off) because phpMyAdmin assumes them to be "the usual ones".
To ease debugging, start with a clean default php.ini file, then tweak it line by line to see which setting is causing the error. (Don't forget that you need to restart your server after changing the php.ini file for the changes to take effect.)
In my case it was due to an old Apache session.
Stop Apache, clear all pending sessions in your sessions.save_path directory (example: /var/lib/php/session) and restart Apache.
Make sure to set a random key, 32 characters long, as the $cfg['blowfish_secret'] value in config.inc.php. That solved it for me.
I didn't realize I needed to restart MariaDB after modifying config.inc.php:
service mariadb restart
Otherwise, at least in my case, the changes didn't take effect. Also make sure your PHP session directory is writable by the web server (typically session.save_path = "/var/lib/php/session").
I'm trying to create an RPM in Fedora 15 that will install my software, but in order for my software to work correctly once installed, I also need to edit other (configuration) files on the system, add users/groups, etc. Performing some of these tasks is only allowed by the root user. I know to never create an RPM as the root user, and I understand why that is such a bad idea. However, if I add shell script statements to my spec file (%post, %prep... any section) to edit the necessary files, add users/groups, etc., my rpmbuild command fails with message "Permission denied" (not surprisingly).
What's the best way to handle this? Do I have to tell my users to install my package first, and then perhaps run a shell script as root to configure it all? That doesn't seem very elegant. I was hoping to allow a user to do everything with one simple command such as 'yum install mysoftware'.
Much of my research suggests that perhaps this shouldn't even be done via RPM. I've read many parts of Maximum RPM, and lots of other good resources, but haven't found what I'm looking for. I'm new to creating RPMs, but have already been able to successfully create a simple spec file for my software... I just can't get everything configured properly after the package is unzipped and installed to the correct location. Any input is greatly appreciated!
useradd should be run in %pre and shouldn't run during rpmbuild. That's the standard way of doing it. I would recommend the packaging guidelines and specifically the section on users and groups.
The %pre section of your RPM .spec file should check for all the conditions necessary for your software to install.
The %post section of your RPM .spec file should make all the modifications needed for your software to run.
To avoid file permission errors in the %post section of your RPM .spec file, you can set the file permissions and ownership in the %files section. That way, the user who installs the RPM has the appropriate permissions to modify the configuration files.
%install
# Copy files to directories on your installation server
%files
# Set file permissions and ownership on your installation server
%attr(775, myuser, mygroup) /path/to/my/file
%pre
# create the group and user if they do not exist yet (per the packaging guidelines)
getent group mygroup >/dev/null || groupadd -r mygroup
getent passwd myuser >/dev/null || useradd -r -g mygroup -s /sbin/nologin myuser
# all other checks here
%post
# Perform post-installation steps here, like editing other (configuration) files.
echo "Installation complete."
I have a large amount of text data I need to import into MySQL. I'm doing this on a MacBook and don't have enough space for it so I want to store it in an external hard drive (I'm not really concerned about speed at this point - this is just for testing).
What's the best way to do it?
Install MySQL on the external hard drive (is this possible on a Mac?)
Install MySQL on the laptop's hard drive and have the tables on the external (how?)
One simple hack is to create a symbolic link replacing your current MySQL database file location, pointing to the external disk. Google "symbolic link".
Sample usage would be: after you shut down MySQL, rename the old MySQL DB folder to something else, and create the symbolic link using the ln command like below:
ln -s [EXTERNAL DRIVE PATH] [MYSQL DB FOLDER PATH]
Then move all the previous content of the mysql db folder to the new location.
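A sketch of the whole sequence (the paths are placeholders for a typical macOS MySQL install and an external volume; adjust them to your system):
sudo /usr/local/mysql/support-files/mysql.server stop
sudo mv /usr/local/mysql/data /usr/local/mysql/data.old       # keep the old folder as a fallback
sudo mkdir -p /Volumes/ExternalHDD/mysql-data
sudo cp -R /usr/local/mysql/data.old/ /Volumes/ExternalHDD/mysql-data/
sudo chown -R _mysql:_mysql /Volumes/ExternalHDD/mysql-data   # _mysql is an assumption for the mysqld user
sudo ln -s /Volumes/ExternalHDD/mysql-data /usr/local/mysql/data
sudo /usr/local/mysql/support-files/mysql.server start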
Open /etc/mysql/my.cnf and find the value of the datadir. Alternatively, you can find this out in the mysql monitor with
mysql> select @@datadir;
Stop mysql
sudo systemctl stop mysql
Copy the data from there to your external drive
sudo rsync -av /var/lib/mysql /mnt/myHDD/somedir/mysql
Modify the location of the datadir in my.cnf.
Start mysql again
sudo systemctl start mysql
Verify that everything is still fine and remove the original data dir.
This page contains a more extensive guide, but all the additional issues it warns about were not relevant for me on my Raspberry Pi, i.e. I skipped them and it worked.
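Note that rsync without a trailing slash on the source copies the directory itself, so with the command above the data ends up in /mnt/myHDD/somedir/mysql/mysql. The my.cnf change is then just the datadir line, e.g. with sed:
sudo sed -i 's|^datadir.*|datadir = /mnt/myHDD/somedir/mysql/mysql|' /etc/mysql/my.cnf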
For the second option, a tablespace might do the trick:
http://dev.mysql.com/doc/refman/5.1/en/create-tablespace.html
User user658991's answer is halfway there.
After adding the soft link, you will need to add the following lines to /etc/apparmor.d/usr.sbin.mysqld, beneath the two lines for the old MySQL folder:
/path/to/mysql/folder/on/the/external/ r,
/path/to/mysql/folder/on/the/external/** rwk,
Without these two lines, MySQL fails to start, complaining:
Can't create test file /path/to/mysql/folder/on/the/external/hostname.lower-test
Can't create test file /path/to/mysql/folder/on/the/external/hostname.lower-test
mysqld: Can't change dir to '/path/to/mysql/folder/on/the/external/' (Errcode: 13)
Restart apparmor for the changes to take effect.
sudo invoke-rc.d apparmor restart
With this, MySQL starts normally.
I use GVIM on Ubuntu 9.10. I'm looking for the right way to configure GVIM to be able to edit remote files (HTML, PHP, CSS), for example over FTP.
When I use :e scp://username@remotehost/./path/to/file I get: Error detected while processing BufEnter Auto commands for "*": E472: Command failed.
When I open a remote file via Dolphin or Nautilus, I cannot use other files with NERDTree.
Finally, when I edit a remote file via Dolphin, the permissions change to "accès interdit" (access denied).
So how can I use GVIM to edit remote files as if they were on my localhost?
I've found running the filesystem over ssh (by means of sshfs) a better option than having the editor handle that stuff or running the editor itself over an ssh tunnel.
So you need to
apt-get install sshfs
and then
sshfs remoteuser@remotehost:/remote/path /local/mountpoint
And that will let you edit your remote files as if they were on your local file system.
To make it even smoother you can add a line to /etc/fstab
sshfs#remoteusername@remotehost:/remote/path /local/mountpoint fuse user,noauto
For some reason I find that I have to use fusermount -u /local/mountpoint rather than just umount /local/mountpoint when experimenting with this. Maybe that's just my distro.
Recently I've also noted that the mounting user must be in the fuse group. So:
sudo addgroup <username> fuse
Another popular option, of course, would be to run vim (rather than gvim) inside a GNU Screen session on one machine and connect to that session via SSH from wherever you happen to be: code along all day at work, and in the evening you ssh into your office computer, reattach to your GNU Screen session, and pick up exactly where you left off (a sketch follows below). I used to find the richer color palette to be the only thing I really missed from gvim when using vim, but even that can be fixed thanks to a fork of urxvt that lets you customize the entire 256-position color palette, not just the first 16 positions that most terminal emulators let you customize.
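A sketch of that workflow (host and session names are placeholders):
# at the office: run vim inside a named screen session
screen -S coding vim
# from home: reattach to the same session over ssh, detaching it elsewhere if needed
ssh user@office-host -t 'screen -rd coding'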
There is one way, and that is using the remote host's copy of gvim, using SSH to forward the X11 client to you, like so:
user@local:~/$ ssh -X user@host
...
user@host:~/$ gvim file
The latter command should open gvim on your desktop. Of course, this relies on the remote host having X11 / gnome / gvim installed in the first place, which might not be the solution you're looking for / an option in your case.
Note: X11 forwarding can be a security risk.
In order for netrw to work seamlessly, I believe you need to not be in compatibility mode.
Try
:set nocompatible
then
:edit scp://host/path/to/file
Try this
:e scp://username@remotehost//path/to/file
Note that the use of // after remotehost is intentional; it gives the absolute path to your file.
:)
http://www.celsius1414.com/2009/08/19/how-to-edit-remote-files-with-local-vim/
The vim tips wiki has an article on this, Editing remote files via scp in vim.
EDIT: Key authentication is not necessary for opening files over ssh; Vim will prompt for the password.
It would be useful to note if netrw.vim was loaded by vim when it started.
:echo exists("g:loaded_netrwPlugin")
For opening files over ssh, you need your local machine's public key in the server's authorized keys. The following help section in the Vim documentation explains it pretty well:
:help netrw-ssh-hack
A quick way to export the public key is to use ssh-copy-id (if available):
ssh-copy-id user@host
And have a look at netrw documentation for network file editing over other protocols.
:help netrw
HTH.
According to the docs BufEnter is processed after the file has been read and the buffer created, so my guess is that netrw successfully read the file but you have a plugin that assumes the file is on the local filesystem and is trying to access it, e.g. to run ctags.
Try disabling all your plugin scripts except the default Vim ones, and then editing the file.
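One way to do that while keeping the bundled netrw plugin loaded (a sketch; netrw lives under $VIMRUNTIME, while your own plugins live under ~/.vim):
# start gvim without your vimrc and with your personal plugin directories removed
gvim -N -u NORC --cmd 'set rtp-=~/.vim' --cmd 'set rtp-=~/.vim/after'
# then retry the remote edit from inside gvim:
#   :e scp://username@remotehost//path/to/file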
Also, try editing a directory to see if netrw can read that - you need to put the / on the end so that netrw knows it is a dir.
About your command :e scp://username@remotehost/./path/to/file : note that with netrw, scp paths are taken relative to your home directory on the remote host. To avoid home-relative pathing, drop that "."; i.e. :e scp://username@remotehost//path/to/file .
To accomplish this on Windows, download/install the Dokan library and Dokan SSHFS, which are the first and last links on this page.
I didn't think you would be able to directly edit a remote file using GVIM running locally. However, as others have pointed out, this is definitely possible. This looks very interesting; I will check it out. I will leave the rest of my post here, in case it is useful to anyone else, as an alternative method. This method works even if you don't have SSH access to the file (i.e. you only have FTP, or S3, or whatever).
You may get that effect, though, by tying GVIM into a graphical file transfer application. For example, on OS X, I use CyberDuck to transfer files (FTP, SFTP, etc). Then, I have it configured to use GVIM as my editor, so I can just double-click on a file in the remote listing, and CyberDuck will download a copy of that remote file, and open it in GVIM. When I save it in GVIM, CyberDuck uploads the file back to the remote host.
I'm sure that this functionality is not unique to CyberDuck, and is probably present in most nicer file transfer utilities.