How to reinitialize the database - MySQL

I have downloaded a demo copy of Hybris for evaluation purposes. It has been more than 30 days since I downloaded it, and when I recently tried to restart it, it would not start and instead gave me the following message:
"This licence is only for demo or develop usage and is valid for 30 days.
After this time you have to reinitialize database to continue your work."
I have been running it on a Mac, and the database is MySQL.
What (UNIX) commands do I use to re-initialise the database, so that I can start up the Hybris Server?

Using the command line in the Terminal application, go to YOURPATH/hybris/bin/platform, run ant clean all and then ant initialize, and then start Hybris:
1) Go to your platform directory
cd $YOURPATH/hybris/bin/platform
2) Set ant's environment by running "dot" "space" "dot-slash" setantenv.sh
. ./setantenv.sh
3) Then run ant clean all (to clean the environment)
ant clean all
4) Then run ant initialize (to re-initialize the environment)
ant initialize
5) Restart the Hybris server process by running hybrisserver.sh
./hybrisserver.sh
6) have a nice rest of your day! (if this helped you then please give an UP vote - thanks!)
:)

You can run the Ant command ant initialize and the error will go away.

ant initialize removes the tables that exist in the Hybris items.xml files. If you want to reset your DB, I have a script that I use across various projects (it can be found on GitHub):
#!/bin/bash
MUSER="$1"
MPASS="$2"
MDB="$3"
# Detect paths
MYSQL=$(which mysql)
AWK=$(which awk)
GREP=$(which grep)
if [ $# -ne 3 ]
then
echo "Usage: $0 {MySQL-User-Name} {MySQL-User-Password} {MySQL-Database-Name}"
echo "Drops all tables from a MySQL"
exit 1
fi
TABLES=$($MYSQL -u "$MUSER" -p"$MPASS" "$MDB" -e 'show tables' | $AWK '{ print $1}' | $GREP -v '^Tables')
for t in $TABLES
do
echo "Deleting $t table from $MDB database..."
$MYSQL -u "$MUSER" -p"$MPASS" "$MDB" -e "drop table \`$t\`"
done
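For example, assuming the script is saved as drop_all_tables.sh (the filename and credentials below are placeholders), it could be run like this before re-initializing Hybris:
chmod +x drop_all_tables.sh
./drop_all_tables.sh hybrisuser hybrispass hybrisdb
After the tables are dropped, run ant initialize again to rebuild the schema.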

You need to reinitialize (ant initialize) and rebuild Hybris (ant all) as you did the first time.
Reason: the evaluation copy works only for 30 days and expires after that.
When you start your server, the console will show a message to that effect.

You can also use the Hybris Administration Console to run the initialization:
Platform -> Initialization

Related

Shell script to check if mysql is up or down

I want a bash shell script that I can run via a cron job to check whether MySQL on a remote server is running. If it is, do nothing; otherwise start the server.
The cron job will check the remote server for a live (or not) MySQL every minute. I can write the cron job myself, but I need help with the shell script that checks whether a remote MySQL is up or down. The response after the check is not important, but the check itself is.
You can use the script below:
#!/bin/bash
USER=root
PASS=root123
mysqladmin -h remote_server_ip -u$USER -p$PASS processlist ### the user needs MySQL permissions on the remote server; ideally use a different user than root
if [ $? -eq 0 ]
then
echo "do nothing"
else
ssh remote_server_ip "service mysqld start" ### requires passwordless SSH access to the remote server (e.g. key-based authentication)
fi
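To run this check every minute as the question describes, a crontab entry along these lines should work (the path /root/check_mysql.sh is just an example name for the script above):
* * * * * /root/check_mysql.sh >> /var/log/mysql_check.log 2>&1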
The script in the selected answer works great, but requires that you have the MySQL client installed on the local host. I needed something similar for a Docker container and didn't want to install the MySQL client. This is what I came up with:
# check for a connection to the database server
#
check=$(wget -O - -T 2 "http://$MYSQL_HOST:$MYSQL_PORT" 2>&1 | grep -o mariadb)
while [ -z "$check" ]; do
# wait a moment
#
sleep 5s
# check again
#
check=$(wget -O - -T 2 "http://$MYSQL_HOST:$MYSQL_PORT" 2>&1 | grep -o mariadb)
done
This is a little different, in that it will loop until a database connection can be made. I am also using MariaDB instead of the stock MySQL database. You can change this by changing the grep -o mariadb to something else - I'm not sure what MySQL returns on a successful connection, so you'll have to play with it a bit.
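If you only need to know whether the port is accepting connections at all (regardless of whether the banner says mariadb), a rough alternative is to probe the TCP port with bash's /dev/tcp pseudo-device. This is only a sketch and requires bash, not plain sh:
# loop until a TCP connection to the database port succeeds
until (echo > "/dev/tcp/$MYSQL_HOST/$MYSQL_PORT") 2>/dev/null; do
echo "waiting for $MYSQL_HOST:$MYSQL_PORT..."
sleep 5
done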

Mysql Auto Backup on ubuntu server

After months of trying to get this to happen, I found a shell script that will get the job done.
Here's the code I'm working with:
#!/bin/bash
### MySQL Server Login Info ###
MUSER="root"
MPASS="MYSQL-ROOT-PASSWORD"
MHOST="localhost"
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"
BAK="/backup/mysql"
GZIP="$(which gzip)"
### FTP SERVER Login info ###
FTPU="FTP-SERVER-USER-NAME"
FTPP="FTP-SERVER-PASSWORD"
FTPS="FTP-SERVER-IP-ADDRESS"
NOW=$(date +"%d-%m-%Y")
### See comments below ###
### [ ! -d $BAK ] && mkdir -p $BAK || /bin/rm -f $BAK/* ###
[ ! -d "$BAK" ] && mkdir -p "$BAK"
DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
for db in $DBS
do
FILE=$BAK/$db.$NOW-$(date +"%T").gz
$MYSQLDUMP -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
done
lftp -u $FTPU,$FTPP -e "mkdir /mysql/$NOW;cd /mysql/$NOW; mput /backup/mysql/*; quit" $FTPS
Everything is running great, however there are a few things I'd like to fix but I am clueless when it comes to shell scripts. I'm not asking anyone to write it for me, just some pointers. First of all, the /backup/mysql directory on my server accumulates files every time it backs up. Not too big of a deal, but after a year of nightly backups it might get a little full, so I'd like it to clear that directory after uploading. Also, I don't want to overload my hosting service with files, so I'd like it to clear the remote server's directory before uploading. Lastly, I would like it to upload to a subdirectory on the remote server such as /mysql.
Why reinvent the wheel? You can just use Debian's automysqlbackup package (should be available on Ubuntu as well).
As for cleaning old files, the following command might be of help:
find /mysql -type f -mtime +16 -delete
Uploading to the remote server can be done with the scp(1) command.
To avoid a password prompt, read about SSH public key authentication.
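Putting those pointers together, the upload-and-clean-up part could look roughly like this (backupuser, backuphost and the /mysql target directory are placeholders, and SSH key authentication is assumed to be set up already):
NOW=$(date +"%d-%m-%Y")
# create a dated subdirectory on the remote host and copy the dumps there
ssh backupuser@backuphost "mkdir -p /mysql/$NOW"
scp /backup/mysql/*.gz backupuser@backuphost:/mysql/$NOW/
# clear the local backup directory only if the copy succeeded
[ $? -eq 0 ] && rm -f /backup/mysql/*.gz
# prune remote backups older than 16 days instead of wiping the whole remote directory
ssh backupuser@backuphost "find /mysql -type f -mtime +16 -delete"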
Take a look at Backup; it allows you to model your backup jobs using a Ruby DSL and is very powerful.
It supports multiple DBs and the most popular online storage services, and has lots of other nice features.

Google Compute Engine Startup Script Can't Execute

I am having trouble getting the following startup-script to execute properly when launching a Compute Engine Instance (GCE).
#! /bin/bash
# setup vncserver
vnc4server -geometry 1440x900 :1
export DISPLAY=:1
echo "completed"
The script is read by GCE but does not execute the commands; the log shows blank lines with a message in between, which seems to be the key to the problem, but I can't figure it out.
Log shows the following:
Feb 3 09:15:33 simpleapache3 startupscript: Running startup script /var/run/google.startup.script
Feb 3 09:15:34 simpleapache3 startupscript:
Feb 3 09:15:34 simpleapache3 startupscript: You will require a password to access your desktops.
Feb 3 09:15:34 simpleapache3 startupscript:
How do I get around the "You will require a password..." section?
Tried:
I tried adding a password inside the script like this, but no luck...
#! /bin/bash
#setup vncserver
vnc4server -geometry 1440x900 :1
myPassword123
export DISPLAY=:1
echo "completed"
Notes:
I have got VNC4SERVER already installed on the persistent disk I am adding.
If I ssh into the instance and run the commands manually they work perfectly and I am not asked for a password.
Any help would be greatly appreciated...
I suspect this is because the startup scripts run as root rather than your user.
This script works for me:
#! /bin/bash
echo "I am: " `whoami`
sudo -u briandorsey DISPLAY=:1 vnc4server -geometry 1440x900 :1
echo "completed"
Replace briandorsey with your username.
Also, don't forget to create a firewall rule to allow vnc traffic. This can be done via the Console or with gcutil:
gcutil addfirewall vnc2 --allowed=tcp:5901
This will allow traffic on port 5901 to all virtual machines in your project. See the firewall docs for information on how to limit access further.

Issues with MySQL restart on running through a crontab scheduler

I have written a shell script which starts MySQL when it has been killed/terminated. I am running this shell script using a crontab.
My cron entry runs the script file named mysql.sh located at /root/mysql.sh:
sh /root/mysql.sh
mysql.sh:
cd /root/validate-mysql-status
sh /root/validate-mysql-status/validate-mysql-status.sh
validate-mysql-status.sh:
# mysql root/admin username
MUSER="xxxx"
# mysql admin/root password
MPASS="xxxxxx"
# mysql server hostname
MHOST="localhost"
MSTART="/etc/init.d/mysql start"
# path mysqladmin
MADMIN="$(which mysqladmin)"
# see if MySQL server is alive or not
# 2>&1 could be better but I would like to keep it simple
$MADMIN -h $MHOST -u $MUSER -p${MPASS} ping 2>/dev/null 1>/dev/null
if [ $? -ne 0 ]; then
# MySQL's status log file
MYSQL_STATUS_LOG=/root/validate-mysql-status/mysql-status.log
# If log file not exist, create a new file
if [ ! -f $MYSQL_STATUS_LOG ]; then
cat "Creating MySQL status log file.." > $MYSQL_STATUS_LOG
now="$(date)"
echo [$now] error : MySQL not running >> $MYSQL_STATUS_LOG
else
now="$(date)"
echo [$now] error : MySQL not running >> $MYSQL_STATUS_LOG
fi
# Restarting MySQL
/etc/init.d/mysql start
now1="$(date)"
echo [$now1] info : MySQL started >> $MYSQL_STATUS_LOG
cat $MYSQL_STATUS_LOG
fi
When I run the above MySQL shell script manually using webmin's crontab, MySQL starts successfully (when it has been killed).
However, when I schedule it using a cron job, MySQL doesn't start. The logs are printed properly (which means my cron runs the scheduled script successfully), but MySQL is not restarting.
crontab -l displays:
* * * * * sh /root/mysql.sh
I found from various URLs that we should give the absolute path to restart MySQL through schedulers like cron. However, that hasn't worked for me.
Can anyone please help me?
Thank you.
First, a crontab entry normally looks like this:
* * * * * /root/mysql.sh
So remove the surplus sh, put a shebang at the beginning of the script - #!/bin/bash I suppose (why are you referring to sh instead of bash?) - and don't forget to set execute permission on the file (chmod +x /root/mysql.sh).
Second, running scripts from crontab is tricky, because the environment is different! You have to set it up manually. Start with PATH: go to a console, run echo $PATH, and copy-paste the result into an export PATH=<your path> line in your cron script:
mysql.sh:
#!/bin/bash
export PATH=.:/bin:/usr/local/bin:/usr/bin:/opt/bin:/usr/games:./:/sbin:/usr/sbin:/usr/local/sbin
{
cd /root/validate-mysql-status
/root/validate-mysql-status/validate-mysql-status.sh
} >> OUT 2>> ERR
Note that I also redirected all the output to files so that you don't receive emails from cron.
The problem is knowing which other variables (besides PATH) matter. Go through set | less and try to figure out which variables might also need to be set in the cron script. If there are any MySQL-related variables, you must set them! You can also examine the cron script's environment by adding set > cron.env to the cron script and then diffing it against your console environment to look for significant differences.
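For example, the comparison could look roughly like this (file paths are arbitrary):
# inside the cron script: capture the environment cron sees
set > /tmp/cron.env
# later, from an interactive shell: capture your login environment and compare the two
set > /tmp/shell.env
diff /tmp/shell.env /tmp/cron.env | less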

Automatically Backup MySQL database on linux server

I need a script that automatically makes a backup of a MySQL database. I know there are a lot of posts and scripts out there on this topic already, but here is where mine differs.
The script needs to run on the machine hosting the MySql database (It is a linux machine).
The backups must be saved onto the same server that the database is on.
A backup needs to be made every 30 minutes.
When a backup is older than a week it is deleted, unless it is the very first backup created that week. I.e. out of the backups backup_1_12_2010_0-00_Mon.db, backup_1_12_2010_0-30_Mon.db, backup_1_12_2010_1-00_Mon.db ... backup_7_12_2010_23-30_Sun.db, only backup_1_12_2010_0-00_Mon.db is kept.
Anyone have anything similar or any ideas where to start?
Answer: A cron
Description:
Try creating a file something.sh with this:
#!/bin/sh
mysqldump -u root -ppwd --opt db1 > /respaldosql/db1.sql
mysqldump -u root -ppwd --opt db2 > /respaldosql/db2.sql
cd /respaldosql/
tar -zcvf backupsql_$(date +%d%m%y).tgz *.sql
find . -name '*.tgz' -type f -mtime +2 -exec rm -f {} \;
Give the file the adequate permissions:
chmod 700 something.sh
or
sudo chmod 700 something.sh
and then create a cron with
crontab -e
setting it like
0 1 * * * /home/youruser/coolscripts/something.sh
Remember that the numbers or '*' characters have this structure:
Minutes (range 0-59)
Hours (0-23)
Day of month (1-31)
Month (1-12)
Day of the week (0-6, with 0 = Sunday)
Absolute path to script or program to run
You can also use the helper folder available in newer versions of linux distros, where you find /etc/cron.daily, /etc/cron.hourly, /etc/cron.weekly, etc. In this case, you can create a symlink to your script into the chosen folder and OS will take charge of running it with the promised recurrence (from a powerful comment by #Nick).
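For example, assuming the script lives at /home/youruser/coolscripts/something.sh, the symlink could be created like this (note that run-parts, which executes these folders, typically skips file names containing dots, hence no .sh in the link name):
sudo ln -s /home/youruser/coolscripts/something.sh /etc/cron.daily/mysql-backup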
Create a shell script like the one below:
#!/bin/bash
mysqldump -u username -p'password' dbname > /my_dir/db_$(date +%m-%d-%Y_%H-%M-%S).sql
find /my_dir -mtime +10 -type f -delete
Replace username, password and your backup directory (my_dir). Save it in a directory (shell_dir) as filename.sh.
Schedule it to run everyday using crontab -e like:
30 8 * * * /shell_dir/filename.sh
This will run every day at 8:30 AM and back up the database. It also deletes backups older than 10 days. If you don't want that, just delete the last line of the script.
Doing pretty much the same as many people:
The script needs to run on the machine hosting the MySql database (It is a linux machine).
=> Create a local bash or perl script (or whatever) "myscript" on this machine "A"
The backups must be saved onto the same server that the database is on.
=> in the script "myscript", you can just use mysqldump. From the local backup, you may create a tarball that you send via scp to your remote machine. Finally you can put your backup script into the crontab (crontab -e).
Some hints and functions to get you started, as I won't post my entire script; it does not fully do the trick, but it's not far away:
#!/bin/sh
...
MYSQLDUMP="$(which mysqldump)"
FILE="$LOCAL_TARBALLS/$TARBALL/mysqldump_$db-$SNAPSHOT_DATE.sql"
$MYSQLDUMP -u $MUSER -h $MHOST -p$MPASS $db > $FILE && $GZIP $GZ_COMPRESSION_LEVEL $FILE
function create_tarball()
{
local tarball_dir=$1
tar -zpcvf $tarball_dir"_"$SNAPSHOT_DATE".tar.gz" $tarball_dir >/dev/null
return $?
}
function send_tarball()
{
local PROTOCOLE_="2"
local IPV_="4"
local PRESERVE_="p"
local COMPRESSED_="C"
local PORT="-P $DESTINATION_PORT"
local EXECMODE="B"
local SRC=$1
local DESTINATION_DIR=$2
local DESTINATION_HOST=$DESTINATION_USER"@"$DESTINATION_MACHINE":"$DESTINATION_DIR
local COMMAND="scp -$PROTOCOLE_$IPV_$PRESERVE_$COMPRESSED_$EXECMODE $PORT $SRC $DESTINATION_HOST &"
echo "remote copy command: "$COMMAND
[[ $REMOTE_COPY_ACTIVATED = "Yes" ]] && eval $COMMAND
}
Then to delete files older than "date", you can look at man find and focus on the mtime and newer options.
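As a minimal illustration of those find options (path, pattern and retention period are arbitrary here):
# delete tarballs older than 7 days
find /backup/tarballs -name '*.tar.gz' -type f -mtime +7 -delete
# or delete anything not newer than a given reference file
find /backup/tarballs -name '*.tar.gz' -type f ! -newer /backup/tarballs/keep_after_this.marker -delete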
Edit: as said earlier, there is no particular interest in doing a local backup except as a temporary file, so that you can send a tarball easily and delete it once it has been sent.
You can do most of this with a one-line cronjob set to run every 30 minutes:
mysqldump -u<user> -p<pass> <database> > /path/to/dumps/db.$(date +%a.%H:%M).dump
This will create a database dump every 30 minutes, and every week it'll start overwriting the previous week's dumps.
Then have another cronjob that runs once a week that copies the most recent dump to a separate location where you're keeping snapshots.
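In crontab terms, the two jobs described above could look roughly like this (paths and credentials are illustrative, and note that % must be escaped as \% inside a crontab entry):
# every 30 minutes: rolling dump whose name repeats weekly (day + time), so last week's dumps get overwritten
*/30 * * * * mysqldump -u<user> -p<pass> <database> > /path/to/dumps/db.$(date +\%a.\%H:\%M).dump
# once a week (Monday 00:05): copy the newest dump into a snapshots directory
5 0 * * 1 cp "$(ls -t /path/to/dumps/db.*.dump | head -1)" /path/to/snapshots/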
After briefly reading the question and the good answers, I would add a few more points. Some of them have been mentioned already.
The backup process can involve next steps:
Create a backup
Compress the backup file
Encrypt the compressed backup
Send the backup to cloud storage (Dropbox, OneDrive, Google Drive, Amazon S3, ...)
Get a notification about results
Setup a schedule to run the backup process periodically
Delete the old backup files
Putting together a script that covers all the backup steps takes some effort and knowledge.
I would like to share a link to an article (I'm one of the writers) which describes the most common ways to back up MySQL databases in some detail:
Bash script
# Backup storage directory
backup_folder=/var/backups
# Notification email address
recipient_email=<username@mail.com>
# MySQL user
user=<user_name>
# MySQL password
password=<password>
# Number of days to store the backup
keep_day=30
sqlfile=$backup_folder/all-database-$(date +%d-%m-%Y_%H-%M-%S).sql
zipfile=$backup_folder/all-database-$(date +%d-%m-%Y_%H-%M-%S).zip
# Create a backup
sudo mysqldump -u $user -p$password --all-databases > $sqlfile
if [ $? == 0 ]; then
echo 'Sql dump created'
else
echo 'mysqldump return non-zero code' | mailx -s 'No backup was created!' $recipient_email
exit
fi
# Compress backup
zip $zipfile $sqlfile
if [ $? == 0 ]; then
echo 'The backup was successfully compressed'
else
echo 'Error compressing backup' | mailx -s 'Backup was not created!' $recipient_email
exit
fi
rm $sqlfile
echo $zipfile | mailx -s 'Backup was successfully created' $recipient_email
# Delete old backups
find $backup_folder -mtime +$keep_day -delete
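The script above covers the dump, compression, notification and cleanup steps; the encryption step from the list could be added with gpg along these lines (backup@example.com is a placeholder for a recipient key already present in the keyring):
# encrypt the compressed backup for a specific recipient, producing $zipfile.gpg
gpg --encrypt --recipient backup@example.com "$zipfile"
# keep only the encrypted copy
rm "$zipfile"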
Automysqlbackup
sudo apt-get install automysqlbackup
Or install the latest version from GitHub:
wget https://github.com/sixhop/AutoMySQLBackup/archive/master.zip
mkdir /opt/automysqlbackup
mv master.zip /opt/automysqlbackup
cd /opt/automysqlbackup
unzip master.zip
cd AutoMySQLBackup-master
./install.sh
sudo nano /etc/automysqlbackup/automysqlbackup.conf
CONFIG_configfile="/etc/automysqlbackup/automysqlbackup.conf"
CONFIG_backup_dir='/var/backup/db'
CONFIG_mysql_dump_username='root'
CONFIG_mysql_dump_password='my_password'
CONFIG_mysql_dump_host='localhost'
CONFIG_db_names=('my_db')
CONFIG_db_exclude=('information_schema')
CONFIG_mail_address='mail@google.com'
CONFIG_rotation_daily=6
CONFIG_rotation_weekly=35
CONFIG_rotation_monthly=150
automysqlbackup /etc/automysqlbackup/automysqlbackup.conf
Third-party tools are also covered in the article.
Hope it is helpful!
My preference is for AutoMySQLBackup, which comes with Debian. It's really easy to use and creates daily backups, which can be configured. It also keeps weekly and monthly backups.
I have had this running for a while and it's super easy to configure and use!
You might consider the open-source tool matiri (https://github.com/AAFC-MBB/matiri), which is a concurrent MySQL backup script with metadata stored in Sqlite3. Features (more than you were asking for...):
Multi-Server: Multiple MySQL servers are supported whether they are co-located on the same or separate physical servers.
Parallel: Each database on the server to be backed up is done separately, in parallel (concurrency settable: default: 3)
Compressed: Each database backup compressed
Checksummed: SHA256 of each compressed backup file stored and the archive of all files
Archived: All database backups tar'ed together into single file
Recorded: Backup information stored in Sqlite3 database
Full disclosure: original matiri author.