I am trying to use Composer for my WordPress workflow and was wondering if there is a way for Composer to grab a MySQL database from my S3 bucket? The idea here is that I want to develop WordPress websites locally, starting from a backup copy of a database. I was hoping to find a way to automate this through Composer.
You can define custom scripts for Composer and tell it what you want to do.
For example, if you wanted to pull the dump from S3 and import it using the mysql command-line utility, you could add something like this to composer.json:
"scripts": {
"refresh-db": "aws s3 cp s3://my-bucket/db-dump.sql /tmp/db-dump.sql && mysql -hlocalhost -uroot my_db_name < /tmp/db-dump.sql"
}
Then run composer refresh-db to execute it.
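For readability you can also split the script into an array of commands; Composer runs them in order and stops at the first one that fails. A minimal sketch of a full composer.json, assuming hypothetical bucket, dump, and database names, and that the aws CLI and mysql client are installed and configured locally:

{
    "name": "acme/wp-site",
    "scripts": {
        "refresh-db": [
            "aws s3 cp s3://my-bucket/db-dump.sql /tmp/db-dump.sql",
            "mysql -hlocalhost -uroot my_db_name < /tmp/db-dump.sql"
        ]
    }
}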
I have been trying to deploy my app on Heroku.
I first used the command git push heroku master
and then started the node server using heroku run node server.js
However, I get an error. Can anyone help me with this?
I've run into this error before using IBM DB2. What you have to do is make a new directory locally and log in to Heroku using git in that new directory. Once you've done that, copy each folder and file from your old directory into the new one (you can cp ../olddirectory/app.js, etc.), including the package.json and package-lock.json. Once you do all that, push it, and it should work.
This sounds so stupid, but it works. every. freaken. time. I do this with all my projects, as they all use DB2. Let me know if you need any more help.
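A rough sketch of that workflow, assuming the Heroku CLI is installed and an app name of my-app (hypothetical):

# create a fresh directory and attach it to the existing Heroku app
mkdir fresh-copy && cd fresh-copy
git init
heroku git:remote -a my-app

# copy files and folders from the old checkout, including package.json and
# package-lock.json (copy any needed dotfiles separately so the old .git
# directory does not come along)
cp -R ../olddirectory/* .

# commit the clean tree and push it
git add .
git commit -m "Redeploy from clean directory"
git push heroku master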
I would like to create a Dockerfile in order to build a container that already has MySQL installed and my databases created.
I have an sql folder that contains my *.sql files and a script folder that contains my db_builder.sh script that does all the work I need (create the databases, import the needed sql files, etc...).
The only thing I'm missing is to run the mysql server before the db_builder.sh script runs. Also I need to know what would be the default password of the root user.
FROM ubuntu:18.04
ADD sql src/sql
ADD scripts src/scripts
RUN apt-get update && apt-get install mysql-server -y
# somehow start mysql ???
RUN src/scripts/db_builder.sh
I solved my issue by:
1) creating the Dockerfile FROM the MySQL image instead of the Ubuntu image
2) splitting my db_builder.sh into two scripts:
- prepare_sql_files.sh -> which prepares the needed sql files to be imported
- db_import.sh -> which actually does the import
3) running (RUN) prepare_sql_files.sh in the Dockerfile, while just placing (ADD) db_import.sh in /docker-entrypoint-initdb.d, because of this feature of the mysql docker image:
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. You can easily populate your mysql services by mounting a SQL dump into that directory and provide custom images with contributed data. SQL files will be imported by default to the database specified by the MYSQL_DATABASE variable.
So my dockerfile now looks like this:
FROM mysql:latest
ADD sql /src/sql
ADD scripts /src/scripts
RUN /src/scripts/prepare_sql_files.sh
ADD scripts/db_import.sh /docker-entrypoint-initdb.d/
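To answer the root-password part: the official mysql image has no hidden default; you set the credentials when you first run the container. A usage sketch with hypothetical image and database names:

# build the image from the Dockerfile above
docker build -t my-mysql-image .

# MYSQL_ROOT_PASSWORD sets the root password; MYSQL_DATABASE is the database
# that the /docker-entrypoint-initdb.d scripts are imported into by default
docker run -d --name my-mysql \
    -e MYSQL_ROOT_PASSWORD=secret \
    -e MYSQL_DATABASE=my_db_name \
    -p 3306:3306 \
    my-mysql-image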
Currently I am trying to migrate a TYPO3-based web server to a new machine. (It's my first migration, so please don't judge if I did something wrong.)
What I did so far:
transferred the files to the new machine via wget
created a DB dump with mysqldump
transferred the dump with wget
created the database with mysql (source <dumpfile.sql>)
created a user with access to the DB
When I try to connect to the server, TYPO3 doesn't respond.
And when I try to install TYPO3 from scratch and replace the new database with the old one, I also run into internal server errors.
Is there a solution on how to migrate the database correctly?
Yours Sincerely,
Sebastian
Mh,
this should not be an issue in general.
We often use the following steps:
[SRC] Backup the database: MYSQL_PWD="DBPASS" mysqldump -uDBUSER --opt -e -Q --skip-comments --single-transaction=true DBNAME | gzip > dump.sql.gz
[SRC] Pack the installation and the used core: tar -czf transfer.tar.gz ./typo-webfolder ./typo3_src-VERSION
Transfer both .gz files to the new server (wget, scp, ftp, etc.)
[NEW] Unpack the files: tar -xzf transfer.tar.gz
[NEW] Create an empty database, using your favourite tool
[NEW] Import the database: gunzip < dump.sql.gz | MYSQL_PWD="DBPASS" mysql -uDBUSER [-hDBHOST] NEWDBNAME
[NEW] Adjust the database credentials in typo3conf/LocalConfiguration.php
[NEW] Recheck the symlinks (typo3_src, typo3, index.php) - see the sketch after this list
[NEW] Recheck the .htaccess files - maybe they were missed when packing and transferring?
[NEW] Create the flag file: touch typo-webfolder/typo3conf/ENABLE_INSTALL_TOOL
[NEW] Open the install tool in a web browser ( http://newdomain.tld/typo3/install ), check the requirements, maybe fix the folder structure and so on, and clear all caches
If necessary, clear the typo3temp folder (it can be repopulated by the system)
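For the symlink step, a classic TYPO3 source layout looks roughly like this (the version number is a placeholder):

cd typo-webfolder
ln -sfn ../typo3_src-VERSION typo3_src
ln -sfn typo3_src/typo3 typo3
ln -sfn typo3_src/index.php index.php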
In our projects we set the DB credentials through AdditionalConfiguration.php based on environment variables (read from a .env file).
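A minimal sketch of such a typo3conf/AdditionalConfiguration.php, assuming the TYPO3 8+ connection layout and hypothetical variable names that are already present in the environment:

<?php
// override the database credentials from environment variables
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['host']     = getenv('TYPO3_DB_HOST');
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['dbname']   = getenv('TYPO3_DB_NAME');
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['user']     = getenv('TYPO3_DB_USER');
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['password'] = getenv('TYPO3_DB_PASSWORD');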
So in general there should not be any issues, but without more information it is hard to help you further.
Some things to check:
Proxy/TrustedProxy settings
DomainRecord Settings in the Database ( sys_domain )
RealUrl Config With DomainName based settings
.htaccess Canonical rewrite rules based on domain/hostname
Missing required PHP modules, wrong PHP version; check the PHP error log
In general your workflow is usable (don't forget the filesystem: fileadmin/ and typo3conf/ext/),
but there are some traps.
Be sure to delete the corresponding caches for all changes in the filesystem or database.
If you transfer the database: make sure you always use UTF-8 encoding for everything!
Regarding the filesystem: there could be thumbnails or other resized images (folder __processed__/), but there are also entries in the database for each file and each resize.
All extensions and configuration are cached in typo3temp/Code/*; also keep the autoloader files in mind.
In most cases you can do a clean-up in the install tool.
So the first thing should be:
start the install tool, do all checks, and remove all temporary information.
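If you prefer the shell over the install tool, a rough cleanup sketch (the paths vary between TYPO3 versions, so treat them as assumptions and back up first):

cd typo-webfolder
rm -rf typo3temp/Code/*         # cached extension/configuration code, rebuilt on the next request
rm -rf typo3temp/var/Cache/*    # general caches (the location differs across TYPO3 versions)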
I am new to Laravel. I just figured out how to install Composer, Laravel, etc. on my local machine (MAMP on Windows). Now I am confused by this command in the terminal:
C:\project>mysql -uroot -proot
'mysql' is not recognized as an internal or external command,
operable program or batch file.
How can I fix this?
Setting the environment variable will solve the issue.
Go to Control Panel -> System -> Advanced
Click Environment Variables
Go to System Variables, find PATH and click on it.
Add the path to your mysql\bin folder to the end of the existing paths (ex: E:\xampp\mysql\bin) and add a ; at the end of the line.
Close all the command prompts you have open.
Reopen one and try it.
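To verify from a fresh command prompt that the change took effect (the reported path should be whichever bin folder you just added):

where mysql
mysql --version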
Setting the PATH to point to the MySQL bin folder is normally the first thought, but I find that dangerous, as things get left lying around when you change software.
I normally create a little batch file in the project folder, or in a folder that is already on your PATH, like this:
mysqlpath.cmd
@echo off
PATH=C:\mamp\path\to\the\mysql\bin;%PATH%
mysql --version
The mysql --version line outputs the MySQL version number, which is a handy way of knowing that the correct folder has been added to the PATH. This adds the folder to the PATH, but only for the life of the command window.
Then just run this from any command window when you want to use MySQL from the command line:
> mysqlpath
You may also like to create one for PHP:
phppath.cmd
@echo off
PATH=C:\mamp\path\to\the\php\;%PATH%
php -v
I have tried to set up a simple cron job running on OpenShift, but once I have pushed the file to OpenShift and then log in and search for the file, it does not seem to be there, and there is no log output.
I created an application from: https://github.com/smarterclayton/openshift-go-cart
I then installed the cron 1.4 cartridge.
I created a file at .openshift/cron/minutely/awesome_job and set it as 755
I added the following contents:
#! /bin/bash
date > $OPENSHIFT_LOG_DIR/last_date_cron_ran
I pushed to the server
Logged in via ssh and ran find /var/lib/openshift/53760892e0b8cdb5e9000b22 -name awesome_job, which finds nothing.
Any ideas? I am at a loss as to why it is not working.
Make sure the execution bit is set on your cron file.
The issue was not with cron but with the golang cartridge I was using, which was removing the .openshift directory.
https://github.com/smarterclayton/openshift-go-cart/issues/10
You should also put a file named "jobs.allow" under .openshift/cron/minutely/ so your cron jobs will be executed.
For your reference: https://forums.openshift.com/daily-cron-jobs-not-getting-triggered-automatically
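If the cron cartridge supports a jobs.allow file, the expected format (an assumption here) is simply one permitted job filename per line:

.openshift/cron/minutely/jobs.allow
awesome_job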
And the reason you could not find your awesome_job via the ssh login is that it lives under /var/lib/openshift/53760892e0b8cdb5e9000b22/app-root/runtime/repo/.openshift, inside a dot-prefixed folder that is easy to overlook when searching.