I recently picked up a task at a lab where I volunteer, and my PI said their experiment wasn't working because no workers were able to complete the task...
My hypothesis is that their SQLite setup can't record experimental data properly because SQLite handles concurrent operations poorly (as stated in the psiTurk documentation).
My question is: how can I properly set up a MySQL database to work with their existing experiment?
I created a new database called "participants" from the MySQL interpreter. Then I started the MySQL server successfully...
Next, I changed the database_url in the "config.txt" file from sqlite://participants.db (a local database file) to mysql://aweeeezy@localhost:3306/participants, but I cannot connect to the database when I try to start the psiTurk server.
I also tried mysql://aweeeezy@localhost/participants... I can't figure out how to format this database_url string so that the experiment works with MySQL, and I haven't found anything helpful when searching through MySQL-related and/or psiTurk-related posts.
Please help a databases noob!
The format for the database_url field of config.txt using MySQL needs to be mysql://username:password@hostname/database. psiTurk 2.0 depends on SQLAlchemy, and the relevant engine-URL docs are here: http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html
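For example, assuming a MySQL user named myuser with password mypassword and a database named participants on the local machine (all placeholder names - substitute your own), the database section of config.txt would look roughly like this:

[Database Parameters]
# hypothetical credentials and database name - substitute your own
database_url = mysql://myuser:mypassword@localhost:3306/participants
table_name = participants

Note the @ separating the credentials from the host; the earlier attempts failed partly because the password portion was missing.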
Does anyone know how to migrate from a MySQL database to a PostgreSQL database in Google Cloud SQL?
I tried browsing the web but couldn't find any instructions on how to accomplish this.
The Database Migration Service only lets you upgrade the major version within the same database engine, not switch to a different one.
According to the documentation, the Database Migration Service currently supports only homogeneous database migrations; the same docs include a best-practices page.
There are currently no other Google Cloud tools that do the MySQL to PostgreSQL migration you are looking for.
Nevertheless, in order to migrate from MySQL to PostgreSQL, a conversion step is necessary, as the two databases are not entirely compatible.
A possible workaround is described in a Stack Overflow thread that shares multiple conversion approaches; please keep in mind that this information is community-supported, meaning Google Cloud Platform cannot vouch for it.
With the aforementioned, you have two options for doing the migration. For the first one, follow these steps (a command-line sketch of the export and import follows the list):
1. Export your data in a supported format (SQL dump file or CSV), as described in the Cloud SQL export documentation.
2. Convert the exported data into PostgreSQL-compatible form.
3. Import the converted data, as described in the Cloud SQL import documentation.
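As a rough sketch of steps 1 and 3 using the gcloud CLI (the instance, bucket, and database names are all placeholders):

# step 1: export the source MySQL instance to a SQL dump in Cloud Storage (hypothetical names)
gcloud sql export sql my-mysql-instance gs://my-bucket/dump.sql --database=mydb

# step 3: after converting dump.sql to PostgreSQL syntax, import it into the target instance
gcloud sql import sql my-postgres-instance gs://my-bucket/dump-converted.sql --database=mydb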
On the other hand, the second solution is to use the third-party tool pgloader, which may handle the migration for you.
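pgloader can often do the whole migration in a single command; here is a minimal sketch with placeholder hosts, credentials, and database names:

# hypothetical connection strings - substitute real hosts and credentials
pgloader mysql://myuser:mypass@source-host/mydb postgresql://pguser:pgpass@target-host/mydb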
The old versions of our product allowed capturing the current state of the system in a single archive file, which also contains the MySQL database files - lots of <XXX.frm, XXX.myd, XXX.myi> triples.
Now we have the next generation of the product, which does not do anything stupid like capturing the database files, but it must be able to read the archives produced by the old versions.
Our product is a commercial closed-source product, but it is not very expensive. We had to stop using MySQL because of the second of those two facts (Oracle changed the MySQL licensing), and we cannot use MariaDB because of the first (their licensing freaked the s*t out of the company lawyer).
So, my question is: is there another way to read these MySQL database files? A commercial lightweight solution is fine - after all, we are talking about read-only exploration of the database files. Free/open-source alternatives are welcome too, as long as they do not mean that the code using them must be open source as well.
Thanks.
EDIT
Besides the issue of whether I can or cannot continue using the old version of MySQL to read the old MySQL database files, the question remains: how can I read them? I mean, MySQL is no longer our database, so even if I can bundle the old MySQL implementation, do I have to install the full-blown database engine just to read the files? I'd rather avoid that.
If you just want to understand the table structure, reading the following links should be enough:
MySQL Internals (all of it), File Format, MyISAM
If that is not enough and your database is smaller than 10 GB, you can use MS SQL Server Express (which is free for databases under 10 GB; there is a page comparing the different editions of MS SQL Server). Search for a way to convert MySQL files to MS SQL Server - a web search turns up several converters, and presumably not all of them need a running MySQL server.
If that is not suitable, you can try other MySQL forks such as XtraDB, OurDelta, Drizzle, PBXT, and so on.
Hope you will find something useful.
We have found a solution. Unfortunately, it involves MySQL, so there are potential licensing issues. Here it is - http://dev.mysql.com/doc/refman/5.0/en/libmysqld.html
All it takes is to download the MySQL source code and help yourself to:
libmysqld.dll
libmysqld.lib
header files from the include folder
Then it is possible to read the files using the embedded MySQL database engine inside libmysqld.dll.
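For reference, a rough sketch of building the embedded engine from the source tree on a Unix-like system (MySQL 5.0/5.1 used autotools; the version number and paths are illustrative, and a Windows build producing libmysqld.dll follows the same idea using the bundled Visual Studio projects):

# hypothetical source directory - adjust to your MySQL version
cd mysql-5.0.96
./configure --with-embedded-server
make
# the library ends up under libmysqld/, the headers under include/

The application then links against the library and starts the embedded server with a datadir pointing at the directory holding the extracted .frm/.myd/.myi files, after which it can run ordinary read-only queries against them.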
I'm considering a MySQL to PostgreSQL migration for my web application, but I'm having a really hard time converting my existing MySQL database to PostgreSQL.
I tried:
mysqldump with --compatible=postgresql
the migration wizard from EnterpriseDB
PostgreSQL Data Wizard from EMS
DBConvert from DMSoft
and NONE of the above programs do a good job converting my database!
I saw some Perl and Python scripts for converting MySQL to PostgreSQL, but I can't figure out how to use them... (I installed ActivePerl and don't understand what I'm supposed to do next to run such a script!)
I use auto-increment fields (as primary keys) all the time, and these are just ignored... I understand that PostgreSQL does auto-increments another way (with sequences), but it can't be THAT hard for migration software to handle that, or is it?
Did anybody have better luck converting a MySQL database with auto-increments as primary keys?
I know this is probably not the answer you are looking for, but: I don't believe in "automated" migration tools.
Take your existing SQL scripts that create your database schema, do a search and replace for the necessary data types (AUTO_INCREMENT maps to SERIAL, which does all the sequence handling automagically for you), remove all the "ENGINE=" stuff, and then run the new script against Postgres; a rough sketch of that search-and-replace is below.
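As an illustrative sketch using GNU sed (the patterns are deliberately naive, the file names are placeholders, and real schemas will usually still need manual touch-ups):

# hypothetical file names; rewrites AUTO_INCREMENT integer columns as SERIAL
# and strips MySQL-specific ENGINE and CHARSET table options
sed -E -e 's/INT(EGER)?[^,]*AUTO_INCREMENT/SERIAL/I' \
       -e 's/ENGINE=[A-Za-z0-9_]+//' \
       -e 's/DEFAULT CHARSET=[A-Za-z0-9_]+//' \
       mysql_schema.sql > postgres_schema.sql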
Dump the old database's data into flat files and import them into the target (sketched below).
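A minimal sketch of that data step, with placeholder database and table names (mysqldump's --tab option writes one tab-delimited .txt file per table, which psql's \copy can read back; note that --tab requires the MySQL server to be able to write to the target directory):

# dump table data as tab-delimited text files into /tmp/dump (hypothetical names)
mysqldump --tab=/tmp/dump --no-create-info mydb

# load one table's file into the corresponding Postgres table
psql mydb -c "\copy mytable FROM '/tmp/dump/mytable.txt'"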
I have done this several times with sample databases that were intended for MySQL and it really doesn't take that long.
Probably about as long as trying all the different "automated" tools would take.
Why not use an ETL tool? You don't have to worry about dumps or anything like that.
I have migrated between PostgreSQL and MySQL and have had no problems with the auto-increment fields.
You just need to know the connection credentials, and that's it. I personally use Pentaho (it's open source).
Download Pentaho ETL from http://kettle.pentaho.org/
Unzip and run Pentaho (using the .bat file spoon.bat)
Create a new Job:
Create a DB connection for the source database (MySQL) using the menu: Tools → Wizard → Create Database Connection (F3). Create a DB connection for the destination database (PostgreSQL) using the same technique.
Run the wizard: Tools → Wizard → Copy Tables (Ctrl-F10).
Select the source (left dialog panel) and the destination (right dialog panel). Click Finish.
The job will be generated - run the job (it can also be run headless, as shown below).
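Once saved, the generated job can also be run without the GUI using Pentaho's command-line job runner (the .kjb file name is a placeholder):

# run the generated job headless (hypothetical file name)
./kitchen.sh -file=copy_tables.kjb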
If you need any help let me know.
Even when you are familiar with all the "PostgreSQL gotchas", doing every step by hand may take a lot of time, especially when your database is big.
Try some other scripts/tools.
I know this is an old question, but I just ran into the same problem migrating from MySQL to Postgres. After trying several migration tools, the best one I could find - the one that migrated my database structure most cleanly - was pgloader (https://github.com/dimitri/pgloader/). It takes care of changing AUTO_INCREMENT columns into Postgres sequences without a problem, and it's super fast.
I am trying to replicate a Progress database to MySQL 5.1. I came across a few pieces of software, as well as suggestions on Stack Overflow and other websites, that involve a tool like Pro2SQL or other SQL migration tools such as the MySQL Migration Toolkit. But the problem I am faced with is that I will be running MySQL on Linux (I am using bash scripting to query the MySQL database). Is there a tool for Linux, or some other means?
Currently, I am using JDBC to connect and retrieve data, but mapping the database by hand is difficult and may introduce flaws in the long run. Also, this process will be repeated quite often, for backup.
The MySQL Migration Toolkit would be a good solution, but it doesn't support the Linux command line, so I have to implement this some other, better-optimized way. Please suggest what should be done. Thanks a ton for the support.
If it is just about dumping:
If I get your problem right, the solution holds in two lines (if you are following SQL standards):
pg_dump <yourdatabase>
mysql < <yourfile.sql>
The first line dumps your database; many options exist for dumping only tables, content, schemas, etc. - see the man page for details.
The second line simply loads the dump into your MySQL.
If it is about mapping:
Take a look at Kettle, an open-source ETL tool. It works really well on Linux, and you can automate tasks using crontab.
Hope I could help,
I'm working with another dev and together we're building out a MySQL database. We've each got our own local instance of MySQL 5.1 on our dev machines. We've not yet found a way to make a local schema change (e.g., add a field and some values for that field) and then export some kind of script or diff file that the other can import. I've looked into Toad's and Navicat's synchronization features, but they seem oriented towards synchronizing two live instances, not an instance and an intermediate file. We thought MySQL Workbench would be great for this, but its synchronization feature just seems plain broken. Any other ideas? How do you collaborate with others on a schema?
First of all, put your final SQL schema into version control, so you'll always have a version of it with all changes. It can be a plain SQL file. Every developer on the team can use it as the starting point to create his copy of the database. All changes must be applied to it. This will help you find conflicts faster.
I also used such a file to create a test database and run unit tests after each commit, so we were always sure the production code was working. A sketch of this workflow is below.
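A minimal sketch of the workflow (the user, database, and file names are placeholders): each developer rebuilds a local database from the versioned schema file, and after making schema changes regenerates that file so the change shows up as an ordinary diff in version control:

# rebuild a local copy from the versioned schema (hypothetical names)
mysql -u dev -p myapp_dev < schema.sql

# after changing the schema locally, regenerate the file and commit the diff
mysqldump -u dev -p --no-data --skip-comments myapp_dev > schema.sql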
Then you can use any migration tool to move changes between developers. Here is a similar question about this:
Mechanisms for tracking DB schema changes
If you're using PHP, then look at Doctrine Migrations.