TurboGears nosetests results in OperationalError when using SQLAlchemy-FullText-Search - sqlalchemy

I recently installed the SQLAlchemy-FullText-Search dependency (https://github.com/mengzhuo/sqlalchemy-fulltext-search), but since then I get unexpected results. When I run
nosetests -v
the following OperationalError comes out:
(OperationalError) near "(": syntax error u'ALTER TABLE opportunity ADD FULLTEXT (title, content, requirements)' ()
I'm defining __fulltext_columns__ like this:
__fulltext_columns__ = ('title', 'content', 'requirements')

It looks like SQLAlchemy-FullText-Search only works on MySQL databases - but the default test suite of a TurboGears 2 webapp uses an SQLite in-memory database, so that is most likely the problem: SQLite does not understand the MySQL-specific ALTER TABLE ... ADD FULLTEXT statement that the library emits.
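For illustration, the usual SQLAlchemy way to keep MySQL-only DDL from reaching SQLite is to guard it with execute_if. This is just a hedged sketch of that pattern against a model shaped like the one in the question - it is not how SQLAlchemy-FullText-Search hooks itself in, only an example of dialect-conditional DDL:

from sqlalchemy import Column, Integer, String, Text, DDL, event
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Opportunity(Base):
    __tablename__ = 'opportunity'
    id = Column(Integer, primary_key=True)
    title = Column(String(255))
    content = Column(Text)
    requirements = Column(Text)

# Emit the FULLTEXT index only when the dialect is MySQL; on SQLite
# (the default TurboGears test database) the DDL is silently skipped.
event.listen(
    Opportunity.__table__,
    'after_create',
    DDL('ALTER TABLE opportunity ADD FULLTEXT (title, content, requirements)')
        .execute_if(dialect='mysql'),
)

Alternatively, pointing the test configuration at a MySQL database instead of the in-memory SQLite one sidesteps the issue entirely.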

Related

sqlalchemy.exc.ProgrammingError: Unexpected 'UNIQUE' error when using alembic with snowflake

I'm trying to use alembic with Snowflake to version control the schema I use for both PostgreSQL and Snowflake. I keep running into this Unexpected 'UNIQUE' error. I know it is caused by the migration trying to create an index, something that Snowflake does not support. This is strange to me, as I thought the purpose of the dialect system in SQLAlchemy was to manage differences between implementations and stop it from trying to create this index when it isn't supported.
I followed the guide on the Snowflake website to add the dialect to alembic, calling the upgrade function like this:
from sqlalchemy import create_engine
from alembic import command
from alembic.config import Config

engine = create_engine(SNOWFLAKE_DATABASE_URI)
with engine.begin() as con:
    logger.info("Starting db upgrade.")
    cfg = Config("migrations/alembic.ini")
    cfg.attributes["connection"] = con
    cfg.attributes["configure_logger"] = False
    command.upgrade(cfg, "head")
Is the connector not working properly or am I not calling this in the correct way?
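For what it's worth, passing a connection via cfg.attributes only takes effect if env.py actually picks it up; otherwise Alembic builds its own engine from alembic.ini. Below is a minimal sketch of that documented "connection sharing" recipe for an online-mode env.py - it addresses the invocation style only, not the UNIQUE/index error itself:

# migrations/env.py (excerpt)
from alembic import context
from sqlalchemy import engine_from_config, pool

config = context.config
target_metadata = None  # set to your models' MetaData if you use autogenerate

def do_run_migrations(connection):
    context.configure(connection=connection, target_metadata=target_metadata)
    with context.begin_transaction():
        context.run_migrations()

def run_migrations_online():
    # Reuse the connection handed over via cfg.attributes["connection"], if any.
    connectable = config.attributes.get("connection", None)
    if connectable is None:
        # No outside connection: build an engine from alembic.ini as usual.
        connectable = engine_from_config(
            config.get_section(config.config_ini_section),
            prefix="sqlalchemy.",
            poolclass=pool.NullPool,
        )
        with connectable.connect() as connection:
            do_run_migrations(connection)
    else:
        do_run_migrations(connectable)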

For django testing, how do I use keepdb with mariadb

I have a database with a lot of non-managed tables which I'm using for a Django app. For testing I want to use the --keepdb option so that I don't have to repopulate these tables every time. I'm using MariaDB for my database. If I don't use the keepdb option, everything works fine: the test database gets created and destroyed properly.
But when I try to run the test keeping the database:
$ python manage.py test --keepdb
I get the following error:
Using existing test database for alias 'default'...
Got an error creating the test database: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'CREATE DATABASE IF NOT EXISTS test_livedb ;\n SET sql_note' at line 2")
I assume this is an issue with syntax differences between MariaDB and MySQL. Is there any way to get the keepdb option to work with MariaDB?
thanks very much!
For what it's worth: This bug was introduced in Django 2.0.0 and fixed in Django 2.1.3 (https://code.djangoproject.com/ticket/29827)
Two things: check out Factory Boy (for creating test data), and I would suggest checking out Pytest as well. With non-managed tables, the issue I think you'll run into is that (at least in my experience) Django won't create them in the test environment, and you end up running into problems because there is no migration file to create those tables (since they're unmanaged). Django runs the migration files when creating the test environment.
With Pytest you can run with a --nomigrations flag which builds your test database directly off the models (thus creating the tables you need for your unmanaged models).
If you combine Pytest and Factory Boy, you should be able to set up your test data so it works as expected and is repeatable and testable without issue.
I actually approach it like this (slightly hacky, but it works with our complex setup):
On my model:
class Meta(object):
    db_table = 'my_custom_table'
    managed = getattr(settings, 'UNDER_TEST', False)
I create the UNDER_TEST variable in settings.py like this:
import sys

# Create a global variable that tells whether our application is under test
UNDER_TEST = (len(sys.argv) > 1 and sys.argv[1] == 'test')
That way - when the application is UNDER_TEST the model is marked as managed (and Pytest will create the appropriate DB table). Then FactoryBoy handles putting all my test data into that table (either in setUp of the test or elsewhere) so I can test against it.
That's my suggestion - others might have something a little clearer or cleaner.
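For completeness, here is a minimal sketch of how the pieces above can fit together with pytest-django; the app, model, factory, and field names are hypothetical:

# factories.py -- hypothetical factory for the unmanaged model above
import factory
from factory.django import DjangoModelFactory

from myapp.models import MyCustomModel  # hypothetical app/model names


class MyCustomModelFactory(DjangoModelFactory):
    class Meta:
        model = MyCustomModel

    name = factory.Sequence(lambda n: 'row-%d' % n)


# test_my_custom_model.py -- run with: pytest --nomigrations
import pytest

from factories import MyCustomModelFactory
from myapp.models import MyCustomModel


@pytest.mark.django_db
def test_lookup_by_name():
    MyCustomModelFactory(name='expected')
    assert MyCustomModel.objects.filter(name='expected').exists()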

Django-MySQL is unable to recognise model.column in queryset extra?

I have SQLite and MySQL installed on my local and development machine respectively. The following works fine on my local machine (with SQLite):
select_single = {'date': "strftime('%%Y-%%m-%%d',projectName_Modelname.created)"}
queryset.extra(select=select_single)
But since strftime doesn't work with MySQL (link), I tried using DATE_FORMAT() as suggested in the given link and in other places too.
But now when I execute the following:
select_single = {'date': "DATE_FORMAT(projectName_Modelname.created, '%%Y-%%m-%%d')"}
queryset.extra(select=select_single)
The following error comes up:
DatabaseError: (1054, "Unknown column 'projectName_Modelname.created' in 'field list'")
where 'created' is a DateTime field in the Django model 'Modelname' of the app 'projectName'.
To debug, when I replace projectName_Modelname.created with NOW(), no error comes up. I have also tried just Modelname.created instead of projectName_Modelname.created, but with no benefit.
Note: I am using Django 1.5.5
I think it should be something like:
date_raw_query = {'date': "date_format(created, '%%Y-%%m-%%d')"}
and then try
queryset.extra(select=date_raw_query)
Hope that works in your setup. I have tried this on Django 1.7 and MySQL and it seems to be working.
Also remember that if SQL errors start coming up, you can always do a print queryset.extra(select=date_raw_query).query to see what might be going wrong.
And when it comes to writing code like this that is compatible with both SQLite and MySQL, writing a custom MySQL function has been suggested here.
But I would suggest otherwise. It's better to have a similar dev environment with MySQL set up locally, and also to upgrade Django as soon as possible. :P
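As a side note, on Django 1.10 and newer you can usually avoid the raw SQL entirely and let the ORM emit the right date function for SQLite or MySQL. A hedged sketch, reusing the model name from the question:

from django.db.models.functions import TruncDate

from projectName.models import Modelname  # app/model names as in the question

# Each object gets a `date` attribute holding just the calendar date,
# regardless of whether the backend is SQLite or MySQL.
queryset = Modelname.objects.annotate(date=TruncDate('created'))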

Bind parameters in Rails mySQL empty

Using Rails 3.1.1, I'm getting occasional errors in production where it seems like the bind parameters on a mysql query are not there for some reason. The error looks like this:
A ActiveRecord::StatementInvalid occurred in events#show:
Mysql::Error: : SELECT `events`.* FROM `events` WHERE `events`.`id` = ? LIMIT 1
activerecord (3.1.1) lib/active_record/connection_adapters/mysql_adapter.rb:890:in `execute
It's not consistent on any insert or select, so I'm having trouble tracking it down. Does anybody have any suggestions?
Edit: updated with simpler example.
#events_controller.rb
def show
  @event = Event.find(params[:id])
  ...
end
#called with parameters: {"action"=>"show", "controller"=>"events", "id"=>"26"}
The probable reason
Check your database driver installation here.
This seems to have gone away after upgrading to the latest rails - uncertain as to what it was.

MySql to PostgreSql migration

My PostgreSQL is installed on Windows. How can I migrate data from a MySQL database to PostgreSQL?
I've read tons of articles. Nothing helps :(
Thanks.
My actions:
mysql dump:
mysqldump -h 192.168.0.222 --port 3307 -u root -p --compatible=postgresql synchronizer > c:\dump.sql
create database synchronizer in PostgreSQL
import dump:
psql -h 192.168.0.100 -d synchronizer -U postgres -f C:\dump.sql
output:
psql:C:/dump.sql:17: NOTICE: table "Db_audit" does not exist, skipping
DROP TABLE
psql:C:/dump.sql:30: ERROR: syntax error at or near "("
LINE 2: "id" int(11) NOT NULL,
^
psql:C:/dump.sql:37: ERROR: syntax error at or near ""Db_audit""
LINE 1: LOCK TABLES "Db_audit" WRITE;
^
psql:C:/dump.sql:39: ERROR: relation "Db_audit" does not exist
LINE 1: INSERT INTO "Db_audit" VALUES (4068,4036,4,1,32,'2010-02-04 ...
^
psql:C:/dump.sql:40: ERROR: relation "Db_audit" does not exist
LINE 1: INSERT INTO "Db_audit" VALUES (19730,2673,2,2,44,'2010-11-23...
^
psql:C:/dump.sql:42: ERROR: syntax error at or near "UNLOCK"
LINE 1: UNLOCK TABLES;
^
psql:C:/dump.sql:48: NOTICE: table "ZHNVLS" does not exist, skipping
DROP TABLE
psql:C:/dump.sql:68: ERROR: syntax error at or near "("
LINE 2: "id" int(10) unsigned NOT NULL,
^
psql:C:/dump.sql:75: ERROR: syntax error at or near ""ZHNVLS""
LINE 1: LOCK TABLES "ZHNVLS" WRITE;
^
psql:C:/dump.sql:77: WARNING: nonstandard use of escape in a string literal
LINE 1: ...???????? ??? ???????','10','4607064820115','0','','??????-??...
^
HINT: Use the escape string syntax for escapes, e.g., E'\r\n'.
Cancel request sent
psql:C:/dump.sql:77: WARNING: nonstandard use of escape in a string literal
LINE 1: ...??????????? ????????','10','4602784001189','0','','???????? ...
My experience with MySQL -> Postgresql migration wasn't really pleasant, so I'd have to second Daniel's suggestion about CSV files.
In my case, I recreated the schema by hand and then imported all tables, one by one, using mysqldump and pg_restore.
So, while this dump/restore may work for the data, you are most likely out of luck with schema. I haven't tried any commercial solutions, so see what other people say and... good luck!
UPDATE: I looked at the code the process left behind and here is how I actually did it.
I had a slightly different schema in my PostgreSQL db, so some tables were joined and some were split. This is why a straightforward import was not an option; my case is probably more complex than what you describe, and this solution may be overkill.
For each table in the PG database I wrote a query that selects the relevant data from the MySQL database. In case the table is basically the same in both databases and there are no joins, it can be as simple as this
select * from mysql_table_name
Then I exported the results of this query to XML; to do this you need to run it like this:
echo "select * from mysql_table_name" | mysql [CONNECTION PARAMETERS] -X --default-character-set=utf8 > mysql_table_name.xml
This will create a simple XML file with the following structure:
<resultset statement="select * from mysql_table_name">
  <row>
    <field name="some_field">field_value</field>
    ...
  </row>
  ...
</resultset>
Then I wrote a script that produces an INSERT statement for each row element in this XML file. The name of the table to insert the data into was given as a command line parameter to the script. Python script, in case you need it.
These sql statements were written to a file, and then fed to psql like this:
psql [CONNECTION PARAMETERS] -f FILENAME -1
The only trick in the XML -> SQL transformation was to recognize numbers and leave them unquoted.
To sum it up: mysql can produce query results as XML and you can use it.
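For illustration only, a minimal sketch of what such an XML-to-INSERT script might look like (this is not the original author's script; the table name comes in on the command line, NULL handling assumes mysql's xsi:nil output, and the number detection is deliberately naive):

import sys
import xml.etree.ElementTree as ET

def xml_to_inserts(xml_path, table_name):
    """Yield one INSERT statement per <row> element of a mysql -X result."""
    tree = ET.parse(xml_path)
    for row in tree.getroot().findall('row'):
        cols, vals = [], []
        for field in row.findall('field'):
            cols.append(field.get('name'))
            text = field.text
            if text is None:
                vals.append('NULL')                      # xsi:nil="true" fields
            elif text.replace('.', '', 1).lstrip('-').isdigit():
                vals.append(text)                        # leave numbers unquoted
            else:
                vals.append("'%s'" % text.replace("'", "''"))  # escape quotes
        yield 'INSERT INTO %s (%s) VALUES (%s);' % (
            table_name, ', '.join(cols), ', '.join(vals))

if __name__ == '__main__':
    # Usage: python xml2inserts.py mysql_table_name.xml mysql_table_name > inserts.sql
    for stmt in xml_to_inserts(sys.argv[1], sys.argv[2]):
        print(stmt)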
It's a bit more complicated than that. There is plenty of documentation here:
http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
There, you'll also find conversion scripts.
In my rather simple case (30 tables, 10000 records), I used a perl script:
http://pgfoundry.org/frs/?group_id=1000198
It chugged through the mysql dump file and produced a pg dump file, with the following issues.
I was importing to Heroku so I used their pgbackups plugin which worked almost flawlessly.
Issues to watch for
Boolean data types. MySQL stores these as 0 and 1. PostgreSQL stores them as t and f. Watch that the booleans don't get migrated as integers.
Auto-incrementing IDs. You may find your ids start counting again from 1, and you'll get errors like this: "duplicate key value violates unique constraint ...". It's easy to fix (see the sketch below), but watch out for it.
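A hedged sketch of that fix, assuming psycopg2 is available and using a hypothetical events table: after the import, bump each sequence past the highest imported id so new rows don't collide.

import psycopg2

# Hypothetical connection parameters and table/column names.
conn = psycopg2.connect('dbname=synchronizer user=postgres')
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT setval(pg_get_serial_sequence('events', 'id'), "
        "(SELECT COALESCE(MAX(id), 1) FROM events))"
    )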
I've used py-mysql2pgsql for converting a big MySQL database into Postgres. It handles most cases very well. I had to patch it for a couple of cases specific to my needs, though.
https://pypi.python.org/pypi/py-mysql2pgsql
By default, it reads data from MySQL and writes to Postgres. But you can ask it to write the schema and/or data to a file for inspecting before loading into Postgres.
You can use https://github.com/mihailShumilov/mysql2postgresql
This converter is written in PHP.
There's also a very nice (fork of a) python converter that is maintained by the gitlab creators:
https://github.com/gitlabhq/mysql-postgresql-converter
The original project this is a fork of is stale. For me, everything worked perfectly using this script.
Here is a project which migrates your current MySQL database to PostgreSQL in a couple of commands, including indexes and foreign keys. It also allows you to define name, index, and column type mappings so you can override the default behavior.
https://github.com/ggarri/mysql2psql
I hope it is useful for anyone interested in migrating their current project to PG; in our case we obtained around a 20% performance increase.
It is much better to use a program that automates the process of migration.
Even if you are familiar with all the gotchas, doing every step by hand may take a lot of time, especially when your db is "big".
Try FromMySqlToPostgreSql.
This tool is feature-rich and easy to use.
It maps data types and migrates constraints, indexes, PKs and FKs exactly as they were in your MySQL db.
Under the hood it uses PostgreSQL COPY, so data transfer is very fast.