I'm querying Microsoft SQL Server 2008 with Flask-SQLAlchemy (0.16) and SQLAlchemy (0.8.2) in Python 2.7.
When I query a varchar(max) column, the result is truncated to 4096 characters. I've tried different data types in the code: String, Text, VARCHAR.
Any thoughts on how to get my code to pull all the data from the column?
Here is part of the code:
from web import db

class DynamicPage(db.Model):
    __tablename__ = 'DynamicPage'

    DynamicPageId = db.Column(db.Integer, primary_key=True)
    PageHtml = db.Column(db.VARCHAR)
And the query:
pages = DynamicPage.query.all()
Are you using ODBC? FreeTDS? ODBC has a fixed maximum size for large text/binary fields. With FreeTDS you need to set the text size setting to support fields as large as you need.
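For example, the limit can be raised globally in freetds.conf via the text size setting, or per connection by issuing SET TEXTSIZE. A minimal sketch of the per-connection approach, assuming a SQLAlchemy version that supports engine-level connect listeners:

from sqlalchemy import event
from web import db

@event.listens_for(db.engine, 'connect')
def set_text_size(dbapi_conn, connection_record):
    # 2147483647 (2 GB - 1) is the server-side maximum; without this,
    # the driver truncates varchar(max) values at its default text size
    cursor = dbapi_conn.cursor()
    cursor.execute('SET TEXTSIZE 2147483647')
    cursor.close()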
I am creating an app which performs raw queries across different databases, and I am struggling with list parameters (IN clauses).
I use SQLAlchemy to perform these queries.
I want to run a query that accepts a list parameter, and that parameter might be NULL, which means I don't filter by that field at all.
from sqlalchemy import create_engine, text
SQL = """SELECT group, count(1) cnt
FROM some_table
WHERE group IN :groups OR :groups IS NULL
GROUP BY group
"""
params = {'groups': ('group1', 'group2')}
engine = create_engine(connection_string)
query = text(SQL).bindparams(**params)
cursor = engine.execute(query)
Currently I'm testing it on PostgreSQL, MySQL and SQLite, but in production mode it is also supposed to work with SQL Server and Oracle.
The code above works only on PostgreSQL. However, if I change the params to None
params = {'groups': None}
the code doesn't work on any of the databases.
Is there a workaround for this problem?
I understand that the solution might be specific to each RDBMS.
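One backend-agnostic workaround, sketched below with the question's some_table and group names, is to branch in Python: drop the WHERE clause entirely when the parameter is None, and bind the list with an expanding parameter (SQLAlchemy 1.2+) otherwise:

from sqlalchemy import bindparam, text

def group_counts(engine, groups=None):
    # groups is None: no filtering by the field at all
    if groups is None:
        sql = text("SELECT group, count(1) cnt FROM some_table GROUP BY group")
        return engine.execute(sql).fetchall()
    # otherwise let SQLAlchemy expand the IN list per backend
    sql = text("SELECT group, count(1) cnt FROM some_table "
               "WHERE group IN :groups GROUP BY group")
    sql = sql.bindparams(bindparam('groups', expanding=True))
    return engine.execute(sql, groups=list(groups)).fetchall()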
I am importing data into my Python 3 environment and then writing it to a MySQL database. However, there are a lot of different data tables, so writing out each INSERT statement isn't really pragmatic, plus some have 50+ columns.
Is there a good way to create a table in MySQL directly from a dataframe, and then send INSERT commands to that same table using a dataframe of the same format, without having to actually type out all the column names? I started trying to pull the column names, format them, and concatenate everything into a string, but it got extremely messy.
Ideally there is a function out there to directly handle this. For example:
import http.client
import json

import pymysql
from pandas.io.json import json_normalize

# pull in some JSON data from an API
apiconn = http.client.HTTPSConnection("api.example.com")  # placeholder host
apiconn.request("GET", url, headers=datheaders)
eventres = apiconn.getresponse()
eventjson = json.loads(eventres.read().decode("utf-8"))

# create a dataframe from the data
eventtable = json_normalize(eventjson)

dbconn = pymysql.connect(host='hostval',
                         user='userval',
                         passwd='passval',
                         db='dbval')
cursor = dbconn.cursor()

# where sqltranslate() is some magic function that takes a dataframe
# and creates SQL commands that pymysql can execute
sql = sqltranslate(table='eventtable', fun='append')
cursor.execute(sql)
What you want is a way to abstract the generation of the SQL statements.
A library like SQLAlchemy will do a good job, including a powerful way to construct DDL, DML, and DQL statements without needing to directly write any SQL.
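In fact, since the data is already in a pandas DataFrame, pandas can hand the SQL generation to SQLAlchemy for you via DataFrame.to_sql. A minimal sketch using the placeholder credentials from the question:

from sqlalchemy import create_engine

engine = create_engine('mysql+pymysql://userval:passval@hostval/dbval')

# creates the table from the dataframe's columns if it doesn't exist,
# then inserts the rows; no hand-written INSERT statements needed
eventtable.to_sql('eventtable', engine, if_exists='append', index=False)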
I am in the process of migrating databases from sqlite to mysql. Now that I've migrated the data to mysql, I'm not able to use my sqlalchemy code (in Python3) to access it in the new mysql db. I was under the impression that sqlalchemy syntax was database agnostic (i.e. the same syntax would work for accessing sqlite and mysql), but this appears not to be the case. So my question is: Is it absolutely required to use a DBAPI in addition to Sqlalchemy to read the data? Do I have to edit all of my sqlalchemy code to now read mysql?
The documentation says: "The MySQL dialect uses mysql-python as the default DBAPI. There are many MySQL DBAPIs available, including MySQL-connector-python and OurSQL", which I think means that I DO need a DBAPI.
My old code with sqlite successfully worked like this with sqlite:
engine = create_engine('sqlite:///pmids_info.db')

def connection():
    conn = engine.connect()
    return conn

def load_tables():
    metadata = MetaData(bind=engine)  # init metadata; will be empty
    metadata.reflect(engine)  # retrieve db info for metadata (tables, columns, types)
    inputPapers = Table('inputPapers', metadata)
    return inputPapers

inputPapers = load_tables()
def db_inputPapers_retrieval(user_input):
    result = engine.execute("select title, author, journal, pubdate, url from inputPapers where pmid = :0", [user_input])
    for row in result:
        title = row['title']
        author = row['author']
        journal = row['journal']
        pubdate = row['pubdate']
        url = row['url']
        apa = str(author+' ('+pubdate+'). '+title+'. '+journal+'. Retrieved from '+url)
        return apa
This worked fine and dandy. So then I tried to update it to work with the mysql db like this:
engine = create_engine('mysql://snarkshark@localhost/pmids_info')
At first when I tried to run my sample code like this, it complained because I didn't have MySQLdb. Some googling informed me that MySQLdb does NOT work with Python 3. So I tried pip installing pymysql and changing my engine statement to
engine = create_engine('mysql+pymysql://snarkshark@localhost/pmids_info')
which also ends up giving me various syntax errors when I try to adjust things.
So what I want to know, is if there is any way I can get my current syntax to work with mysql? Since the syntax is from sqlalchemy, I thought it would work perfectly for the exact same data in mysql that was previously in sqlite. Will I have to go through and update ALL of my db functions to use the syntax of the DBAPI?
This will sound like a dumb answer, but you'll need to change all the places where you're using database-specific behavior. SQLAlchemy does not guarantee that anything you do with it is portable across all backends. It leaks some abstractions on purpose to allow you to do things that are only available on certain backends. What you're doing is like using Python because it's cross-platform, then doing a bunch of os.fork()s everywhere, and then being surprised that it doesn't work on Windows.
For your specific case, at a minimum, you need to wrap all your raw SQL in text() so that you're not affected by the supported paramstyle of the DBAPI. However, there are still subtle differences between different dialects of SQL, so you'll need to use the SQLAlchemy SQL expression language instead of raw SQL if you want portability. After all that, you'll still need to be careful not to use backend-specific features in the SQL expression language.
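As a first step for the query above, wrapping the raw SQL in text() with a named bind parameter removes the dependence on the sqlite3 paramstyle. A minimal sketch using the question's table and columns:

from sqlalchemy import text

query = text("SELECT title, author, journal, pubdate, url "
             "FROM inputPapers WHERE pmid = :pmid")
result = engine.execute(query, pmid=user_input)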
Environment:
Ubuntu 16.04, Asp.Net Core 1.1, MySql.Data 7.0.6-IR31, MySql.Data.EntityFrameworkCore 7.0.6-IR31
The MySql database column in question is of data type "mediumtext." Here is my pseudo-code:
string qry = "UPDATE MyDb.MyTbl SET Comments = @p0 WHERE ID = @p1";
string comments = "a long long string";

using (var db = new AppDbContext()) {
    var numRecords = db.Database.ExecuteSqlCommand(qry, comments, id);
    return numRecords;
}
When executed, the database table gets updated as expected. However, only the first 255 characters are being written into my "Comments" column.
Wondering if anyone can suggest a workaround.
Instead of using Entity Framework, I switched to the plain old MySqlConnection/MySqlCommand classes. Inserts and updates work now. I guess the bug is in the MySql EF layer.
I have a JDO Class. Some of the attributes are as shown below:
@Column(jdbcType = "VARCHAR", length = 200)
String anotherSrcFieldValue;

@Column(jdbcType = "BIGINT")
long tgtFieldId;

@Column(jdbcType = "VARCHAR", length = 200)
String tgtFieldValue;
With MySQL and MSSQL it works fine.
My requirement is: when the database is MySQL, create the column as VARCHAR; when it is MSSQL, create it as NVARCHAR. How can I achieve this?
A second requirement is that a single entity class work against both databases.
All the JDO docs I've seen explain clearly that putting schema-specific info in annotations is a bad idea. Consequently you should have two files, "package-mysql.orm" and "package-mssql.orm", to specify the schema-specific parts of the mapping, and set "datanucleus.Mapping" to either "mysql" or "mssql" depending on your datastore. See http://www.datanucleus.org/products/accessplatform_4_2/jdo/orm/metadata_orm.html