SELECT results with wrong column order with PyMySQL - mysql

I'm executing a SQL "SELECT" query on a MySQL database via Python, using PyMySQL as the interface. Below is an excerpt of the code which performs the task:
try:
    with self.connection.cursor() as cursor:
        sql = "SELECT `symbol`,`clordid`,`side`,`status` FROM " + tablename + " WHERE `tradedate` >= %s AND (`status` = %s OR `status` = %s)"
        cursor.execute(sql, (str(begindate.date()), 'I', 'T'))
        a = cursor.fetchall()
The query executes just fine. The problem is that the column ordering of the results doesn't match the order specified in the query. If I add the following code:
for b in a:
    print b.values()
The values in variable 'b' appear in the following order:
'status', 'symbol', 'side', 'clordid'
Moreover, it doesn't matter which order I specify; the results always appear in this order. Is there any way to fix this? Thanks in advance!

In testing I found the selected answer (convert dict to OrderedDict) to be unreliable in preserving query result column order.
vaultah's answer to a similar question suggests using pymysql.cursors.DictCursorMixin:
from collections import OrderedDict
from pymysql.cursors import Cursor, DictCursorMixin

class OrderedDictCursor(DictCursorMixin, Cursor):
    dict_type = OrderedDict
...to create a cursor that remembers the correct column order:
cursor = conn.cursor(OrderedDictCursor)
Then get your results like normal:
results = cursor.fetchall()
for row in results:
    print row  # properly ordered columns
I prefer this approach because it's stable, requires less code, and handles ordering at the appropriate level (as the columns are read).
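For completeness, a minimal self-contained sketch of that approach (the connection parameters and the trades table name are placeholders):
from collections import OrderedDict

import pymysql
from pymysql.cursors import Cursor, DictCursorMixin

class OrderedDictCursor(DictCursorMixin, Cursor):
    dict_type = OrderedDict

# placeholder credentials -- substitute your own
conn = pymysql.connect(host='localhost', user='user', password='secret', db='mydb')
cursor = conn.cursor(OrderedDictCursor)
cursor.execute("SELECT `symbol`, `clordid`, `side`, `status` FROM trades")
for row in cursor.fetchall():
    print(row)  # keys appear in the order the SELECT listed them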

I am almost sure you need collections.OrderedDict, as each table row is a dict whose keys are the column names:
# python 2.7
import pymysql.cursors
from collections import OrderedDict
# ...
results = cursor.fetchall()
for i in results:
    print OrderedDict(sorted(i.items(), key=lambda t: t[0]))  # note: sorts keys alphabetically, not in query order
Also, based on your code snippet, b.values() sounds like SQL's ORDER BY col_name ASC|DESC. In that case, plain SQL should work pretty well.

Since you liked that solution, here is an approach:
with self.connection.cursor() as cursor:
    sql = "SELECT `symbol`,`clordid`,`side`,`status` FROM " + tablename + " WHERE `tradedate` >= %s AND (`status` = %s OR `status` = %s)"
    cursor.execute(sql, (str(begindate.date()), 'I', 'T'))
    a = cursor.fetchall()
    for b in a:
        print "%s, %s, %s, %s" % (b["symbol"], b["clordid"], b["side"], b["status"])
I am not sure whether I should post this answer or flag your question as a duplicate.

Related

How can I use the django.db.models Q module to query multiple lines of user input text data

How would I go about using the django.db.models Q module to query multiple lines of input from a list of data using a <textarea> HTML input field? I can query single objects just fine using a normal HTML <input> field. I've tried using the same code as for my input field, except when requesting the input data, I attempt to split the lines like so:
def search_list(request):
    template = 'search_result.html'
    query = request.GET.get('q').split('\n')
    for each in query:
        if each:
            results = Product.objects.filter(Q(name__icontains=each))
This did not work of course. My code to query one line of data (that works) is like this:
def search(request):
    template = 'search_result.html'
    query = request.GET.get('q')
    if query:
        results = Product.objects.filter(Q(name__icontains=query))
I basically just want to search my database for a list of data users input into a list, and return all of those results with one query. Your help would be much appreciated. Thanks.
Based on your comments, you want to implement OR logic for the given q string.
We can create such a Q object by reduce-ing a list of Q objects that each specify a Q(name__icontains=...) constraint. We reduce this with a logical OR (a pipe | in Python), like:
from django.db.models import Q
from django.shortcuts import render
from functools import reduce
from operator import or_

def search_list(request):
    template = 'search_result.html'
    results = Product.objects.all()
    error = None
    query = request.GET.get('q')
    if query:
        query = query.split('\n')
    else:
        error = 'No query specified'
    if query:
        results = results.filter(
            reduce(or_, (Q(name__icontains=itm.strip()) for itm in query))
        )
    elif not error:
        error = 'Empty query'
    some_context = {
        'results': results,
        'error': error,
    }
    return render(request, 'app/some_template.html', some_context)
Here we thus first check that q exists and is not the empty string. If not, the error is 'No query specified'. In case there is a query, we split it; next we check that there is at least one element in the resulting list. If not, our error is 'Empty query' (note that this cannot happen with an ordinary .split('\n'), but perhaps you postprocess the list and, for example, remove the empty elements).
In case there are elements in query, we apply the reduce(..) call and thus filter the Products.
Finally we return a render(..)ed response with some_template.html, and a context that contains the error and the results.
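To make the reduce step concrete, here is a small illustration of the Q object it builds for a two-line query; the reduced object is exactly equivalent to writing the pipes out by hand:
>>> from functools import reduce
>>> from operator import or_
>>> from django.db.models import Q
>>> reduce(or_, (Q(name__icontains=s) for s in ['foo', 'bar']))
<Q: (OR: ('name__icontains', 'foo'), ('name__icontains', 'bar'))>
>>> # equivalent to:
>>> Q(name__icontains='foo') | Q(name__icontains='bar')
<Q: (OR: ('name__icontains', 'foo'), ('name__icontains', 'bar'))>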

How to obtain and process mysql records using Airflow?

I need to:
1. Run a SELECT query on a MySQL DB and fetch the records.
2. Process the records with a Python script.
I am unsure how to proceed. Is XCom the way to go here? Also, MySqlOperator only executes the query and doesn't fetch the records. Is there any built-in transfer operator I can use? And how can I use a MySQL hook here?
You may want to use a PythonOperator that uses the hook to get the data, apply the transformation, and ship the (now scored) rows back to some other place.
Can someone explain how to proceed?
Refer - http://markmail.org/message/x6nfeo6zhjfeakfe
def do_work():
    mysqlserver = MySqlHook(connection_id)
    sql = "SELECT * from table where col > 100"
    row_count = mysqlserver.get_records(sql, schema='testdb')
    print row_count[0][0]

callMYSQLHook = PythonOperator(
    task_id='fetch_from_testdb',
    python_callable=do_work,
    dag=dag
)
Is this the correct way to proceed?
Also, how do we use XComs to store the records for the following MySqlOperator?
t = MySqlOperator(
    conn_id='mysql_default',
    task_id='basic_mysql',
    sql="SELECT count(*) from table1 where id > 10",
    dag=dag)
I was really struggling with this for the past 90 minutes; here is a more declarative way for newcomers to follow:
from airflow.hooks.mysql_hook import MySqlHook
from airflow.operators.python_operator import PythonOperator

def fetch_records():
    request = "SELECT * FROM your_table"
    mysql_hook = MySqlHook(mysql_conn_id='the_connection_name_sourced_from_the_ui', schema='specific_db')
    connection = mysql_hook.get_conn()
    cursor = connection.cursor()
    cursor.execute(request)
    sources = cursor.fetchall()
    print(sources)

# ...inside your `with DAG(...) as dag:` block:
task = PythonOperator(
    task_id='fetch_records',
    python_callable=fetch_records
)
This writes the contents of your DB query to the task logs.
I hope this is of use to someone else.
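As for the XCom part of the original question: if the python_callable returns a value, the PythonOperator pushes it to XCom automatically (under the key return_value), so a sketch along these lines should work (table, connection, and task names are placeholders; in older Airflow versions the downstream operator also needs provide_context=True to receive the context kwargs):
from airflow.hooks.mysql_hook import MySqlHook

def fetch_records():
    hook = MySqlHook(mysql_conn_id='mysql_default', schema='specific_db')
    # the return value is pushed to XCom under the key 'return_value'
    return hook.get_records("SELECT * FROM your_table")

def process_records(**context):
    # pull the rows that the upstream fetch_records task pushed to XCom
    rows = context['ti'].xcom_pull(task_ids='fetch_records')
    print(len(rows))
Keep in mind that XCom is meant for small payloads; for large result sets, write to a file or a staging table instead.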
Sure, just create a hook or operator and call the get_records() method: https://airflow.apache.org/docs/apache-airflow/stable/_modules/airflow/hooks/dbapi.html
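For example, a minimal sketch (the connection id and query are placeholders; the import path is for Airflow 2 with the MySQL provider installed, while older versions use airflow.hooks.mysql_hook):
from airflow.providers.mysql.hooks.mysql import MySqlHook

hook = MySqlHook(mysql_conn_id='mysql_default')
rows = hook.get_records("SELECT id, name FROM your_table WHERE id > 10")
for row in rows:
    print(row)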

Django bulk update setting each to different values? [duplicate]

I'd like to update a table with Django - something like this in raw SQL:
update tbl_name set name = 'foo' where name = 'bar'
My first result is something like this - but that's nasty, isn't it?
list = ModelClass.objects.filter(name='bar')
for obj in list:
    obj.name = 'foo'
    obj.save()
Is there a more elegant way?
Update:
Django 2.2 now has a bulk_update() method.
Old answer:
Refer to the following Django documentation section:
Updating multiple objects at once
In short you should be able to use:
ModelClass.objects.filter(name='bar').update(name="foo")
You can also use F objects to do things like incrementing rows:
from django.db.models import F
Entry.objects.all().update(n_pingbacks=F('n_pingbacks') + 1)
See the documentation.
However, note that:
This won't use ModelClass.save method (so if you have some logic inside it won't be triggered).
No django signals will be emitted.
You can't perform an .update() on a sliced QuerySet, it must be on an original QuerySet so you'll need to lean on the .filter() and .exclude() methods.
Consider using django-bulk-update found here on GitHub.
Install: pip install django-bulk-update
Implement: (code taken directly from the project's README file)
import random

from bulk_update.helper import bulk_update

random_names = ['Walter', 'The Dude', 'Donny', 'Jesus']
people = Person.objects.all()
for person in people:
    r = random.randrange(4)
    person.name = random_names[r]

bulk_update(people)  # updates all columns using the default db
Update: As Marc points out in the comments, this is not suitable for updating thousands of rows at once. It is, however, suitable for smaller batches, from tens to hundreds. The batch size that is right for you depends on your CPU and query complexity. This tool is more like a wheelbarrow than a dump truck.
Django 2.2 now has a bulk_update() method (release notes).
https://docs.djangoproject.com/en/stable/ref/models/querysets/#bulk-update
Example:
# get a pk: record dictionary of existing records
updates = YourModel.objects.filter(...).in_bulk()
....
# do something with the updates dict
....
if hasattr(YourModel.objects, 'bulk_update') and updates:
    # Use the new method
    YourModel.objects.bulk_update(updates.values(), [list the fields to update], batch_size=100)
else:
    # The old & slow way
    with transaction.atomic():
        for obj in updates.values():
            obj.save(update_fields=[list the fields to update])
If you want to set the same value on a collection of rows, you can use the update() method combined with any query term to update all rows in one query:
some_list = ModelClass.objects.filter(some condition).values('id')
ModelClass.objects.filter(pk__in=some_list).update(foo=bar)
If you want to update a collection of rows with different values depending on some condition, you can in the best case batch the updates according to values. Let's say you have 1000 rows where you want to set a column to one of X values; then you could prepare the batches beforehand and only run X update queries (each essentially having the form of the first example above) plus the initial SELECT query.
If every row requires a unique value there is no way to avoid one query per update. Perhaps look into other architectures like CQRS/Event sourcing if you need performance in this latter case.
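A minimal sketch of that batching idea (new_value_for is a hypothetical function that decides which value a given row should receive):
from collections import defaultdict

# group primary keys by the value they should receive
batches = defaultdict(list)
for obj in ModelClass.objects.filter(name='bar'):
    batches[new_value_for(obj)].append(obj.pk)  # new_value_for is hypothetical

# one UPDATE per distinct value instead of one per row
for value, pks in batches.items():
    ModelClass.objects.filter(pk__in=pks).update(name=value)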
Here is some useful content I found on the internet regarding this question:
https://www.sankalpjonna.com/learn-django/running-a-bulk-update-with-django
The inefficient way
model_qs = ModelClass.objects.filter(name='bar')
for obj in model_qs:
    obj.name = 'foo'
    obj.save()
The efficient way
ModelClass.objects.filter(name='bar').update(name='foo')  # for a single value 'foo'; or add a loop
Using bulk_update
update_list = []
model_qs = ModelClass.objects.filter(name='bar')
for model_obj in model_qs:
    model_obj.name = "foo"  # or whatever the value is; for simplicity I'm using 'foo' only
    update_list.append(model_obj)

ModelClass.objects.bulk_update(update_list, ['name'])
Using an atomic transaction
from django.db import transaction

with transaction.atomic():
    model_qs = ModelClass.objects.filter(name='bar')
    for obj in model_qs:
        obj.name = 'foo'
        obj.save()
To update with the same value, we can simply use this:
ModelClass.objects.filter(name='bar').update(name='foo')
To update with different values:
obj_list = ModelClass.objects.filter(name='bar')
obj_to_be_update = []
for obj in obj_list:
    obj.name = "Dear " + obj.name
    obj_to_be_update.append(obj)

ModelClass.objects.bulk_update(obj_to_be_update, ['name'], batch_size=1000)
It won't trigger a save signal for every object; instead we collect all the objects to be updated in a list and update them in one go.
Note that update() returns the number of rows updated in the table:
update_counts = ModelClass.objects.filter(name='bar').update(name="foo")
You can refer to this link for more information on bulk update and create:
Bulk update and Create

Insert or update if exists in mysql using pandas

I am trying to insert data from an xlsx file into a MySQL table. I want to insert the data and, if there is a duplicate on the primary keys, update the existing row; otherwise insert. I have already written a script for this, but I realized it is too much work, while with pandas it is quick. How can I achieve it in pandas?
#!/usr/bin/env python3
import pandas as pd
import sqlalchemy

engine_str = 'mysql+pymysql://admin:mypass@localhost/mydb'
engine = sqlalchemy.create_engine(engine_str, echo=False, encoding='utf-8')

file_name = "tmp/results.xlsx"
df = pd.read_excel(file_name)
I can think of two options, but number 1 might be cleaner/faster:
1) Make SQL decide on the update/insert. Check this other question. You can iterate over the rows of your df. Inside the loop, for the insertion you can write something like:
query = """INSERT INTO table (id, name, age) VALUES(%s, %s, %s)
ON DUPLICATE KEY UPDATE name=%s, age=%s"""
engine.execute(query, (df.id[i], df.name[i], df.age[i], df.name[i], df.age[i]))
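A sketch of the full loop in that style (assuming a SQLAlchemy 1.x-style engine.execute and columns id, name, and age in the DataFrame):
# iterate over the DataFrame rows and let MySQL resolve insert-vs-update
for row in df.itertuples(index=False):
    engine.execute(query, (row.id, row.name, row.age, row.name, row.age))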
2) Define a Python function that returns True or False when the record exists, and then use it in your loop:
def check_existence(user_id):
    query = "SELECT EXISTS (SELECT 1 FROM your_table WHERE user_id_str = %s);"
    return list(engine.execute(query, (user_id,)))[0][0] == 1
You could iterate over the rows and do this check before inserting.
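For instance, a sketch (the table and column names are placeholders):
for row in df.itertuples(index=False):
    if check_existence(row.user_id):
        engine.execute("UPDATE your_table SET name = %s WHERE user_id_str = %s",
                       (row.name, row.user_id))
    else:
        engine.execute("INSERT INTO your_table (user_id_str, name) VALUES (%s, %s)",
                       (row.user_id, row.name))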
Please also check the solution in this question and this one too which might work in your case.
Pangres is the tool for this job.
Overview here:
https://pypi.org/project/pangres/
Use the function pangres.fix_psycopg2_bad_cols to "clean" the columns in the DataFrame.
Code/usage here:
https://github.com/ThibTrip/pangres/wiki
https://github.com/ThibTrip/pangres/wiki/Fix-bad-column-names-postgres
Example code:
# From: https://github.com/ThibTrip/pangres/wiki/Fix-bad-column-names-postgres
import pandas as pd
from pangres import fix_psycopg2_bad_cols

# fix bad col/index names with default replacements (empty string for '(', ')' and '%'):
df = pd.DataFrame({'test()': [0], 'foo()%': [0]}).set_index('test()')
print(df)
#         foo()%
# test()
# 0            0

# clean cols/index with the default replacements
df_fixed = fix_psycopg2_bad_cols(df)
print(df_fixed)
#       foo
# test
# 0       0

# fix bad col/index names with custom replacements - you MUST provide
# replacements for '(', ')' and '%':
# reset df
df = pd.DataFrame({'test()': [0], 'foo()%': [0]}).set_index('test()')
# clean cols/index with user-specified replacements
df_fixed = fix_psycopg2_bad_cols(df, replacements={'%': 'percent', '(': '', ')': ''})
print(df_fixed)
#       foopercent
# test
# 0              0
Note it will only fix/correct some of the bad characters: it replaces '%', '(' and ')' (characters that won't play nicely, or even at all).
But it is useful in that it handles both cleanup and upsert.
(p.s., I know this post is over 4 years old, but still shows up in Google results when searching for "pangres upsert determine number inserts and updates" as the top SO result, dated May 13, 2020.)
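For the upsert itself, pangres exposes an upsert function; a minimal sketch follows (keyword names are from recent pangres versions and may differ in older releases, which call con engine; the DataFrame index must hold the primary key, and the table name is a placeholder):
from pangres import upsert

# df's index must hold the primary key values
upsert(con=engine,
       df=df,
       table_name='my_table',
       if_row_exists='update')  # or 'ignore' to skip rows that already exist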
When using pandas, no iteration is needed. Isn't that faster?
df = pd.read_csv(csv_file,sep=';',names=['column'])
df.to_sql('table', con=con, if_exists='append', index=False, chunksize=20000)

MySQL Dynamic Query Statement in Python with Dictionary

Very similar to this question MySQL Dynamic Query Statement in Python
However, what I am looking to do instead of two lists is to use a dictionary.
Let's say I have this dictionary:
instance_insert = {
    # sql column     variable value
    'instance_id': 'instance.id',
    'customer_id': 'customer.id',
    'os': 'instance.platform',
}
And I want to populate a MySQL table with an INSERT statement, using the dictionary keys as the SQL column names and the dictionary values as the names of the variables that hold the values to be inserted into the table.
I'm kind of lost because I don't understand exactly what this statement does, but it was pulled from the question I linked above, where he was using two lists to do what he wanted:
sql = "INSERT INTO instance_info_test VALUES (%s);" % ', '.join('?' for _ in instance_insert)
cur.execute (sql, instance_insert)
Also, I would like it to be dynamic in the sense that I can add/remove columns to and from the dictionary.
Before you post, you might want to try searching for something more specific to your question. For instance, when I Googled "python mysqldb insert dictionary", I found a good answer on the first page, at http://mail.python.org/pipermail/tutor/2010-December/080701.html. Relevant part:
Here's what I came up with when I tried to make a generalized version
of the above:
import json
import sys

import MySQLdb

def add_row(cursor, tablename, rowdict):
    # XXX tablename not sanitized
    # XXX test for allowed keys is case-sensitive
    # filter out keys that are not column names
    cursor.execute("describe %s" % tablename)
    allowed_keys = set(row[0] for row in cursor.fetchall())
    keys = allowed_keys.intersection(rowdict)
    if len(rowdict) > len(keys):
        unknown_keys = set(rowdict) - allowed_keys
        print >> sys.stderr, "skipping keys:", ", ".join(unknown_keys)
    columns = ", ".join(keys)
    values_template = ", ".join(["%s"] * len(keys))
    sql = "insert into %s (%s) values (%s)" % (
        tablename, columns, values_template)
    values = tuple(rowdict[key] for key in keys)
    cursor.execute(sql, values)

filename = ...
tablename = ...
db = MySQLdb.connect(...)
cursor = db.cursor()
with open(filename) as instream:
    row = json.load(instream)
add_row(cursor, tablename, row)
Peter
If you know your inputs will always be valid (the table name is valid, the columns are present in the table), and you're not importing from a JSON file as the example does, you can simplify this function; it will accomplish what you want. While it may initially seem like DictCursor would be helpful, DictCursor is useful for returning a dictionary of values, but it can't execute from a dict.
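A minimal sketch of that simplification, assuming the dict keys are trusted column names (insert_dict is a hypothetical helper, and the table name still must not come from untrusted input):
def insert_dict(cursor, tablename, rowdict):
    # build "INSERT INTO tbl (`col1`, `col2`) VALUES (%s, %s)" from the dict keys
    columns = ", ".join("`%s`" % key for key in rowdict)
    placeholders = ", ".join(["%s"] * len(rowdict))
    sql = "INSERT INTO %s (%s) VALUES (%s)" % (tablename, columns, placeholders)
    cursor.execute(sql, tuple(rowdict.values()))

insert_dict(cursor, 'instance_info_test', instance_insert)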