How to increment the year on a datetimefield in Django with update()? - mysql

Is there a way to increment the year on filtered objects using the update() method?
I am using:
python 2.6.5
django 1.2.1 final
mysql Ver 14.14 Distrib 5.1.41
I know it's possible to do something like this:
today = datetime.datetime.today()
for event in Event.objects.filter(end_date__lt=today).iterator():
    event.start_date = event.start_date + datetime.timedelta(365)
    event.end_date = event.end_date + datetime.timedelta(365)
    event.save()
However, in some cases, I would prefer to use the update() method.
# This does not work..
Event.objects.all().update(
    start_date=F('start_date') + datetime.timedelta(365),
    end_date=F('end_date') + datetime.timedelta(365)
)
With the example above, I get:
Warning: Truncated incorrect DOUBLE value: '365 0:0:0'
The SQL query it is trying to run is:
UPDATE `events_event` SET `start_date` = `events_event`.`start_date` + 365 days, 0:00:00, `end_date` = `events_event`.`end_date` + 365 days, 0:00:00
I found something in the MySQL manual, but this is raw SQL!
SELECT DATE_ADD('2008-12-15', INTERVAL 1 YEAR);
Any idea?

One potential cause of the "Warning: Data truncated for column X" error is adding a timedelta with a non-whole-day value to a DateField: it is fine in Python, but fails when written to the MySQL DB. With a DateTimeField it works, since the precision of the persisted field matches the precision of the timedelta.
For example:
>>> from django.db.models import F
>>> from datetime import timedelta
>>> from myapp.models import MyModel
>>> [field for field in MyModel._meta.fields if field.name == 'valid_until'][0]
<django.db.models.fields.DateField object at 0x3d72fd0>
>>> [field for field in MyModel._meta.fields if field.name == 'timestamp'][0]
<django.db.models.fields.DateTimeField object at 0x43756d0>
>>> MyModel.objects.filter(pk=1).update(valid_until=F('valid_until') + timedelta(days=3))
1L
>>> MyModel.objects.filter(pk=1).update(valid_until=F('valid_until') + timedelta(days=3.5))
Traceback (most recent call last):
...
Warning: Data truncated for column 'valid_until' at row 1
>>> MyModel.objects.filter(pk=1).update(timestamp=F('timestamp') + timedelta(days=3.5))
1L

Quick but ugly:
>>> a.created.timetuple()
time.struct_time(tm_year=2000, tm_mon=11, tm_mday=2, tm_hour=2, tm_min=35, tm_sec=14, tm_wday=3, tm_yday=307, tm_isdst=-1)
>>> time = list(a.created.timetuple())
>>> time[0] = time[0] + 1
>>> time
[2001, 11, 2, 2, 35, 14, 3, 307, -1]
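To finish the thought, here is a sketch of turning that bumped timetuple back into a datetime; this is my assumption about the intended next step, not part of the original snippet, and `a.created` is taken from the REPL session above.
import datetime

t = list(a.created.timetuple())
t[0] += 1  # bump the year
# Rebuild from the first six fields (year..second); this drops microseconds
# and raises ValueError if the original date was Feb 29.
a.created = datetime.datetime(*t[:6])
a.save()  # persist the change (assuming `a` is a model instance)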

import datetime
from dateutil.relativedelta import relativedelta

yourdate = datetime.datetime(2010, 11, 4, 10, 14, 54, 518749)
yourdate += relativedelta(years=+1)
relativedelta accepts arguments for every unit from microseconds up to years...
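If update() keeps fighting you, here is a sketch (my adaptation, assuming the Event model from the question) that combines relativedelta with the loop approach; it issues one query per row, but handles leap years correctly, unlike timedelta(365):
import datetime
from dateutil.relativedelta import relativedelta

today = datetime.datetime.today()
for event in Event.objects.filter(end_date__lt=today).iterator():
    event.start_date += relativedelta(years=1)
    event.end_date += relativedelta(years=1)
    event.save()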

Have you seen this: Increasing a datetime field with queryset.update? I also remember successfully using queryset.update() with a timedelta on the MySQL backend.

Related

Error message when importing .csv files into MySQL using Python

I am a novice when it comes to Python and I am trying to import a .csv file into an existing MySQL table. I have tried several different ways but cannot get anything to work. Below is my latest attempt (not the best syntax, I'm sure). I originally tried using %s instead of ?, but that did not work. Then I saw an example with the question mark, but that clearly isn't working either. What am I doing wrong?
import mysql.connector
import pandas as pd

db = mysql.connector.connect(**Login Info**)
mycursor = db.cursor()

df = pd.read_csv("CSV_Test_5.csv")

insert_data = (
    "INSERT INTO company_calculations.bs_import_test(ticker, date_updated, bs_section, yr_0, yr_1, yr_2, yr_3, yr_4, yr_5, yr_6, yr_7, yr_8, yr_9, yr_10, yr_11, yr_12, yr_13, yr_14, yr_15)"
    " VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)"
)

for row in df.itertuples():
    data_inputs = (row.ticker, row.date_updated, row.bs_section, row.yr_0, row.yr_1, row.yr_2, row.yr_3, row.yr_4, row.yr_5, row.yr_6, row.yr_7, row.yr_8, row.yr_9, row.yr_10, row.yr_11, row.yr_12, row.yr_13, row.yr_14, row.yr_15)
    mycursor.execute(insert_data, data_inputs)

db.commit()
Error Message:
> Traceback (most recent call last):
>   File "C:\...\Python_Test\Excel_Test_v1.py", line 33, in <module>
>     mycursor.execute(insert_data, data_inputs)
>   File "C:\...\mysql\connector\cursor_cext.py", line 325, in execute
>     raise ProgrammingError(
> mysql.connector.errors.ProgrammingError: Not all parameters were used in the SQL statement
MySQL Connector/Python supports named parameters (the pyformat paramstyle), which also covers printf-style parameters (format).
>>> import mysql.connector
>>> mysql.connector.paramstyle
'pyformat'
According to PEP-249 (DB API level 2.0) the definition of pyformat is:
pyformat: Python extended format codes, e.g. ...WHERE name=%(name)s
Example:
>>> cursor.execute("SELECT %s", ("foo", ))
>>> cursor.fetchall()
[('foo',)]
>>> cursor.execute("SELECT %(var)s", {"var" : "foo"})
>>> cursor.fetchall()
[('foo',)]
AFAIK the qmark paramstyle (using a question mark as the placeholder) is only supported by MariaDB Connector/Python.
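So, as a sketch, the INSERT from the question would switch to %s (format-style) placeholders; the column list stays the same:
insert_data = (
    "INSERT INTO company_calculations.bs_import_test"
    "(ticker, date_updated, bs_section, yr_0, yr_1, yr_2, yr_3, yr_4, yr_5, yr_6,"
    " yr_7, yr_8, yr_9, yr_10, yr_11, yr_12, yr_13, yr_14, yr_15)"
    " VALUES(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)"
)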

How to stop DjangoJSONEncoder from truncating microseconds datetime objects?

I have a dictionary with a datetime object inside it and when I try to json dump it, Django truncates the microseconds:
> dikt
{'date': datetime.datetime(2020, 6, 22, 11, 36, 25, 763835, tzinfo=<DstTzInfo 'Africa/Nairobi' EAT+3:00:00 STD>)}
> json.dumps(dikt, cls=DjangoJSONEncoder)
'{"date": "2020-06-22T11:36:25.763+03:00"}'
How can I preserve all the 6 microsecond digits?
DjangoJSONEncoder follows the ECMA-262 specification, which only allows millisecond precision for datetimes.
You can easily overcome this by introducing your own custom encoder.
import datetime
import json

from django.core.serializers.json import DjangoJSONEncoder


class MyCustomEncoder(DjangoJSONEncoder):
    def default(self, obj):
        if isinstance(obj, datetime.datetime):
            # Plain isoformat() keeps the full microsecond precision.
            r = obj.isoformat()
            if r.endswith('+00:00'):
                r = r[:-6] + 'Z'
            return r
        return super(MyCustomEncoder, self).default(obj)


datetime_object = datetime.datetime.now()
print(datetime_object)
print(json.dumps(datetime_object, cls=MyCustomEncoder))

>>> 2020-06-22 11:54:29.127120
>>> "2020-06-22T11:54:29.127120"

too many values to unpack (expected 2) lda

I received the error "too many values to unpack (expected 2)" when running the code below. Can anyone help me? I have added more details.
import gensim
import gensim.corpora as corpora

dictionary = corpora.Dictionary(doc_clean)
doc_term_matrix = [dictionary.doc2bow(doc) for doc in doc_clean]

Lda = gensim.models.ldamodel.LdaModel
ldamodel = Lda(doc_term_matrix, num_topics=3, id2word=dictionary, passes=50, per_word_topics=True, eval_every=1)
print(ldamodel.print_topics(num_topics=3, num_words=20))

for i in range(0, 46):
    for index, score in sorted(ldamodel[doc_term_matrix[i]], key=lambda tup: -1*tup[1]):
        print("subject", i)
        print("\n")
        print("Score: {}\t \nTopic: {}".format(score, ldamodel.print_topic(index, 6)))
Let's focus on the loop, since that is where the error is being raised, and take it one iteration at a time.
>>> import numpy as np  # just so we can use np.shape()
>>> i = 0  # value in the first loop iteration
>>> x = sorted(ldamodel[doc_term_matrix[i]], key=lambda tup: -1*tup[1])
>>> np.shape(x)
(3, 3, 2)
>>> for index, score in x:
...     pass
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: too many values to unpack (expected 2)
This is where your error comes from. You are expecting each item to unpack into 2 elements, but what comes back is a (3, 3, 2) nested structure with no obvious way to unpack it into two values. I don't personally have enough experience with this subject matter to infer what you mean to be doing; I can only show you where the problem comes from. Hope this helps!
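If I am reading the gensim API correctly (an assumption on my part, not something shown above), the three-way shape comes from building the model with per_word_topics=True: indexing the model then returns a (document topics, word topics, phi values) tuple rather than a flat list of (topic, score) pairs. A minimal sketch of unpacking only the document topics under that assumption:
# Assumes gensim's LdaModel with per_word_topics=True yields a
# (doc_topics, word_topics, phi_values) tuple when indexed with a bow.
for i in range(len(doc_term_matrix)):
    doc_topics, word_topics, phi_values = ldamodel[doc_term_matrix[i]]
    for index, score in sorted(doc_topics, key=lambda tup: -tup[1]):
        print("subject", i)
        print("Score: {}\tTopic: {}".format(score, ldamodel.print_topic(index, 6)))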

plotting a Date from a .csv file in pylab

I'm trying to plot dates from a .csv file column against three other columns of numbers. I'm new to Python and have so far managed to import the columns and read them as an array, but I can't seem to convert them with the datetime module and plot the dates along the x axis with my data.
Please can anyone help?
At the minute I keep getting the error message:
Traceback (most recent call last):
  File "H:\AppliedGIS\Python\woops.py", line 24, in <module>
    date = datetime.datetime.strptime['x', '%d/%m/%Y']
AttributeError: type object 'datetime.datetime' has no attribute 'datetime'
But I'm sure I'm going wrong in more than one place...
The data itself is formatted in four columns and when printed looks like this: ('04/03/2013', 7.0, 12.0, 17.0) ('11/03/2013', 23.0, 15.0, 23.0).
Here is the complete code
import csv
import numpy as np
import pylab as pl
import datetime
from datetime import datetime
data = np.genfromtxt('H:/AppliedGIS/Python/AssignmentData/GrowthDistribution/full.csv', names=True, usecols=(0, 1, 2, 3), delimiter= ',', dtype =[('Date', 'S10'),('HIGH', '<f8'), ('Medium', '<f8'), ('Low', '<f8')])
print data
x = [foo['Date'] for foo in data]
y = [foo['HIGH'] for foo in data]
y2 = [foo['Medium'] for foo in data]
y3 = [foo['Low'] for foo in data]
print x, y, y2, y3
dates = []
for x in data:
    date = datetime.datetime.strptime['x', '%d/%m/%Y']
    dates.append(date)
pl.plot(data[:, x], data[:, y], '-r', label= 'High Stocking Rate')
pl.plot(data[:, x], data[:, y2], '-g', label= 'Medium Stocking Rate')
pl.plot(data[:, x], data[:, y3], '-b', label= 'Low Stocking Rate')
pl.title('Amount of Livestock Grazing per hectare', fontsize=18)
pl.ylabel('Livestock per ha')
pl.xlabel('Date')
pl.grid(True)
pl.ylim(0,100)
pl.show()
The problem is in the way you have imported datetime.
The datetime module contains a class, also called datetime. At the moment, you are just importing the class as datetime, from which you can use the strptime method, like so:
from datetime import datetime
...
x = [foo['Date'] for foo in data]
...
dates = []
for i in x:
    date = datetime.strptime(i, '%d/%m/%Y')
    dates.append(date)
Alternatively, you can import the complete datetime module, and then access the datetime class using datetime.datetime:
import datetime
...
x = [foo['Date'] for foo in data]
...
dates = []
for i in x:
    date = datetime.datetime.strptime(i, '%d/%m/%Y')
    dates.append(date)
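As a follow-up sketch of my own (not part of the original answer): once the dates are parsed, pylab can take the datetime objects directly on the x axis, so the structured-array indexing in the question's plot calls can be replaced with the plain lists built earlier:
# Assumes `dates`, `y`, `y2` and `y3` as built above.
pl.plot(dates, y, '-r', label='High Stocking Rate')
pl.plot(dates, y2, '-g', label='Medium Stocking Rate')
pl.plot(dates, y3, '-b', label='Low Stocking Rate')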

New SQLAlchemy Add Record Error

Newbie to SQLAlchemy.
I'm having trouble adding a record. I modeled the add after the tutorial, which passes multiple (albeit hard-coded) values. Attached are the routine and the error.
StackOverflow thinks my 'explanation to code' ratio is off, so I'm adding additional explanation so I can submit my query.
import pdb

from table import wrl
from sqlalchemy import or_, and_, desc, asc
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

rs = create_engine('credentials', echo=True)
aws = create_engine('credentials', echo=True)

rs_session = sessionmaker(bind=rs)
aws_session = sessionmaker(bind=aws)

rs = rs_session()
aws = aws_session()

# pdb.set_trace()
y = rs.query(wrl).order_by(wrl.UUID_PK).first()
cat = y.Added_Timestamp  # now we have the oldest record's time stamp value
query_string = cat[:8] + "%"  # now we have the oldest record's date i.e. substring(20111215_121212;1;8)
move_me = rs.query(wrl).filter(wrl.Added_Timestamp.like(query_string)).limit(10)
pdb.set_trace()

for x in move_me:
    # pdb.set_trace()
    wrl_rec = wrl(x.UUID_PK,
                  x.Web_Request_Headers,
                  x.Web_Request_Body,
                  x.Current_Machine,
                  x.Current_Machine,
                  x.ResponseBody,
                  x.Full_Log_Message,
                  x.Remote_Address,
                  x.basic_auth_username,
                  x.Request_Method,
                  x.Request_URI,
                  x.Request_Protocol,
                  x.Time_To_Process_Request,
                  x.User_ID,
                  x.Error,
                  x.Added_Timestamp,
                  x.Processing_Time_Milliseconds,
                  x.mysql_timestamp)
    aws.add(wrl_rec)
    aws.commit()
    print 'added %s ' % x.UUID_PK
Traceback (most recent call last):
  File "migrate.py", line 47, in <module>
    x.mysql_timestamp)
TypeError: __init__() takes exactly 1 argument (19 given)
Any suggestions appreciated.
The problem is not really SQLAlchemy related. My conjecture is that your constructor (wrl.__init__(self, ...)) is either not defined or does not accept any positional arguments, yet you are passing 18 of them when creating the object for wrl_rec.
So basically, the error message is pretty much indicating your problem.
On a side note, does order_by(wrl.UUID_PK) really return the oldest record by the timestamp, as your comment a few lines below indicates? Somehow I highly doubt that.
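A minimal sketch of the likely fix, assuming wrl is a declarative model (the default declarative constructor accepts keyword arguments only, not positional ones):
# Hypothetical rewrite of the loop body: pass each column as a keyword
# argument so the default declarative __init__ can accept it.
for x in move_me:
    wrl_rec = wrl(
        UUID_PK=x.UUID_PK,
        Web_Request_Headers=x.Web_Request_Headers,
        Web_Request_Body=x.Web_Request_Body,
        # ... remaining columns passed the same way ...
        Added_Timestamp=x.Added_Timestamp,
        mysql_timestamp=x.mysql_timestamp,
    )
    aws.add(wrl_rec)
aws.commit()
Alternatively, define an explicit __init__ on wrl that takes those 18 values positionally.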