I'm using the following SQLAlchemy query in a Flask application to retrieve the next row, given the id of a current row.
current_card_id = int(card_id)
...
next_card = Card.query.filter_by(setId=set_id).filter_by(id>current_card_id).first()
However I get the following error:
TypeError: '>' not supported between instances of 'builtin_function_or_method' and 'int'
I'm not sure why this happens.
Thank you
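A likely cause: inside the call to filter_by(...), the bare name id resolves to Python's builtin id() function, not the model column, so id > current_card_id compares a function object with an int before SQLAlchemy ever sees the expression. Also, filter_by() only accepts keyword equality tests; a comparison needs filter() with the model attribute. A minimal sketch (the corrected query is an untested assumption based on the model shown above):

```python
# The bare name `id` is Python's builtin id() function, which is what
# produces the TypeError in the question:
current_card_id = 5
try:
    id > current_card_id
except TypeError as exc:
    print(exc)  # '>' not supported between instances of 'builtin_function_or_method' and 'int'

# filter_by() takes keyword equality arguments only; a comparison
# expression needs filter() with the column attribute, e.g.:
# next_card = (Card.query
#              .filter_by(setId=set_id)
#              .filter(Card.id > current_card_id)
#              .first())
```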
I am building a stats table that tracks user data points. The JSON is dynamic and can grow to multiple levels. I'm getting an error about invalid JSON from json_merge_patch, a function I have used often before, and I cannot figure out why it is giving me the following error:
ERROR: Invalid JSON text in argument 1 to function json_merge_patch: "Invalid value." at position 0.
INSERT INTO
  stats.daily_user_stats
VALUES
  (null, '2022-02-02', 1, 18, 3, '{"pageviews":{"user":1}}')
ON DUPLICATE KEY UPDATE
  jdata =
    IF(
      json_contains_path(jdata, 'one', '$.pageviews.user'),
      json_set(jdata, '$.pageviews.user', CAST(json_extract(jdata, '$.pageviews.user') + 1 AS UNSIGNED)),
      json_merge_patch('jdata', '{"pageviews":{"user":1}}')
    )
Any help identifying why the JSON I'm passing to json_merge_patch is considered invalid would be appreciated.
Solved. The first argument was the string literal 'jdata' instead of the column jdata; the five-character string jdata is not valid JSON, hence the error at position 0. The call should look like this:
json_merge_patch(jdata,'{"pageviews":{"user":1}}')
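For context, MySQL's json_merge_patch follows RFC 7386 merge-patch semantics: objects merge recursively, a null patch value deletes a key, and any non-object patch value replaces the target wholesale. A minimal Python sketch of those rules (a hypothetical helper for illustration, not part of any library used above):

```python
def json_merge_patch(target, patch):
    """RFC 7386 merge patch: dicts merge recursively, None deletes a key,
    and any non-dict patch value replaces the target wholesale."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result

print(json_merge_patch({"pageviews": {"user": 1}}, {"pageviews": {"user": 2}}))
# {'pageviews': {'user': 2}}
```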
What is the correct way to query JSON_LENGTH(json_column) in SQLAlchemy for MySQL?
I have tried this query but get the error: (pymysql.err.InternalError) (1305, 'FUNCTION test_db.json_array_length does not exist')
Query:
self.session.query(func.json_array_length(DbModelName.data)).all()
Please see this post: SqlAlchemy: Querying the length json field having an array.
For MySQL, change the function to json_length; MySQL does not define json_array_length, which is why you get the error above. This works fine.
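SQLAlchemy's func namespace simply renders a SQL call with whatever attribute name you access, so the fix is just to use the function name your backend actually defines. A minimal sketch (the column name data is illustrative):

```python
from sqlalchemy import column, func

# func.<name> renders a SQL function call with that exact name,
# so pick the one your backend defines (json_length on MySQL):
expr = func.json_length(column("data"))
print(str(expr))  # json_length(data)
```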
I'm currently trying to create an ORM model in Peewee for an application. However, I'm running into an issue when querying a specific model. After some debugging, I found that queries fail for whatever is defined below a specific model.
I've moved models around (keeping the ForeignKeys intact), and for some odd reason it's only what is below a specific class (User).
def get_user(user_id):
    user = User.select().where(User.id == user_id).get()
    return user

class BaseModel(pw.Model):
    """A base model that will use our MySQL database"""
    class Meta:
        database = db

class User(BaseModel):
    id = pw.AutoField()
    steam_id = pw.CharField(max_length=40, unique=True)
    name = pw.CharField(max_length=40)
    admin = pw.BooleanField(default=False)
    super_admin = pw.BooleanField()
    # ...
I expected to be able to query Season like every other model. However, this is the Peewee error I run into when I query for the User with id 1 (i.e. User.select().where(User.id==1).get() or get_user(1)); the error is returned with the value not even being interpolated:
UserDoesNotExist: <Model: User> instance matching query does not exist:
SQL: SELECT `t1`.`id`, `t1`.`steam_id`, `t1`.`name`, `t1`.`admin`, `t1`.`super_admin` FROM `user` AS `t1` WHERE %s LIMIT %s OFFSET %s
Params: [False, 1, 0]
Does anyone have a clue as to why I'm getting this error?
Read the error message. It is telling you that the user with the given ID does not exist.
Peewee raises an exception if the call to .get() does not match any rows. If you want "get or None if not found" you can do a couple things. Wrap the call to .get() with a try / except, or use get_or_none().
http://docs.peewee-orm.com/en/latest/peewee/api.html#Model.get_or_none
Well, I think I figured it out. Instead of querying with a where clause, I just did User.get(1), and that seems to do the trick. More reading shows there is a get_by_id as well.
I have two spark dataframes that I am trying to join. I'm trying to join the two dataframes by the second column ("C1")
Dataframe 1: a
Dataframe 2: b
I load each df like this (the CSV data is stored in snappy-compressed files): df = sqlContext.read.format("com.databricks.spark.csv").option("quoteMode", "NONE").option("delimiter", "|").load("/path/path/path")
I ran this code:
joined = a.join(b, a.C1==b.C1)
This runs immediately; then, when I try to run .head() on this joined dataframe, I get the following error:
ERROR CsvRelation$: Exception while parsing line:
jkjsdklfsd9234lj23234hgy3234|394583495345|5|803|90245|A|NULL|HR44-200|3273205975|N|
Pacific|Y|asdf|asdf|437320597|023861998815|-1|NULL|2018-10-24 20:26:38|2018-10-24
07:53:17|2018-10-19 02:30:19|2018-10-24 20:26:38|Stuff|2019-04-01
12:10:02|2017-10-19 01:39:54|2037-01-01 00:00:00|2017-10-24
13:54:05|N|Y|N|HR54-500|"1":"HR54","2":"C51-500".
java.io.IOException: (line 1) invalid char between encapsulated token and delimiter
After looking online, it seems that the quotes at the end of the line are the issue, but I don't know how to deal with them. Any suggestions?
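What the error suggests: the parser treats " as a quote character, and the " appearing mid-field in "1":"HR54" looks like an encapsulated token abutting other characters. A common workaround (an assumption here, not verified against this exact dataset) is to disable quoting entirely, e.g. .option("quote", "\u0000") in spark-csv so no character is treated as a quote. Python's stdlib csv module illustrates the idea:

```python
import csv
import io

line = 'HR54-500|"1":"HR54","2":"C51-500"\n'

# With quoting disabled, the embedded double quotes are treated as
# ordinary characters and the line splits cleanly on the delimiter:
rows = list(csv.reader(io.StringIO(line), delimiter="|", quoting=csv.QUOTE_NONE))
print(rows[0])  # ['HR54-500', '"1":"HR54","2":"C51-500"']
```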
I am trying to pass a date selected from a date picker into the SQL query in my Python code. I also tried using encode('utf-8') to get rid of the unicode string, but I am still getting the error.
I am new to Python. Can anyone please help me figure out how to solve this problem? I am using Flask to create the webpage.
if request.method == 'POST':
    dateval2 = request.form['datepick']
    dateval = dateval2.encode('utf-8')
    result = ("SELECT * FROM OE_TAT where convert(date,Time_IST)='?'", dateval)
    df = pd.read_sql_query(result, connection)
Error:
pandas.io.sql.DatabaseError
DatabaseError: Execution failed on sql '("SELECT * FROM OE_TAT where convert(date,Time_IST)='?'", '2015-06-01')': The first argument to execute must be a string or unicode query.
You are providing a tuple to read_sql_query, while the first argument (the query) has to be a string. That's why it gives the error "The first argument to execute must be a string or unicode query".
You can pass the parameter like this:
result = "SELECT * FROM OE_TAT where convert(date,Time_IST)=?"
df = pd.read_sql_query(result, connection, params=(dateval,))
Note that the use of ? depends on the driver you are using (there are different ways to specify parameters, see https://www.python.org/dev/peps/pep-0249/#paramstyle). It is possible you will have to use %s instead of ?.
You could also format the string beforehand, like result = "SELECT * FROM OE_TAT where convert(date,Time_IST)='{0}'".format(dateval); however, this is not recommended, since building queries by string formatting leaves you open to SQL injection.
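A minimal runnable sketch of the separated query/params pattern, using the stdlib sqlite3 module (qmark paramstyle) in place of the original connection purely for illustration; the table and values mirror the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE OE_TAT (Time_IST TEXT, value INTEGER)")
conn.execute("INSERT INTO OE_TAT VALUES ('2015-06-01', 1)")

# The query is a plain string; the parameters travel separately,
# matching the pd.read_sql_query(sql, con, params=...) signature:
rows = conn.execute(
    "SELECT * FROM OE_TAT WHERE Time_IST = ?", ("2015-06-01",)
).fetchall()
print(rows)  # [('2015-06-01', 1)]
```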