I am using a PostgreSQL database with SQLAlchemy.
When I run the query select now() directly I get a result that can be converted into a string, but I can't produce the same output using SQLAlchemy.
I have already tried the following import, which is not giving me the result I need:
from sqlalchemy import func
func is a proxy object that generates SQL function expressions, so func.now() will produce the column you want:
now = session.query(func.now()).scalar()
This returns a Python datetime object.
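For completeness, here is a minimal end-to-end sketch; the connection URL and credentials are placeholders you would replace with your own:

from sqlalchemy import create_engine, func
from sqlalchemy.orm import sessionmaker

# Placeholder connection string; adjust to your own database.
engine = create_engine('postgresql://user:password@localhost/mydb')
Session = sessionmaker(bind=engine)
session = Session()

# func.now() renders as the SQL function NOW(); scalar() returns the
# first column of the first row, here a timezone-aware datetime.
now = session.query(func.now()).scalar()
print(str(now))  # str() gives the textual form you saw from the raw query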
I am exporting the entire Neo4j database to JSON using the APOC APIs and then importing it again the same way. The import query executes successfully, but I cannot find any data in Neo4j.
Export query:
CALL apoc.export.json.all('complete-db.json',{useTypes:true, storeNodeIds:false})
Import query:
CALL apoc.load.json('complete-db.json')
When I execute:
MATCH (n) RETURN n
It shows no results.
This is a little confusing, but apoc.load.json just reads (loads) data from the JSON file/URL.
It doesn't import the data or create the graph; you need to create the nodes and/or relationships yourself using Cypher statements.
In this case you only read the file and did nothing with it, so the statement executed successfully. Your query isn't an import query, it's a JSON load query.
Refer to the following example of importing with apoc.load.json:
CALL apoc.load.json('complete-db.json') YIELD value
UNWIND value.items AS item
CREATE (i:Item {name: item.name, id: item.id})
apoc.import.json does what you need.
The export-import process:
Export:
CALL apoc.export.json.all('file:///complete-db.json', {useTypes:true, storeNodeIds:false})
Import:
CALL apoc.import.json("file:///complete-db.json")
(@rajendra-kadam explains why your version does not work, and this is the complementary API call to apoc.export.json.all that you were expecting.)
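If you are driving this from code, a minimal sketch with the official Neo4j Python driver might look like the following. The bolt URI and credentials are placeholders, and both calls assume the corresponding apoc.export.file.enabled / apoc.import.file.enabled settings are turned on in neo4j.conf:

from neo4j import GraphDatabase

# Placeholder URI and credentials; replace with your own.
driver = GraphDatabase.driver('bolt://localhost:7687', auth=('neo4j', 'secret'))

with driver.session() as session:
    # Export the whole graph to JSON.
    session.run("CALL apoc.export.json.all('file:///complete-db.json', "
                "{useTypes:true, storeNodeIds:false})")
    # Recreate the graph from that file.
    session.run("CALL apoc.import.json('file:///complete-db.json')")

driver.close()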
I am importing data into my Python 3 environment and then writing it to a MySQL database. However, there are a lot of different data tables, so writing out each INSERT statement isn't really pragmatic, plus some have 50+ columns.
Is there a good way to create a table in MySQL directly from a dataframe, and then send INSERT commands to that same table using a dataframe of the same format, without having to actually type out all the column names? I started trying to pull out the column names, format them, and concatenate everything into a string, but it gets extremely messy.
Ideally there is a function out there to directly handle this. For example:
import json
import pymysql
from pandas.io.json import json_normalize

# pull in some JSON data from an API
apiconn.request("GET", url, headers=datheaders)
eventres = apiconn.getresponse()
eventjson = json.loads(eventres.read().decode("utf-8"))

# create a dataframe from the data
eventtable = json_normalize(eventjson)

dbconn = pymysql.connect(host='hostval',
                         user='userval',
                         passwd='passval',
                         db='dbval')
cursor = dbconn.cursor()

# where sqltranslate() is some magic function that takes a dataframe and
# creates SQL commands that pymysql can execute
sql = sqltranslate(table='eventtable', fun='append')
cursor.execute(sql)
What you want is a way to abstract the generation of the SQL statements.
A library like SQLAlchemy will do a good job, including a powerful way to construct DDL, DML, and DQL statements without needing to directly write any SQL.
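For the dataframe case specifically, pandas already builds on SQLAlchemy: DataFrame.to_sql will create the table from the dataframe's columns and insert its rows for you. A minimal sketch, reusing the placeholder credentials from the question:

import pandas as pd
from sqlalchemy import create_engine

# SQLAlchemy engine for MySQL via the pymysql driver.
engine = create_engine('mysql+pymysql://userval:passval@hostval/dbval')

# Creates the table if it doesn't exist and appends the dataframe's rows;
# no hand-written CREATE TABLE or INSERT statements needed.
eventtable.to_sql('eventtable', engine, if_exists='append', index=False)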
I am running an SQL query, which ends up returning * from table ABC.
I am running it in my Ruby on Rails code with the command below:
query:
sql = "SELECT * FROM ABC WHERE <condition>"
results = ActiveRecord::Base.connection.exec_query(sql)
I am getting the output as results, which is of type ActiveRecord::Result.
I am converting this to an array using the to_hash function provided by ActiveRecord::Result. However, that gives an array of Hashes.
Is there a way I can convert it to an array of ActiveRecord objects instead?
(I need to do further processing with each active record.)
For example: single_result.outdated? (where outdated is a field belonging to another table DEF, which is connected to table ABC via single_result.id)
Any help is appreciated. Thanks!
I want to specify the return values for a specific update in SQLAlchemy.
The documentation of the underlying update statement (sqlalchemy.sql.expression.update) says it accepts a "returning" argument, and the docs for the query object state that query.update() accepts a dictionary "update_args" whose contents are passed as arguments to the underlying update statement.
Therefore my code looks like this:
session.query(
    ItemClass
).update(
    {ItemClass.value: value_a},
    synchronize_session='fetch',
    update_args={
        'returning': (ItemClass.id,)
    }
)
However, this does not seem to work; it just returns the usual row-count integer.
My question is now: am I doing something wrong, or is this simply not possible with a query object, so that I need to manually construct statements or write raw SQL?
The full solution that worked for me was to use the SQLAlchemy table object directly.
You can get that table object and the columns from your model easily by doing
table = Model.__table__
columns = table.columns
Then with this table object, I can replicate what you did in the question:
from your_settings import db

update_statement = table.update().returning(table.c.id)\
    .where(columns.column_name == value_one)\
    .values(column_name='New column name')

result = db.session.execute(update_statement)
tuple_of_results = result.fetchall()
db.session.commit()
The tuple_of_results variable will contain the returned rows, here the ids of the updated records.
Note that you have to run db.session.commit() in order to persist the changes to the database, as the statement is currently running within a transaction.
You could perform an update based on the current value of a column by doing something like:
update_statement = table.update().returning(table.c.id)\
    .where(columns.column_name == value_one)\
    .values(like_count=columns.like_count + 1)
This would increment our numeric like_count column by one.
Hope this was helpful.
Here's a snippet from the SQLAlchemy documentation, adapted to run on an explicit connection:
# UPDATE..RETURNING
result = connection.execute(
    table.update()
         .returning(table.c.col1, table.c.col2)
         .where(table.c.name == 'foo')
         .values(name='bar')
)
print(result.fetchall())
I need SQLAlchemy to check a database table column for occurrences of python-pickled strings (such as S'foo'\np0\n.), unpickle them (which in this example would yield foo), and write them back. How do I do that (efficiently)? (Can I somehow abuse SQLAlchemy's PickleType?)
Okay, found a way using sqlalchemy.sql.expression.func.substr:
from sqlalchemy import and_
from sqlalchemy.sql.expression import func

table.update().where(
    and_(table.c.column.startswith("S'"),
         table.c.column.endswith("'\np0\n."))
).values({table.c.column:
          func.substr(table.c.column,
                      3,
                      func.char_length(table.c.column) - 8)
}).execute()
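The trailing .execute() relies on the old implicit-execution style, where the table is bound to an engine; with a plain engine the same statement can be run explicitly. A sketch, assuming an engine object for your database (the URL is a placeholder):

from sqlalchemy import and_, create_engine
from sqlalchemy.sql.expression import func

engine = create_engine('postgresql://user:password@localhost/mydb')  # placeholder URL

# Strip the 2-character pickle prefix S' and the 6-character suffix
# '\np0\n. , keeping only the payload in between.
stmt = table.update().where(
    and_(table.c.column.startswith("S'"),
         table.c.column.endswith("'\np0\n."))
).values({table.c.column:
          func.substr(table.c.column, 3,
                      func.char_length(table.c.column) - 8)})

with engine.begin() as conn:  # commits on success
    conn.execute(stmt)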