I have tried the three connection strings below, but none of them works.
Error: TypeError: Invalid argument(s) 'fast_executemany' sent to create_engine(), using configuration MSDialect_pyodbc/QueuePool/Engine. Please check that the keyword arguments are appropriate for this combination of components.
engine = create_engine('mysql+pyodbc://{DB}:{Password}@{server}', fast_executemany=True)
engine = create_engine('mysql+mysqlconnector://{DB}:{Password}@{server}', fast_executemany=True)
engine = create_engine('mysql+pymysql://{DB}:{Password}@{server}', fast_executemany=True)
I've got a confusing issue in Airflow which I don't understand.
I have a SQL script at DML/analytics/my_script.sql. The MySqlOperator works perfectly in normal circumstances, but not when I try to call it from a PythonOperator as follows. This is necessitated by needing to pass in XCOM values from another task:
def insert_func(**kwargs):
    run_update = MySqlOperator(
        sql='DML/analytics/my_script.sql',
        task_id='insert_func',
        mysql_conn_id="bi_mysql",
        params={
            "table_name": table_name,
            'ts': kwargs['task_instance'].xcom_pull(key='return_value', task_ids='get_existing_data')
        },
    )
    run_update.execute(context=kwargs['task_instance'])
with DAG("my_dag", **dag_params) as dag:
    with TaskGroup(group_id='insert') as insert:
        get_existing_data = PythonOperator(
            task_id='get_existing_data',
            python_callable=MySQLGetRecord,
            op_kwargs={
                'target_db_conn_id': 'bi_mysql',
                'target_db': 'analytics',
                'sql': f'SELECT invoice_date FROM analytics.{table_name} ORDER BY 1 DESC'
            }
        )
        insert = PythonOperator(
            task_id='insert',
            python_callable=insert_func
        )
        get_existing_data >> insert
The error I get is: MySQLdb._exceptions.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'DML/analytics/my_script.sql' at line 1")
Clearly it is trying to run the literal string passed in the sql parameter rather than using it as a file location. Why is this happening? Again, this works if I move the run_update task into the my_dag with clause, but I need to do it this way to get the XCOM value from get_existing_data, correct...?
When you use an operator as normal (i.e., to be run by Airflow), Airflow is responsible for the whole task lifecycle. This means Airflow handles the templating, executing pre_execute(), executing execute(), handling on_failure/retries, etc.
What you did is use an operator inside an operator: a PythonOperator that contains a MySqlOperator. In this case the inner operator (MySqlOperator) is just a regular Python class. While it's called an Operator, it is not a "real" Operator.
You don't get any of the lifecycle steps you might expect.
You may have already realised this, since in your own example you specifically triggered execute():
run_update.execute(context=kwargs['task_instance'])
Notice you didn't need to do this for the PythonOperator.
You can see in the code base that Airflow invokes render_templates before it invokes pre_execute() and before it invokes execute().
This means that if you want the MySqlOperator to be templated, you need to call the function that does the templating before you invoke execute(); see the sketch below.
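A minimal sketch of that, assuming Airflow 2.x, where operators expose render_template_fields(); note the inner operator still needs to be able to locate the .sql file on a template search path:

def insert_func(**kwargs):
    run_update = MySqlOperator(
        sql='DML/analytics/my_script.sql',
        task_id='insert_func',
        mysql_conn_id="bi_mysql",
        params={
            'ts': kwargs['task_instance'].xcom_pull(key='return_value',
                                                    task_ids='get_existing_data')
        },
    )
    # Airflow normally performs this rendering step itself, right before
    # pre_execute()/execute(); when invoking the operator by hand we must do
    # it ourselves, passing the full context dict (not just the task instance).
    run_update.render_template_fields(context=kwargs)
    run_update.execute(context=kwargs)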
That said, I strongly encourage you: do not use an operator inside an operator.
From your code I don't see a reason why you can't just use MySqlOperator directly, without the PythonOperator; but should there be a reason, the proper way to handle it is to create a CustomMySqlOperator that handles the logic you seek. By doing so you will not have problems with using .sql files.
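A sketch of that idea, assuming Airflow 2.x (the class name and XCom key are illustrative): override render_template_fields() so the XCom value lands in params before Airflow renders the .sql file.

from airflow.providers.mysql.operators.mysql import MySqlOperator

class CustomMySqlOperator(MySqlOperator):
    def render_template_fields(self, context, jinja_env=None):
        # Pull the upstream value first, so {{ params.ts }} resolves when
        # Airflow renders the templated .sql file.
        self.params['ts'] = context['ti'].xcom_pull(
            key='return_value', task_ids='get_existing_data')
        super().render_template_fields(context, jinja_env)

Used like a normal MySqlOperator inside the DAG's with clause, this keeps the full task lifecycle, so the .sql file is read and templated as usual.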
I'm getting the following error message when trying to reflect any of my SQL views:
sqlalchemy/dialects/mysql/reflection.py", line 306, in _describe_to_create
buffer.append(" ".join(line))
TypeError: sequence item 2: expected str instance, bytes found
I have tried using both the autoload_with and autoload=True options in my Table() constructor, to no avail.
I have the appropriate permissions on my view. My query is pretty simple:
company_country = Table('company_country', metadata, autoload_with=engine)
query = select(company_country.c.country)
return query
I've tried the inspect utility and it does not list my SQL view; nor does reflecting all tables, described below the Views section on this page: https://docs.sqlalchemy.org/en/14/core/reflection.html#reflecting-views
I'm using SQLAlchemy 1.4.32, Python 3.x and MySQL 8.0.28 on a Mac, if that's any help.
I should add that I can query my SQL views using the text() construct, but it would be far preferable to use select() if possible.
Any tips appreciated
I was using the mysql-connector driver for interop with other code, but after switching to mysqlclient I was able to reflect the views.
https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.mysqldb
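A minimal sketch of the switch, assuming mysqlclient is installed (pip install mysqlclient); credentials, host and schema are placeholders:

from sqlalchemy import create_engine, MetaData, Table, select

# mysql+mysqldb:// selects the mysqlclient driver instead of mysql-connector.
engine = create_engine('mysql+mysqldb://user:password@localhost/mydb')
metadata = MetaData()
# Reflecting the view then works the same way as reflecting a table.
company_country = Table('company_country', metadata, autoload_with=engine)
query = select(company_country.c.country)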
I see lots of versions of this question about sqlite, but mine is about MySQL.
My entire script is like this:
import pandas as pd
import sqlalchemy
import config  # local module holding the credentials

df = pd.read_csv("df.csv")
engine = sqlalchemy.create_engine('mysql+mysqlconnector://{0}:{1}@{2}/{3}'.
                                  format(config.user, config.passwd,
                                         config.host, config.db))
df.to_sql('SQL_table', con=engine, if_exists='append', index=False)
Then it returns the error:
'Engine' object has no attribute 'cursor'
I googled and followed some solutions; one of them was:
df = pd.read_csv("df.csv")
engine = sqlalchemy.create_engine('mysql+mysqlconnector://{0}:{1}@{2}/{3}'.
                                  format(config.user, config.passwd,
                                         config.host, config.db))
connection = engine.raw_connection()
df.to_sql('SQL_table', con=connection, if_exists='append', index=False)
Then the error changed to:
DatabaseError: Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': Not all parameters were used in the SQL statement
I am using MySQL, not sqlite, so I don't understand why it returns this error.
So basically I think that solution is not working. Would anyone please tell me how to fix this problem? My SQLAlchemy version is 1.4.27.
I have solved this: I reset my Mac, came back to VS Code, and started the notebook again, and the problem was gone.
Before that I also tried the command
reset
but that didn't do the trick. It had to be a machine hard reset.
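For what it's worth, a likely explanation (my assumption, not something I have confirmed): pandas only takes the SQLAlchemy code path when it can import sqlalchemy inside the running kernel; anything else passed as con is treated as a raw DBAPI connection, for which pandas falls back to its legacy SQLite path, which is where the sqlite_master query comes from. A hard reset gave the notebook a kernel in which both libraries resolve again. A minimal sketch of the standard pattern once the environment is healthy (credentials are placeholders):

import pandas as pd
from sqlalchemy import create_engine

# Pass the Engine itself to pandas, not engine.raw_connection(); pandas then
# emits MySQL-flavored SQL through SQLAlchemy instead of the SQLite fallback.
engine = create_engine('mysql+mysqlconnector://user:password@localhost/mydb')
df = pd.read_csv("df.csv")
df.to_sql('SQL_table', con=engine, if_exists='append', index=False)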
We are setting up HA with InnoDB Cluster in MySQL. For that we need 3 instances to perform failover.
While running this statement in the MySQL Shell (JS mode) we got an 'invalid object member' error.
Query: dba.deploySandboxInstance(3310);
Error: Invalid object member deploySanboxInstance (AttributeError)
The error message is clear: Invalid object member deploySanboxInstance (AttributeError)
The function is named "deploySandboxInstance".
Note that sandboxes are not suitable for production environments and should be used for testing purposes only.
Please read the user guide as it contains all the information you're looking for:
https://dev.mysql.com/doc/refman/8.0/en/mysql-innodb-cluster-userguide.html
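For reference, the corrected call in JS mode is dba.deploySandboxInstance(3310). The shell also has a Python mode (\py), where the same AdminAPI method is exposed with snake_case naming:

# MySQL Shell, Python mode (\py): deploy a sandbox instance on port 3310.
# Per the note above, sandboxes are for testing purposes only.
dba.deploy_sandbox_instance(3310)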
I'm trying to use the autoscale module from boto. I managed to create an API connection and get all groups in the default region (us-east-1).
conn = AutoScaleConnection(ACCESS_KEY,SECRET_KEY)
print conn.get_all_groups()
Now I need to create a connection in the region eu-west-1, but I always get an error.
conn = AutoScaleConnection(ACCESS_KEY,SECRET_KEY)
autoscale = boto.ec2.autoscale.connect_to_region('eu-west-1')
Error:
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials
If I try this:
autoscale = boto.ec2.autoscale.connect_to_region('eu-west-1',ACCESS_KEY,SECRET_KEY)
Error:
TypeError: connect_to_region() takes exactly 1 argument (3 given)
You have to pass the additional parameters as keyword arguments, e.g.:
boto.ec2.autoscale.connect_to_region('us-west-2', aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY)
Alternatively, you could put your credentials in a boto config file (~/.boto) or in environment variables and boto will find them.
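A sketch of that config file, using boto's standard [Credentials] section (the values are placeholders):

# ~/.boto
[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY

With that in place, boto.ec2.autoscale.connect_to_region('eu-west-1') works without passing credentials explicitly.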