I trigger my DAG with the API from a Lambda function with a trigger on a file upload. I get the file path from the Lambda context,
e.g.: ingestion.archive.dev/yolo/PMS_2_DXBTD_RTBD_2021032800000020210328000000SD_20210329052822.XML
I put this variable in the API call and get it back as "{{ dag_run.conf['file_path'] }}".
At some point, I need to extract information from this string by splitting it on "/" inside the DAG, in order to use the S3CopyObjectOperator.
So here is the first approach I had:
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.s3_copy_object import S3CopyObjectOperator
from airflow.operators.python_operator import PythonOperator

default_args = {
    'owner': 'me',
}

s3_final_destination = {
    "bucket_name": "ingestion.archive.dev",
    "verification_failed": "validation_failed",
    "processing_failed": "processing_failed",
    "processing_success": "processing_success"
}


def print_var(file_path,
              file_split,
              source_bucket,
              source_path,
              file_name):
    data = {
        "file_path": file_path,
        "file_split": file_split,
        "source_bucket": source_bucket,
        "source_path": source_path,
        "file_name": file_name
    }
    print(data)


with DAG(
    "test_s3_transfer",
    default_args=default_args,
    description='Test',
    schedule_interval=None,
    start_date=datetime(2021, 4, 24),
    tags=['ingestion', "test", "context"],
) as dag:
    # {"file_path": "ingestion.archive.dev/yolo/PMS_2_DXBTD_RTBD_2021032800000020210328000000SD_20210329052822.XML"}
    file_path = "{{ dag_run.conf['file_path'] }}"
    file_split = file_path.split('/')
    source_bucket = file_split[0]
    source_path = "/".join(file_split[1:])
    file_name = file_split[-1]

    test_var = PythonOperator(
        task_id="test_var",
        python_callable=print_var,
        op_kwargs={
            "file_path": file_path,
            "file_split": file_split,
            "source_bucket": source_bucket,
            "source_path": source_path,
            "file_name": file_name
        }
    )

    file_verification_fail_to_s3 = S3CopyObjectOperator(
        task_id="file_verification_fail_to_s3",
        source_bucket_key=source_path,
        source_bucket_name=source_bucket,
        dest_bucket_key=f'{s3_final_destination["verification_failed"]}/{file_name}',
        dest_bucket_name=s3_final_destination["bucket_name"],
    )

    test_var >> file_verification_fail_to_s3
I use the PythonOperator to check the values I get, for debugging.
I have the right value in file_path, but in file_split I get ['ingestion.archive.dev/yolo/PMS_2_DXBTD_RTBD_2021032800000020210328000000SD_20210329052822.XML'].
It's my string inside a one-element list, not each part split out like ["ingestion.archive.dev", "yolo", "PMS_2_DXBTD_RTBD_2021032800000020210328000000SD_20210329052822.XML"].
So what's wrong here?
In Airflow, Jinja rendering is not done until task runtime. However, since the splitting of the file_path value as written is performed as top-level code (i.e. outside of an operator's execute() method or the DAG instantiation), file_split is initialized as ["{{ dag_run.conf['file_path'] }}"] by the Scheduler. When the task executes, Jinja renders each element of that already-built list, which is why you see ["ingestion.archive.dev/yolo/PMS_2_DXBTD_RTBD_2021032800000020210328000000SD_20210329052822.XML"] as the value: there was no "/" in the placeholder string at the time the split ran.
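The timing can be illustrated without Airflow (a sketch: str.replace stands in for the Jinja rendering step, and the short file name is made up):

```python
# Parse time (Scheduler): the template is still a placeholder string.
file_path = "{{ dag_run.conf['file_path'] }}"
file_split = file_path.split('/')   # no '/' in the placeholder -> 1 element
print(file_split)                   # ["{{ dag_run.conf['file_path'] }}"]

# Task runtime: rendering happens per element of the already-built list,
# so the list stays one element long.
rendered = [s.replace("{{ dag_run.conf['file_path'] }}",
                      "ingestion.archive.dev/yolo/file.XML")
            for s in file_split]
print(rendered)                     # ['ingestion.archive.dev/yolo/file.XML']
```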
Even if you explicitly split the string within the Jinja expression, like file_split="{{ dag_run.conf.file_path.split('/') }}", the value will be the string representation of the list, not a list object.
However, as of Airflow 2.1, you can set render_template_as_native_obj=True as a DAG parameter, which renders templated values to native Python objects. The string split will then render as a list, as you expect.
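A sketch of the DAG with the flag set (assuming Airflow 2.1+; the split is moved into a templated field so it only happens after rendering):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def print_split(file_split):
    print(file_split)  # a real list, e.g. ['ingestion.archive.dev', 'yolo', ...]

with DAG(
    "test_s3_transfer",
    schedule_interval=None,
    start_date=datetime(2021, 4, 24),
    render_template_as_native_obj=True,   # requires Airflow >= 2.1
) as dag:
    test_var = PythonOperator(
        task_id="test_var",
        python_callable=print_split,
        op_kwargs={"file_split": "{{ dag_run.conf['file_path'].split('/') }}"},
    )
```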
As a best practice, you should avoid top-level code, since it's executed on every Scheduler heartbeat and could lead to performance issues in your DAG and environment. I would suggest passing the "{{ dag_run.conf['file_path'] }}" expression as an argument to the function that needs it and executing the parsing logic within the function itself.
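A minimal sketch of that suggestion (copy_args_from_conf is a hypothetical name; in the DAG it would be wired up with op_kwargs={"file_path": "{{ dag_run.conf['file_path'] }}"} so the split runs only after rendering):

```python
def copy_args_from_conf(file_path):
    # Runs at task runtime, after Jinja has rendered file_path.
    file_split = file_path.split('/')
    return {
        "source_bucket": file_split[0],
        "source_path": "/".join(file_split[1:]),
        "file_name": file_split[-1],
    }

print(copy_args_from_conf(
    "ingestion.archive.dev/yolo/PMS_2_DXBTD_RTBD_2021032800000020210328000000SD_20210329052822.XML"))
```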
Related
I have this DAG code below.
import pendulum

from airflow.decorators import dag
from airflow.models import Variable
from custom_operators.profile_data_and_update_test_suite_operator import ProfileDataAndUpdateTestSuiteOperator
from custom_operators.validate_data_operator import ValidateDataOperator

connstring = Variable.get("SECRET_SNOWFLAKE_DEV_CONNECTION_STRING")

@dag('profile_and_validate_data', schedule_interval=None, start_date=pendulum.datetime(2021, 1, 1, tz="UTC"), catchup=False)
def taskflow():
    profile_data = ProfileDataAndUpdateTestSuiteOperator(
        task_id="profile_data",
        asset_name="{{ dag_run.conf['asset_name'] }}",
        data_format="sql",
        connection_string=connstring
    )

    validate_data = ValidateDataOperator(
        task_id="validate_data",
        asset_name="{{ dag_run.conf['asset_name'] }}",
        data_format="sql",
        connection_string=connstring,
        trigger_rule="all_done"
    )

    profile_data >> validate_data

dag = taskflow()
But the asset_name parameter shows up as the raw string "{{ dag_run.conf['asset_name'] }}" rather than the configuration value that is passed when you trigger the DAG and parsed with Jinja.
What am I doing wrong here?
BaseOperator has a field template_fields that contains all the field names whose values Airflow will replace during the run according to Jinja templating.
You need to declare the field "asset_name" in your custom operators (ProfileDataAndUpdateTestSuiteOperator, ValidateDataOperator):
template_fields: Sequence[str] = ("asset_name",)
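The rendering step can be illustrated without Airflow (a rough sketch: MockOperator and render_templates are stand-ins for BaseOperator and Airflow's pre-execute rendering, with str.replace simulating Jinja):

```python
class MockOperator:
    # Airflow renders only the attributes listed here before calling execute().
    template_fields = ("asset_name",)

    def __init__(self, asset_name):
        self.asset_name = asset_name

def render_templates(op, context):
    # Roughly what Airflow does: walk template_fields and re-set each
    # attribute to its rendered value.
    for field in op.template_fields:
        value = getattr(op, field)
        setattr(op, field, value.replace(
            "{{ dag_run.conf['asset_name'] }}", context["asset_name"]))

op = MockOperator("{{ dag_run.conf['asset_name'] }}")
render_templates(op, {"asset_name": "customers"})
print(op.asset_name)   # customers
```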
render_template_as_native_obj is set to False by default on the DAG. Setting it to False returns strings; change it to True to get the native object:
@dag('profile_and_validate_data', schedule_interval=None, start_date=pendulum.datetime(2021, 1, 1, tz="UTC"), catchup=False, render_template_as_native_obj=True)
My use case is: I have an S3 event which triggers a Lambda (upon an S3 CreateObject event), which in turn invokes an Airflow DAG, passing in a couple of --conf values (bucketname, filekey).
I then extract the key value using a PythonOperator and store it in an XCom variable. I then want to pull this XCom value within an S3ToSnowflakeOperator and essentially load the file into a Snowflake table.
All parts of the process are working bar the extraction of the XCom value within the S3ToSnowflakeOperator task. I basically get the following in my logs:
query: [COPY INTO "raw".SOURCE_PARAMS_JSON FROM #MYSTAGE_PARAMS_DEMO/ files=('{{ ti.xcom...]
which looks like the Jinja template is not correctly resolving the XCom value.
My code is as follows:
from airflow import DAG
from airflow.utils import timezone
from airflow.operators.python_operator import PythonOperator
from airflow.operators.bash import BashOperator
from airflow.providers.snowflake.transfers.s3_to_snowflake import S3ToSnowflakeOperator

FILEPATH = "demo/tues-29-03-2022-6.json"

args = {
    'start_date': timezone.utcnow(),
    'owner': 'airflow',
}

with DAG(
    dag_id='example_dag_conf',
    default_args=args,
    schedule_interval=None,
    catchup=False,
    tags=['params demo'],
) as dag:

    def run_this_func(**kwargs):
        outkey = '{}'.format(kwargs['dag_run'].conf['key'])
        print(outkey)
        ti = kwargs['ti']
        ti.xcom_push(key='FILE_PATH', value=outkey)

    run_this = PythonOperator(
        task_id='run_this',
        python_callable=run_this_func
    )

    get_param_val = BashOperator(
        task_id='get_param_val',
        bash_command='echo "{{ ti.xcom_pull(key="FILE_PATH") }}"',
        dag=dag)

    copy_into_table = S3ToSnowflakeOperator(
        task_id='copy_into_table',
        s3_keys=["{{ ti.xcom_pull(key='FILE_PATH') }}"],
        snowflake_conn_id=SNOWFLAKE_CONN_ID,
        stage=SNOWFLAKE_STAGE,
        schema="""\"{0}\"""".format(SNOWFLAKE_RAW_SCHEMA),
        table=SNOWFLAKE_RAW_TABLE,
        file_format="(type = 'JSON')",
        dag=dag,
    )

    run_this >> get_param_val >> copy_into_table
If I replace
s3_keys=["{{ ti.xcom_pull(key='FILE_PATH') }}"],
with
s3_keys=[FILEPATH]
my operator works fine and the data is loaded into Snowflake. So the error centers on resolving s3_keys=["{{ ti.xcom_pull(key='FILE_PATH') }}"], I believe?
Any guidance/help would be appreciated. I am using Airflow 2.2.2.
I removed the S3ToSnowflakeOperator and replaced it with the SnowflakeOperator.
I was then able to reference the XCom value (as above) for the sql param value.
My XCom value was a derived COPY INTO statement, effectively replicating the functionality of the S3ToSnowflakeOperator, with the added advantage of being able to store the file metadata within the table columns too.
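A sketch of the kind of statement that could be pushed to XCom (the identifiers come from the log above; the exact COPY INTO options are assumptions). A SnowflakeOperator could then pull it via its templated sql field, e.g. sql="{{ ti.xcom_pull(key='COPY_STMT') }}":

```python
def build_copy_stmt(file_key):
    # Reconstructs the COPY INTO the S3ToSnowflakeOperator would have issued;
    # push the result to XCom and template it into a SnowflakeOperator.
    return ('COPY INTO "raw".SOURCE_PARAMS_JSON '
            "FROM @MYSTAGE_PARAMS_DEMO/ "
            f"FILES = ('{file_key}') "
            "FILE_FORMAT = (TYPE = 'JSON')")

print(build_copy_stmt("demo/tues-29-03-2022-6.json"))
```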
I'm using Airflow 2.2.2 with the latest providers installed as appropriate.
I'm trying to use the Azure and MySQL hooks and have created custom operators with template_fields defined for the variables that can be templated.
When I do so, I get an error saying that conn or var cannot be found,
e.g. my passed parameter is
{{ conn.<variable_name> }}
or
{{ var.json.value.<variable_name> }}
I believe this should be possible in >= v2.0, but it's not working for me. Any ideas why?
EDIT: Below are snippets of code with some sensitive information removed, let me know if anything else is needed?
DAG error -
Broken DAG: [/home/dags/dag.py] Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/dags/dag.py", line 52, in <module>
wasb_conn_id = {{ conn.wasb }},
NameError: name 'conn' is not defined
Task in dag.py -
t1 = WasbLogBlobsToCSVOperator(
    task_id='task_xyz',
    wasb_conn_id = {{ conn.wasb }},
Custom operator using an extended version of the Microsoft Azure WASB hook, used by dag.py -
class WasbLogBlobsToCSVOperator(BaseOperator):
    template_fields = (
        'wasb_conn_id',
    )

    def __init__(
        self,
        *,
        wasb_conn_id: str = 'wasb',
        **kwargs,
    ) -> None:
        super().__init__(**kwargs)
        self.wasb_conn_id = wasb_conn_id
        self.hook = ExtendedWasbHook(wasb_conn_id=self.wasb_conn_id)
There look to be a few things going on here that should help.
Jinja templates are string expressions. Try wrapping your wasb_conn_id arg in quotes.
wasb_conn_id = "{{ conn.wasb }}",
Templated fields are not rendered until the task runs, meaning the Jinja expression won't be evaluated until an operator's execute() method is called. This is why you are seeing an exception from your comment below: the literal string "{{ conn.wasb }}" is being evaluated as the conn_id. If you want to use a templated field in the custom operator, you need to move that logic into the scope of the execute() method.
Finally, why do you need to use a Jinja expression here at all? Since the format for accessing a Connection object via Jinja is {{ conn.<my_conn_id> }}, you could just use the value "wasb" directly.
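The second point can be sketched with minimal stand-ins for BaseOperator and ExtendedWasbHook so it runs outside Airflow (both stubs are hypothetical):

```python
class BaseOperator:                     # stub for airflow.models.BaseOperator
    def __init__(self, task_id=None, **kwargs):
        self.task_id = task_id

class ExtendedWasbHook:                 # stub for the custom hook
    def __init__(self, wasb_conn_id):
        self.wasb_conn_id = wasb_conn_id

class WasbLogBlobsToCSVOperator(BaseOperator):
    template_fields = ('wasb_conn_id',)

    def __init__(self, *, wasb_conn_id: str = 'wasb', **kwargs) -> None:
        super().__init__(**kwargs)
        # Store only the (possibly templated) string; a hook built here
        # would receive the literal "{{ conn.wasb }}".
        self.wasb_conn_id = wasb_conn_id

    def execute(self, context):
        # template_fields are rendered before execute() runs, so the hook
        # is created with the real connection id.
        hook = ExtendedWasbHook(wasb_conn_id=self.wasb_conn_id)
        return hook.wasb_conn_id

op = WasbLogBlobsToCSVOperator(task_id='task_xyz', wasb_conn_id='wasb')
print(op.execute({}))   # wasb
```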
Let's assume I have an operator which needs a Python list (or dict) as an argument for its property:
doExampleTask = ExampleOperator(
    task_id = "doExampleTask",
    property_needs_list = [
        ("a", "x"),
        ("b", "y")
    ],
    property_needs_dict = {
        "dynamic_field_1": "dynamic_value",
        # ...
        "dynamic_field_N": "dynamic_value",
    },
)
The problem is that I can't define the Python data structure of the list (how many elements are needed) or the dict (which fields get generated) at DAG creation time.
I can only get this structure dynamically, by executing a previous task or a macro:
- the task could write the data structure with dynamic fields into XCom
- the macro could return a data structure
But in both of the above cases there is no way to convert the dynamic data structure (returned by XCom or a custom macro) into a Python data structure and use it as a property of the operator.
This will not return list or dict:
doExampleTask = ExampleOperator(
    task_id = "doExampleTask",
    property_needs_list = '{{ generate_list() }}',
    property_needs_dict = '{{ generate_dict() }}',
)
This will also not return dict or list:
doExampleTask = ExampleOperator(
    task_id = "doExampleTask",
    property_needs_list = '{{ ti.xcom_pull(task_ids="PreviousTask", key="list_structure") }}',
    property_needs_dict = '{{ ti.xcom_pull(task_ids="PreviousTask", key="dict_structure") }}',
)
If I use something like the eval() function, it will not be able to evaluate the string argument at task execution time. It would try to evaluate it at DAG creation time, when the values obviously aren't there yet.
doExampleTask = ExampleOperator(
    task_id = "doExampleTask",
    property_needs_list = eval('{{ ti.xcom_pull(task_ids="PreviousTask", key="list_structure") }}'),
    property_needs_dict = eval('{{ ti.xcom_pull(task_ids="PreviousTask", key="dict_structure") }}'),
)
or
doExampleTask = ExampleOperator(
    task_id = "doExampleTask",
    property_needs_list = eval('{{ generate_list() }}'),
    property_needs_dict = eval('{{ generate_dict() }}'),
)
How can I work around this problem?
I'm mostly interested in Airflow 1.x, but I'm open to an Airflow 2.x solution.
Thank you!
In Airflow 1, Jinja expressions always evaluate to strings. You'll have to either subclass the operator or build logic into your custom operator to translate the stringified list/dict arg as necessary.
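For the Airflow 1 route, the stringified value can be converted back inside the operator's execute() with ast.literal_eval (a sketch; the rendered string below is made up):

```python
import ast

# What a templated list arg looks like after Jinja rendering in Airflow 1:
rendered = "[('a', 'x'), ('b', 'y')]"

# Inside a subclassed operator's execute(), convert it back to a real list:
value = ast.literal_eval(rendered)
print(type(value).__name__, value)   # list [('a', 'x'), ('b', 'y')]
```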
However, in Airflow 2.1, an option was added to render templates as native Python types. You can set render_template_as_native_obj=True at the DAG level, and lists will render as true lists, dicts as true dicts, etc. Check out the docs here.
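Under the hood this switches Airflow from Jinja's string-producing Environment to its NativeEnvironment, which you can see directly (assuming jinja2 is installed):

```python
from jinja2 import Environment
from jinja2.nativetypes import NativeEnvironment

tmpl = "{{ [('a', 'x'), ('b', 'y')] }}"

as_string = Environment().from_string(tmpl).render()        # default behaviour
as_native = NativeEnvironment().from_string(tmpl).render()  # native rendering

print(type(as_string))   # <class 'str'>
print(type(as_native))   # <class 'list'>
```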
In Airflow 1.x (not 2.x), I want DAG1 to trigger DAG2,
and I want to pass DAG1's templated date {{ ds }} as a 'conf' dict parameter like {"day": "{{ ds }}"} to DAG2, so that DAG2 can access it via {{ dag_run.conf['day'] }}.
But DAG1 just ends up passing the literal string '{{ds}}' instead of '2021-12-03'.
DAG2 uses an SSHOperator, not a PythonOperator (for which a solution seems to exist).
DAG 1:
from airflow.operators import TriggerDagRunOperator

def fn_day(context, dagrun_order):
    dagrun_order.payload = {"day": "{{ds}}"}
    return dagrun_order

trig = TriggerDagRunOperator(
    trigger_dag_id="ssh",
    task_id='trig',
    python_callable=fn_day,
    dag=dag)
DAG 2 :
from airflow.contrib.operators.ssh_operator import SSHOperator

ssh = SSHOperator(
    ssh_conn_id='ssh_vm',
    task_id='echo',
    command="echo {{dag_run.conf['day']}}"
)