How to use the LookupRecord processor? - CSV

I tried to join two CSV files on an ID, following the reference below.
How to join two CSVs with Apache Nifi
I'm using NiFi 1.3.0.
I have two CSV files.
1.custom.csv
No,Name,ID,Age
1,Hik,2201,33
2,Kio,3300,22
2.gender.csv
ID,Name,Sex
2201,Hik,Male
3300,Kio,Female
I am trying to join these tables on "ID" to get the following end result.
No,Name,Sex,ID,Age
1,Hik,Male,2201,33
2,Kio,Female,3300,22
I am using the following processor flow:
GetFile-SplitText-ExtractText-LookUpRecord-PutFile
In LookupRecord I have configured:
Record Reader = CSVReader
Record Writer = FreeFormTextRecordSetWriter
Lookup Service = SimpleCsvFileLookupService
Result RecordPath = /Key
key = /ID
In the lookup service I have given the path of gender.csv and set both Lookup Key Column and Lookup Value Column to "ID".
In FreeFormTextRecordSetWriter I have set the Text property to "${No},${Name},${ID},${Age},${Sex}".
It yields only the result below.
No,Name,ID,Age,
1,Hik,2201,33,
2,Kio,3300,22,
The "Sex" column comes out empty.
I think I haven't configured it correctly.
I don't know how to use Result RecordPath and the dynamic key property in LookupRecord.
Can anyone guide me to solve my issue?

This was asked and responded to on the Apache NiFi mailing list:
https://lists.apache.org/thread.html/5041018de5a5be773055bb2709427eed4131c3923262b55051fb1324#%3Cusers.nifi.apache.org%3E
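For the archives, a sketch of the fix that discussion points to (this is my reading of the thread, so treat the exact property values as assumptions): the lookup service should return the Sex column, and Result RecordPath should name the field that receives the returned value, not the key.

LookupRecord:
Record Reader = CSVReader (its schema must include a Sex field so the writer can see it)
Record Writer = FreeFormTextRecordSetWriter
Lookup Service = SimpleCsvFileLookupService
Result RecordPath = /Sex (the record path where the looked-up value is written)
key = /ID (dynamic property; the record path passed to the lookup service as the key)

SimpleCsvFileLookupService:
CSV File = /path/to/gender.csv
Lookup Key Column = ID
Lookup Value Column = Sex (return the gender, not the ID back)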


Azure data flow: I can't pass a JSON string to the sink

I'm using Azure Data Factory to transform data, and I have a derived column to process images.
iif(isNull(column1_images),
    iif(isNull(column2_images),
        iif(isNull(images), 'N/A', toString(images)),
        concat(toString(column2_images), ' ', '(', column2_type, ')')),
    concat(toString(column1_images), ' ', '(', column1_type, ')'))
When I click the refresh button I can see the result, but when I pass this column to the sink I get this error:
Conversion from StringType to ArrayType(StructType(StructField(url,StringType,true)),false) not defined
Can you tell me what the problem is, please?
The error you are getting is because there is no schema defined for the sink, which expects a complex type rather than a plain string. Hence, you need to use a derived column expression to create the matching JSON schema and convert the data. Check Building schemas using the expression builder.
You can also check this similar thread for your reference.
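Based on the error message, the sink column expects an array of structs with a single url string field. A minimal sketch of a derived-column expression producing that shape (the column name images comes from the question; the target schema is an assumption read off the error):

iif(isNull(images),
    array(@(url = 'N/A')),
    array(@(url = toString(images))))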

Prisma Unsupported("point") MySql Approach

So I have my location column using the Point data type. I'm using Apollo Server and Prisma, and when I run "npx prisma db pull" it generates this data type as unsupported, because Point is not currently supported in Prisma (generated script).
So I said, "OK, I'll use String and manage how to insert this data type myself." I changed the schema accordingly, and, surprise, it didn't work. I have tried to find an approach to handling the MySQL Point data type in Prisma but found no info whatsoever. I would really appreciate any ideas.
You cannot convert it to String and use it that way, as Point isn't supported yet. You need to leave it as Unsupported, and you can only add data via raw queries.
For now, only adding data is supported; you cannot query it through the regular Prisma Client API.
You can still read the data with Prisma Client via raw queries, e.g. SELECT id, ST_AsText(geom) AS geom FROM training_data, where geom has data type geometry and is mapped as Unsupported("geometry").
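A minimal sketch of that raw-query approach with Prisma Client (the table and column names follow the training_data example above; the POINT coordinates are made up):

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function main() {
  // Writing: the client cannot bind a Point value, so construct it in SQL
  await prisma.$executeRaw`
    INSERT INTO training_data (geom)
    VALUES (ST_GeomFromText('POINT(40.7 -74.0)'))`

  // Reading: convert the geometry to text on the database side
  const rows = await prisma.$queryRaw`
    SELECT id, ST_AsText(geom) AS geom FROM training_data`
  console.log(rows)
}

main().finally(() => prisma.$disconnect())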

SAS pass through - Extract from MySQL does not work

I'm trying to build a Data Integration job that uses pass-through to extract data from a view in a MySQL database.
We've been using pass-through a lot in the project, mostly extracting data from Redshift;
however, with MySQL I was not able to make it work properly.
It keeps complaining that a table is missing, even though when pass-through is off, the view is found and the data is extracted.
I tried every trick I know, from enabling case-sensitive DBMS object names to manually removing single/double quotes from the statement, just in case MySQL confuses them with something else.
No luck.
ODBC driver is [MySQL][ODBC 5.3(a) Driver][mysqld-5.5.53].
Ran on a Windows environment.
Any idea how to solve this?
Thank you in advance.
EDIT
So, first of all, one correction (even though it's not that important): I extract from a view, not a table.
This is the code generated by the SAS Create Table transformation with pass-through enabled. I have only put an asterisk in place of the full list of columns in the outer select:
proc sql;
connect to ODBC
(
READBUFF=10000 DATASRC="cmp.web_api" AUTHDOMAIN="MYSQL_CMP_Auth"
);
create table work."W7ZZZKOC"n as
select
*
from connection to ODBC
(
select
V_BI_ACCOUNT.ACCOUNT_NAME,
V_BI_ACCOUNT.ACQUISITION_SOURCE__C,
V_BI_ACCOUNT.ZUORA__ACTIVE__C,
V_BI_ACCOUNT.ADDRESS_LINE_1__C,
V_BI_ACCOUNT.ADDRESS_LINE_2__C,
V_BI_ACCOUNT.ADDRESS_LINE_3__C,
V_BI_ACCOUNT.AGREEMENT_DATE,
V_BI_ACCOUNT.AGREEMENT_LEGAL_CLAUSE_1__C,
V_BI_ACCOUNT.AGREEMENT_LEGAL_CLAUSE_2__C,
V_BI_ACCOUNT.PERSONBIRTHDATE,
V_BI_ACCOUNT.BLOCKED_REASON__C,
V_BI_ACCOUNT.BRAND__C,
V_BI_ACCOUNT.CPN__C,
V_BI_ACCOUNT.ACCCREATEDBYID,
V_BI_ACCOUNT.ACCCREATEDDATE,
V_BI_ACCOUNT.CURRENCY_PREFERENCE__C,
V_BI_ACCOUNT.CUSTOMER_FULL_NAME__PC,
V_BI_ACCOUNT.ACCOUNTID,
V_BI_ACCOUNT.ZUORA__CUSTOMERPRIORITY__C,
V_BI_ACCOUNT.DELIVERY_SALUTATION__C,
V_BI_ACCOUNT.DISPLAY_NAME,
V_BI_ACCOUNT.PERSONEMAIL,
V_BI_ACCOUNT.EMAILKEY__C,
V_BI_ACCOUNT.FACEBOOKKEY,
V_BI_ACCOUNT.FIRSTNAME,
V_BI_ACCOUNT.GENDER__C,
V_BI_ACCOUNT.PHONE,
V_BI_ACCOUNT.ACCLASTACTIVITYDATE,
V_BI_ACCOUNT.ACCLASTMODIFIEDDATE,
V_BI_ACCOUNT.LASTNAME,
V_BI_ACCOUNT.OTHER_EMAIL__C,
V_BI_ACCOUNT.PI_TYPE__C,
V_BI_ACCOUNT.ACCPARENTID,
V_BI_ACCOUNT.POSTCODE__C,
V_BI_ACCOUNT.PRIMARY_ACCOUNT_OF_THIS_CUSTOMER,
V_BI_ACCOUNT.ACCPRIMARY__C,
V_BI_ACCOUNT.ACCREASON_FOR_STATUS__C,
V_BI_ACCOUNT.ZUORA__SLA__C,
V_BI_ACCOUNT.ZUORA__SLASERIALNUMBER__C,
V_BI_ACCOUNT.SALUTATION,
V_BI_ACCOUNT.ACCSYSTEMMODSTAMP,
V_BI_ACCOUNT.PERSONTITLE,
V_BI_ACCOUNT.ZUORA__UPSELLOPPORTUNITY__C,
V_BI_ACCOUNT.X_CODE__C,
V_BI_ACCOUNT.ZUORA__ACCOUNT_ID__C,
V_BI_ACCOUNT.ZUORA__PAYMENTMETHODID__C,
V_BI_ACCOUNT.CITY,
V_BI_ACCOUNT.ORIGINAL_CREATED_DATE,
V_BI_ACCOUNT.SOURCE_SYSTEM_ID,
V_BI_ACCOUNT.STATUS,
V_BI_ACCOUNT.ZUORA__CONTACT_ID,
V_BI_ACCOUNT.ACCISDELETED,
V_BI_ACCOUNT.BILLING_ACCOUNT_NAME,
V_BI_ACCOUNT.ACZCREATEDDATE,
V_BI_ACCOUNT.ACZSYSTEMMODSTAMP,
V_BI_ACCOUNT.ACZLASTACTIVITYDATE,
V_BI_ACCOUNT.ZUORA__ACCOUNT__C,
V_BI_ACCOUNT.ZUORA__ACCOUNTNUMBER__C,
V_BI_ACCOUNT.ZUORA__AUTOPAY__C,
V_BI_ACCOUNT.ZUORA__BALANCE__C,
V_BI_ACCOUNT.ZUORA__CREDITCARDEXPIRATION__C,
V_BI_ACCOUNT.ZUORA__CURRENCY__C,
V_BI_ACCOUNT.ZUORA__MRR__C,
V_BI_ACCOUNT.ZUORA__PAYMENTTERM__C,
V_BI_ACCOUNT.ZUORA__PURCHASEORDERNUMBER__C,
V_BI_ACCOUNT.ZUORA__LASTINVOICEDATE__C,
V_BI_ACCOUNT.COUNTRY_NAME,
V_BI_ACCOUNT.COUNTRY_CODE,
V_BI_ACCOUNT.FAVOURITE_FOOTBALL_CLUB,
V_BI_ACCOUNT.COUNTY
from
web_api.V_BI_ACCOUNT as V_BI_ACCOUNT
);
%rcSet(&sqlrc);
disconnect from ODBC;
quit;
And again, when I extract the data without pass-through, it works successfully.
I found out the problem was a column name exceeding 32 characters.
As SAS only supports column names up to 32 characters,
the query fails to find PRIMARY_ACCOUNT_OF_THIS_CUSTOMER, as the original column name is PRIMARY_ACCOUNT_OF_THIS_CUSTOMER__C.
EDIT
One more thing I found out is that this MySQL setup doesn't like schema names or aliases in the pass-through query.
Therefore, the FROM clause should specify only the table name, i.e. 'from v_bi_account' rather than 'from web_api.v_bi_account',
and should not use aliases, i.e. 'from v_bi_account' rather than 'from v_bi_account as v_bi_account'.
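Putting all three findings together, a condensed sketch of the corrected job (connection options copied from the generated code above; the shortened column alias is my own naming, chosen to fit SAS's 32-character limit):

proc sql;
connect to ODBC
(
READBUFF=10000 DATASRC="cmp.web_api" AUTHDOMAIN="MYSQL_CMP_Auth"
);
create table work."W7ZZZKOC"n as
select
*
from connection to ODBC
(
select
ACCOUNT_NAME,
/* ... remaining columns as listed above ... */
PRIMARY_ACCOUNT_OF_THIS_CUSTOMER__C as PRIMARY_ACCT_OF_THIS_CUSTOMER /* 29 chars, within SAS's 32 */
from
v_bi_account /* bare view name: no schema prefix, no alias */
);
disconnect from ODBC;
quit;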
Thank you guys so much for your help.

Neo4j JSON APOC load - skip nulls

I'm trying to load some JSON from a REST API (using Neo4j 3.0.4 & APOC apoc-3.0.4.1-all) that has null values in it. This throws the following error:
"Cannot merge node using null property value"
The nulls can be spread across multiple keys, and it varies which keys have null values. Hence I'd prefer to avoid specifying which individual keys to handle nulls for, if possible.
I found the apoc.map.clean(map,[keys],[values]) procedure but not much info on how to use it. Is this the best procedure to use for every key, or is there a simpler way?
Thanks!
Thanks stdob - I managed to find another post you had written which helped me understand the solution. I needed to substitute the first property for one that was never null:
MERGE (label:Label {key2: json.key2})
ON CREATE SET label.key3 = json.key3, label.key1 = json.key1
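For completeness, a sketch of the apoc.map.clean route that was asked about. In recent APOC releases apoc.map.clean is a function; on the 3.0.x line it may instead need to be CALLed as a procedure, so treat this as an approximation, and the URL, label, and keys are placeholders:

CALL apoc.load.json('http://example.com/api/data') YIELD value AS json
// drop every entry whose value is null, whatever the key
WITH apoc.map.clean(json, [], [null]) AS clean
// key2 is assumed to be never null, per the fix above
MERGE (n:Label {key2: clean.key2})
SET n += clean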

Get Redmine custom field value to a file

I'm trying to create a text file that contains the value of a custom field I added in Redmine. I tried to get it with an SQL query in the create method of project_controller.rb (at line 80 in Redmine 1.2.0), as follows:
sql = Mysql.new('localhost', 'root', 'pass', 'bitnami_redmine')
# Look up the value of custom field 7 for the current project
rq = sql.query("SELECT value
                FROM custom_values
                INNER JOIN projects
                ON custom_values.customized_id = projects.id
                WHERE custom_values.custom_field_id = 7
                AND projects.name = '#{@project.name}'")
rq.each_hash do |h|
  File.open('pleasework.txt', 'w') { |myfile| myfile.write(h['value']) }
end
sql.close
This works fine if I test it in a separate file (with an existing project name instead of @project.name), so it may be a syntax issue, but I can't find what it is. I'd also be glad to hear about any other solution to get that value.
Thanks!
(There's a very similar post here, but none of the solutions actually worked.)
First, you could use Project.connection.query instead of your own Mysql instance. Second, I would try to log the SQL with RAILS_DEFAULT_LOGGER.info "SELECT ..." and check that it is correct. And third, I would use the project's identifier instead of its name.
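A sketch of those suggestions combined (still targeting custom field id 7, as in the question; select_value is the ActiveRecord connection helper that returns the first column of the first row):

# Reuse Rails' own database connection instead of opening a second
# MySQL handle, and match on the stable identifier rather than the name.
value = Project.connection.select_value(
  "SELECT value
   FROM custom_values
   INNER JOIN projects ON custom_values.customized_id = projects.id
   WHERE custom_values.custom_field_id = 7
     AND projects.identifier = '#{@project.identifier}'")
RAILS_DEFAULT_LOGGER.info "custom field 7: #{value.inspect}"
File.open('pleasework.txt', 'w') { |f| f.write(value.to_s) }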
I ended up simply using params["project"]["custom_field_values"]["x"], where x is the custom field's id. I still don't know why the SQL query didn't work, but this is much simpler and faster.