When we run select convert(field using utf8) from the MySQL tool (which is on Windows using ODBC), my query works fine. When I run the same query from NiFi (which is on Linux using JDBC), the query does not work. What setting am I missing?
The JDBC connection string is:
jdbc:mysql://10.10.x.x:y/warehouse
We are not getting an error. The one field that is being converted is empty.
What is the full query you are running? When you use a function as a field in the SELECT, the column name that comes back is usually something funky, and due to the strict rules on field naming in Avro (which ExecuteSQL converts the data into), it often complains, so you have to use an alias, something like select convert(field using utf8) as field from myTable.
But if you're not getting an error, I suspect there may be some session variable that is being set by your external MySQL tool but is not being set for NiFi's session. In the upcoming NiFi 1.9.0 release (via NIFI-5780) you'll be able to issue SET statements and the like for the session before executing your main query.
Otherwise, it's possible that the JDBC driver is returning a type that NiFi is not handling correctly (although for UTF8 I would expect it to report a String). I tried this with a BINARY(10) field and it works for me (using the latest master, not a released version), so I'm not sure what's going on there.
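For example, once you can issue pre-query statements, the session setup plus the aliased query could look something like this (just a sketch; the session variable is only a guess at what your external tool might be setting, and the table/field names are illustrative):
SET SESSION character_set_results = utf8mb4;
SELECT CONVERT(field USING utf8) AS field FROM myTable;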
I have a .NET 6 application that is currently backed by an MSSQL database, maintained by Entity Framework using a Model First approach. I am trying to migrate it to use a MySQL database backend, for a variety of reasons.
I have installed MySQL locally (Windows) to start exploring and getting it working. I can migrate the schema easily enough (with either MySQL Workbench or EF), but migrating the data is proving to be a little tricky.
Around half of the tables migrated fine, but the other half, those containing string data, are failing with errors that look a little like the one below (the column obviously differs from table to table). The source data is nvarchar in SQL Server, and the destination is type varchar.
Statement execution failed: Incorrect string value: '\xF0\x9F\x8E\xB1' for column 'AwayNote'
Does anyone know how I can get the Migration to run successfully?
The research I have read says to ensure the server and table character sets are aligned, as per the below.
I have set up my source as SQL Server using the ODBC FreeTDS driver.
The data import screen is set up like this; the check box doesn't seem to affect things especially.
I have MySQL set up with this too, which I have also read is important.
[mysql]
default-character-set = utf8mb4
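For reference, the \xF0\x9F\x8E\xB1 in the error is a 4-byte UTF-8 sequence, which only utf8mb4 columns can store, so the destination table and columns need to be utf8mb4 as well, not just the server default. A sketch of the kind of check and conversion this implies (the schema and table names are illustrative; the column name comes from the error message; the collation is just an example):
-- Check which character set the destination column actually uses
SELECT table_name, column_name, character_set_name
FROM information_schema.columns
WHERE table_schema = 'mydb'          -- illustrative schema name
  AND column_name  = 'AwayNote';
-- Convert the table (and its string columns) to utf8mb4 before re-running the import
ALTER TABLE Fixtures                 -- illustrative table name
  CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;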
I am getting errors in Mac OS ColdFusion 2016 reading a MySQL 5.6.41 database with a field type of datetime. A simple cfquery SELECT * with cfdump produces the java class error "java.time.LocalDateTime" on the datetime fields, while producing the expected data output in all other fields.
Attempting to output the field value as text, it returns the date/time with a T separator, '2021-02-07T15:32:54' (which could be parsed).
But no ColdFusion date/time functions work due to this format.
The data was exported from MySQL 5.6.19 via SQL export using Sequel Pro and imported into the new 5.6.41 instance. All code runs fine on the previous server. I have attempted using both the installed MySQL 5 datasource in ColdFusion and a JDBC driver. Both connect fine, but produce the same DATETIME format.
Changing the field type to DATE or TIMESTAMP allows the CFDUMP to display without error in the DATETIME fields (obviously minus the TIME portion if DATE).
There is a large amount of labor/overhead involved in not being able to keep DATETIME working as built (plus I believe it's the correct field type). I have run out of Google options and am hoping someone can explain the difference, the reason, and a solution for why ColdFusion 2016 will not query the data in the same manner as the similar code/server.
The only way I solved this was to remove mysql-connector-java-8.0.28.jar and replace it with an older version, mysql-connector-java-5.1.38-bin.jar in my case. So the problem comes from the MySQL connector.
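If swapping the connector jar out is not an option, another possible workaround (just a sketch, not what I ended up doing; the table and column names are illustrative) is to have MySQL return the datetime as a pre-formatted string that ColdFusion's date functions can parse:
SELECT DATE_FORMAT(updated_at, '%Y-%m-%d %H:%i:%s') AS updated_at
FROM my_table;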
I'm fixing a bug in a proprietary piece of software, where I have some kind of JDBC Connection (pooled or not, wrapped or not, ...). I need to detect whether it is a MySQL connection or not. All I can use is an SQL query.
What would be an SQL query that succeeds on MySQL each and every time (MySQL 5 and higher is enough) and fails (Syntax error) on every other database?
The preferred way, using JDBC Metadata...
If you have access to a JDBC Connection, you can retrieve the vendor of the database server fairly easily without going through an SQL query.
Simply check the connection metadata:
String dbType = connection.getMetaData().getDatabaseProductName();
This should give you a string that begins with "MySQL" if the database is in fact MySQL (the exact string can differ between the Community and Enterprise editions).
If your bug is caused by the lack of support for one particular type of statement that MySQL happens not to support, you really should rely on the appropriate metadata method to verify support for that particular feature instead of hard-coding a workaround specifically for MySQL. There are other MySQL-like databases out there (MariaDB, for example).
If you really must pass through an SQL query, you can retrieve the same string using this query:
SELECT @@version_comment as 'DatabaseProductName';
However, the preferred way is by reading the DatabaseMetaData object JDBC provides you with.
Assuming your interesting preconditions (which other answers try to work around):
Do something like this:
SELECT SQL_NO_CACHE 1;
This gives you a single value in MySQL, and fails on other platforms because SQL_NO_CACHE is a MySQL instruction, not a column.
Alternatively, if your connection has the appropriate privileges:
SELECT * FROM mysql.db;
This is an information table in a database specific to MySQL, so it will fail on other platforms.
The other ways are better, but if you really are constrained as you say in your question, this is the way to do it.
MySQL may be the only DB engine that uses backticks. That means something like this should work:
SELECT count(*)
FROM `INFORMATION_SCHEMA.CHARACTER_SETS`
where 1=3
I might not have the backticks in the right spot. Maybe they go like this:
FROM `INFORMATION_SCHEMA`.`CHARACTER_SETS`
Someone who works with MySQL would know.
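For what it is worth, the second form is the one MySQL accepts: backticks quote one identifier at a time, so the first form is read as a single table name that happens to contain a dot and fails. The full probe would be:
SELECT count(*)
FROM `INFORMATION_SCHEMA`.`CHARACTER_SETS`
where 1=3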
I am new to SSIS, and the package that I am building involves the use of a Merge Join. The join is performed between a RAW file and an Oracle table. The NLS_SORT and NLS_COMP options for the Oracle database are "BINARY". The RAW file, by default, picks up the Windows collation. What is the equivalent of the Windows collation in Oracle? Will the above settings work, or is some workaround required, since I am not getting the desired results from the Merge Join? I have even used SELECT ... ORDER BY NLSSORT(EmployeeName, 'BINARY_CI'), but am still getting wrong results. Does anyone have an idea?
Use a Data Conversion element on both sources, before the Sort element. Choose the same data type for the join columns on both sides; then you can JOIN on the new columns with the new data types.
There are some YouTube examples on data conversion.
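As a side note on the ORDER BY attempt in the question: NLSSORT expects its parameter in the 'NLS_SORT=...' form, so forcing a case-insensitive binary sort on the Oracle side would look more like this (the column name is from the question; the table name is illustrative):
SELECT EmployeeName
FROM Employees                                   -- illustrative table name
ORDER BY NLSSORT(EmployeeName, 'NLS_SORT=BINARY_CI')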
I'm debugging an old application in Delphi 5, connected to a recent version of MySQL via the ODBC connector. When using the CAST conversion function, even the following query:
select cast(1 as char)
returns an empty column without a column name.
If I run the query directly in the MySQL query analyzer it runs fine, so I suppose the problem is in the ODBC connector or in the BDE.
The only information I can find on this is this (emphasis mine):
Connector/ODBC erroneously reported that it supported the CAST() and CONVERT() ODBC functions for parsing values in SQL statements, which could lead to bad SQL generation during a query.
Could it be that the connector does not support CAST at all?
Try creating a stored procedure in the database to perform the CAST and hide it from ODBC.
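A minimal sketch of that idea, using the trivial CAST from the question (the procedure name is just illustrative):
DELIMITER //
CREATE PROCEDURE cast_to_char()
BEGIN
  -- The CAST happens inside the server, so the ODBC/BDE layer only sees an ordinary result set
  SELECT CAST(1 AS CHAR) AS casted_value;
END //
DELIMITER ;
-- From the Delphi side, call the procedure instead of issuing the CAST directly
CALL cast_to_char();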