Weka unknown data type - MySQL

I'm trying to import a database from MySQL into Weka, but the problem is that even after the database is loaded and displayed, when I click OK so I can start working with the database, the message "unknown data type: INT" appears. I've tried modifying the DatabaseUtils.props file but nothing seems to work, so I would really appreciate it if someone could tell me how to solve this issue once and for all.
Thanks

You can either uncomment the int data type lines in the props file, or take the MySQL-specific props file included with Weka and add:
INT=5
since 5 is Weka's internal identifier for int types. Then rename that props file to DatabaseUtils.props.

Put just this line, without semicolons or other characters:
INT=5
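For reference, the relevant section of the props file might look like the sketch below. The numeric codes are Weka's internal type identifiers (for example 5 = int, 2 = double); the exact set of entries varies by Weka version, so treat the surrounding lines as illustrative:

```properties
# DatabaseUtils.props (sketch; entries vary by Weka version)
# Map SQL type names reported by the JDBC driver to Weka's
# internal type codes: 0=string, 1=boolean, 2=double, 5=int, ...
INT=5
INTEGER=5
SMALLINT=5
DOUBLE=2
VARCHAR=0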


Why aren't my functions working as expected in MySQL?

I am trying to figure out why MySQL isn't working as expected.
I imported my data from a CSV into a table called Products, which is shown in the screenshot. It's a small table of just ID and Name.
But when I run a query with the WHERE clause Name = 'SMS', it returns nothing. I don't understand what the issue is.
My CSV contents in Notepad++ is shown below:
This is the statement I used to load my CSV, in case there are any errors there.
Could you share your csv file content?
This has happened to me before, and the problem was blank space in the data in the CSV file.
So you could first clean your CSV data (remove the unneeded blank space) before importing it into the database.
This is often caused by spaces or look-alike characters. If caused by spaces or invisible characters at the beginning/end, you can try:
where name like '%SMS%'
You can then make this more general:
where name like '%S%M%S%'
When you get a match, you'll need to investigate further to find the actual cause.
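This failure mode is easy to reproduce. The sketch below uses Python's built-in sqlite3 for portability (the table name matches the question, but the value with the leading space is an assumed example): an exact match fails, LIKE still matches, and TRIM reveals the cause.

```python
import sqlite3

# Sketch: a leading space (or other invisible character) makes an
# exact match fail while LIKE '%SMS%' still matches. The stored
# value ' SMS' is illustrative, not taken from the original data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Products (ID INTEGER, Name TEXT)")
conn.execute("INSERT INTO Products VALUES (1, ' SMS')")  # note the leading space

exact = conn.execute("SELECT * FROM Products WHERE Name = 'SMS'").fetchall()
fuzzy = conn.execute("SELECT * FROM Products WHERE Name LIKE '%SMS%'").fetchall()
fixed = conn.execute("SELECT * FROM Products WHERE TRIM(Name) = 'SMS'").fetchall()

print(len(exact), len(fuzzy), len(fixed))
```

Once LIKE has confirmed a match exists, comparing TRIM(Name) against the literal (or running an UPDATE with TRIM) usually settles whether whitespace is the culprit.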

Importing CSV database with null character as empty strings

I have hundreds of csv files and each one has lots of null characters in it. It is like that because some of the cells must be empty. But when I try to import these into MySQL Workbench using the import wizard I keep getting the same error: "Unhandled exception: line contains NULL byte".
What I would like to do is:
a) import these files without the error above;
b) convert all null cells to empty strings.
Since there are hundreds of csv files like this one, each around 300 MB, replacing the characters before importing doesn't seem to be a quick, viable option.
Is there a way to force MySQL Workbench to accept the files with the null character in it?
I have googled many answers, none of which seems to be applicable to this case.
Many thanks
Since MySQL Workbench version 8.0.16 (released on 04/25/2019) there has been an additional option for uploading .csv files: "null and NULL word as SQL keyword".
When this option is set to NO, an unquoted NULL in the .csv file (,NULL, rather than ,"NULL",) will be imported as empty, provided your field's default value is empty.
I hope this answer could solve other people's similar problem :)
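If preprocessing does turn out to be acceptable, stripping NUL bytes is cheaper than it sounds, because it can be done in a single streaming pass that never loads a whole file into memory. A sketch (the file paths are placeholders):

```python
# Sketch: remove NUL bytes from a file in one streaming pass, so even
# ~300 MB CSVs are handled in constant memory. Each NUL-only cell
# becomes an empty string, which matches goal (b) above.
def strip_nul_bytes(src_path, dst_path, chunk_size=1 << 20):
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            dst.write(chunk.replace(b"\x00", b""))
```

Because a NUL is a single byte, the replacement is safe across chunk boundaries; a loop over `glob.glob("*.csv")` would cover the hundreds of files.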

BLOB error when mapping nvarchar columns with the same fixed length in SSIS

I am using SSIS to move data between environments, and I am getting the following error quite often inside Lookup components, when mapping the input columns to the output columns:
I fixed the problem in most locations; using nvarchar(MAX) as the type was the cause of the problem. But I am still getting it, even when the type of the input and output columns is nvarchar(100). Any idea why I am getting this error? I tried a data conversion on the source data beforehand, but without any success!
EDIT
Below you can find screenshots from my lookup's configuration (named lookup update rows)
EDIT 2
When I open the .dtsx file for the project in a text editor, I see several data types set as nText (as shown below), which I think is the cause of my problem.
dataType="nText"
cachedDataType="nText"
I changed these lines to the following:
dataType="wstr"
length="100"
cachedDataType="wstr"
cachedLength="100"
But when I build, my changes disappear and the nText types are set once again.
The solution to get rid of BLOB types is to change the datatypes (SSIS datatypes) for the components within the dataflow in the advanced editor.
For each component, right-click on it and choose "Show Advanced Editor"
Click the "Input and Output Properties" tab
For all the input and output columns listed there, change the data type from DT_NTEXT to DT_WSTR, choosing an appropriate length as well
This didn't work for me as I was using an ODBC data source.
I had to CAST my blob table fields as varchar(max) using the SQL command text box in the ODBC Source Editor, and then go to the advanced editor and change all the ODBC source output columns that I had CAST to DataType string [DT_STR].
Hope this helps someone.
What solved my problem: my source had a string constraint of 50 chars while my destination was varchar(max). I changed the metadata in the destination column that was giving me the error from max to 50. Problem solved.

SSIS 2012 extracting bool from .csv failing to insert to db "returned status 2"

Hi all, quick question for you.
I have an SSIS2012 package that is reading a flat file (.csv) and is loading it into a SQL Server database table. However, I am getting an error for one of the columns when loading the OLEDB Destination:
[Flat File Source [32]] Error: Data conversion failed. The data conversion for column "Active_Flag" returned status value 2 and status text "The value could not be converted because of a potential loss of data.".
I am wondering if this is because in the flat file (which is comma delimited), the values are literally spelled out "TRUE" or "FALSE". The advanced page of the flat file properties has it set to DT_BOOL, which I thought was right. It was DT_STR originally, and that wasn't working either.
In the SQL server table the column is set up as a bit, and allows nulls. Is this because it is literally typed out TRUE/FALSE? What's the easiest way to fix this?
Thanks for any advice!
It actually turned out there was a blank space in front of "True"/"False" in the file. Was just bad data and I missed it. Fixing that solved my issue. Thank you though, I did try that and when that didn't work that's when I knew it was something else.
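This kind of bad data can be caught before the package run by scanning the column for values that only parse as booleans after stripping whitespace. A sketch in Python (the column name matches the question, but the sample rows are illustrative):

```python
# Sketch: flag rows whose value only parses as a boolean after
# stripping whitespace, i.e. the " TRUE"/" FALSE" bad data that
# breaks a DT_BOOL conversion. Sample rows are illustrative.
def find_dirty_booleans(rows, column):
    dirty = []
    for i, row in enumerate(rows):
        value = row[column]
        if value != value.strip() and value.strip().upper() in ("TRUE", "FALSE"):
            dirty.append((i, value))
    return dirty

rows = [{"Active_Flag": "TRUE"}, {"Active_Flag": " FALSE"}]
print(find_dirty_booleans(rows, "Active_Flag"))  # -> [(1, ' FALSE')]
```

Feeding it rows from `csv.DictReader` over the real flat file would list exactly which lines need cleaning.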

wikipedia dump phpmyadmin

I'm trying to import the database file of Wikipedia (titles only, 163 MB) from http://dumps.wikimedia.org/enwiki/latest/ with phpMyAdmin (I have a WAMP configuration). I have already changed the values in php.ini, and I'm receiving this error:
There seems to be an error in your SQL query. The MySQL server error output below, if there is any, may also help you in diagnosing the problem
ERROR: Unknown Punctuation String # 1511
Where could the problem be? Do I need to change the collation from UTF8 to something else?
Thank you!
@user593712:
Do I need to change the collation from UTF8
No, you need to ensure that the database you're importing into is UTF-8. See http://www.mediawiki.org/wiki/Manual:Backing_up_a_wiki#Character_set
The error message tells you that there is an error around line 1511. Could you please check the contents around that line? Sometimes a single quote "'" must be replaced by "\'" to import into the database.
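Opening a 163 MB dump in an editor just to look at one line is painful; a small script can print a window of lines around the reported line number instead. A sketch (the dump file name is a placeholder):

```python
# Sketch: return (line_number, text) pairs around a given 1-based
# line number, so line 1511 of a large dump can be inspected
# without loading the whole file.
def lines_around(path, center, context=3, encoding="utf-8"):
    window = []
    with open(path, encoding=encoding, errors="replace") as f:
        for number, line in enumerate(f, start=1):
            if center - context <= number <= center + context:
                window.append((number, line.rstrip("\n")))
            elif number > center + context:
                break
    return window

# Hypothetical usage against the downloaded dump:
# for number, text in lines_around("enwiki-latest-all-titles.sql", 1511):
#     print(number, text)
```

Reading with `errors="replace"` also makes any mis-encoded bytes near that line visible as replacement characters, which helps with the character-set question above.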