How to insert the Performance operator output into a database in RapidMiner Studio? - rapidminer

I want to insert the data shown in the 2nd picture into a database. What should I do?
[Screenshot 1: home page process]
[Screenshot 2: result of the Performance operator]

You need to transform the output of the Performance operator into a regular example set. For this you can use the Performance to Data operator; then you can simply store the data in the database (a database can normally only handle regular tabular data).
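For illustration, once Performance to Data has produced an example set, storing it (e.g. with a Write Database operator) just fills an ordinary table. A hypothetical target table could look like this; the table name and columns are assumptions for illustration, not something the operator mandates:

-- Hypothetical table to receive the converted performance values
CREATE TABLE performance_log (
  criterion VARCHAR(64),   -- e.g. 'accuracy', 'classification_error'
  value     DOUBLE
);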

Related

MySQL - Invalid GIS data provided to function st_polygonfromtext

I have a table in MySQL with geometry data in one of the columns. The datatype is text and I need to save it as Polygon geometry.
I have tried a few solutions, but keep running into an "Invalid GIS data provided to function st_polygonfromtext" error.
Here's some data to work with and an example:
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=78ac63e16ccb5b1e4012c21809cba5ff
The table has 25k rows, and there are likely some bad geometries in there. When I attempt the update on a subset of rows, it seems to work, like it did in the fiddle example. It fails when I attempt to update all 25k rows.
Someone suggested wrapping the statements in TRY/CATCH: Detecting faulty geometry WKT and returning the faulty record
I am not too familiar with using those in MySQL, or with stored procedures either.
I need a spatial index on the table to be able to use spatial functions and filter queries by location.
Plan A: Create a new table and try to convert as you INSERT IGNORE INTO that table from your existing table (sketched below). I don't know if this will apply the "IGNORE" to conversion failures. Also, you would end up with only the "good" values. What do you want to do about the "bad" values?
Plan B: Write a loop in application code -- read one row, convert the varchar value, check for errors.
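A minimal sketch of Plan A, assuming a source table polygons_src with an INT id and a TEXT column wkt (all names hypothetical):

-- New table with a proper geometry column; in MySQL 8.0 a SPATIAL
-- index requires the geometry column to be NOT NULL.
CREATE TABLE polygons_new (
  id   INT PRIMARY KEY,
  geom POLYGON NOT NULL,
  SPATIAL INDEX (geom)
);

-- IGNORE downgrades errors to warnings; whether that also covers
-- ST_PolygonFromText() conversion failures is exactly the open question.
INSERT IGNORE INTO polygons_new (id, geom)
SELECT id, ST_PolygonFromText(wkt)
FROM polygons_src;

SHOW WARNINGS;  -- inspect which rows were skipped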

How to migrate CLOB column to (json) BLOB in DB2 without truncating data?

I have a DB2 11 database with a large table that has JSON data stored in a CLOB column. Given that I'd like to perform queries on it using the JSON_VAL function, I always need to use JSON2BSON to convert it first, which I assume is a significant overhead. I would like to move the data into another table that has exactly the same structure, except for the CLOB column which I'd like to replace with a BLOB one to store the JSON immediately in BLOB, hoping that this will speed up my queries.
My approach to this was writing a
insert into newtable (ID, BLOBDATA) select ID, SYSTOOLS.JSON2BSON(CLOBDATA) from oldtable;
After doing this I realized that long JSON objects got truncated. I googled this and learned that such SELECTs tend to truncate large objects.
I am reaching out here to see if there is any simple way for me to do this exercise without having to write a program to read out and write back all the data. (I have been burned by similar truncation before, when I used the DB2 CSV export features.)
Thanks.
Starting with Db2 11.1.4.4 there are new JSON functions based on the ISO technical paper. I would advise using them; they are the strategic functionality going forward.
You could use JSON_VALUE to perform the equivalent of what you planned to do with JSON_VAL.
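For example, a query in the style of the ISO functions, assuming the JSON documents contain a top-level "name" field (the field name and the VARCHAR length are hypothetical). JSON_VALUE reads the CLOB directly, with no JSON2BSON step:

SELECT ID,
       JSON_VALUE(CLOBDATA, '$.name' RETURNING VARCHAR(100)) AS NAME
FROM oldtable;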

Pentaho Kettle - How to produce update query based on result set?

I came up with an INSERT query generator in Pentaho Spoon that writes input data to a text file as a set of SQL statements.
I wonder if there is a similar method that can generate UPDATE queries based on the input.
Well, if you need to update a table based on some key columns compared against your stream, you may use the Insert/Update step.
The downside is that it won't generate the statements in a file; it will execute the updates or inserts based on that comparison, and that's all (the per-row logic is sketched below).
Can you give more details about your scenario? We may work things out together.
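For reference, the per-row logic the Insert/Update step effectively executes looks roughly like this (key column id and update field status are hypothetical names); when the key lookup finds no matching row, it issues an INSERT instead:

-- executed once per incoming stream row
UPDATE target_table
SET status = ?
WHERE id = ?;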
Why do you need a file with UPDATE statements?
Can't we connect to the database and run the updates right away?
Sure, use the "Dynamic SQL Row" step.

How to retrieve information after specific strings in my sql

I am making a CSV file with the sql2excel component for Joomla. It fetches the data via SQL queries, so I retrieve all the information through SQL. The problem is that some fields have extra information which I don't want to display to the end user.
For example, the fields show the data below when I retrieve by a query:
a:3:{s:4:"city";s:3:"000";s:3:"ext";s:3:"000";s:3:"tel";s:4:"0000";}
I only want the values that are in quotes, like this: 0000000000
Is there any way I can get this by an SQL query?
No.
These values are serialized and MySQL doesn't have functions to deserialize them. You would have to put a bridge between MySQL and sql2excel to decode them, but I'm not familiar with this tool so I can't be of much help.

MySQL: Dump a database from a SQL query

I'm writing a test framework in which I need to capture a MySQL database state (table structure, contents etc.).
I need this to implement a check that the state was not changed after certain operations. (Autoincrement values may be allowed to change, but I think I'll be able to handle this.)
The dump should preferably be in a human-readable format (preferably an SQL code, like mysqldump does).
I wish to limit my test framework to using a MySQL connection only. To capture the state it should not call mysqldump or access the filesystem (like copying *.frm files or doing SELECT ... INTO a file; pipes are fine, though).
As this would be test-only code, I'm not concerned by the performance. I do need reliable behavior though.
What is the best way to implement the functionality I need?
I guess I should base my code on some of the existing open-source backup tools... Which is the best one to look at?
Update: I'm not specifying the language I'm writing this in (no, it's not PHP), as I don't think I would be able to reuse code as is; my case is rather special (for practical purposes, let's assume the MySQL C API). The code would run on Linux.
Given your requirements, I think you are left with something like this (pseudo-code + SQL):
tables = mysql_fetch "SHOW TABLES"
foreach table in tables
    create = mysql_fetch "SHOW CREATE TABLE table"
    print create
    rows = mysql_fetch "SELECT * FROM table"
    foreach row in rows
        // or use the multi-row VALUES (v1, v2, ...), (v1, v2, ...) syntax (maybe preferable for smaller tables)
        insert = "INSERT INTO table (field1, field2, field3, ...) VALUES (value1, value2, value3, ...)"
        print insert
Basically, fetch the list of all tables, then walk each table and generate INSERT statements for each row by hand (most APIs have a simple way to fetch the list of column names; otherwise you can fall back to calling DESC table_name).
SHOW CREATE TABLE is done for you, but I'm fairly certain there's nothing analogous, like a SHOW INSERT ROWS.
And of course, instead of printing the dump you could do whatever you want with it.
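Concretely, the SQL involved would look something like this (table and column names are hypothetical):

SHOW TABLES;                  -- list every table in the schema
SHOW CREATE TABLE customers;  -- emits the DDL verbatim
SELECT * FROM customers;      -- fetch the rows to serialize
-- one generated statement per row (or batched in a multi-row VALUES list):
INSERT INTO customers (id, name, email)
VALUES (1, 'Ada', 'ada@example.com');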
If you don't want to use command-line tools (in other words, you want to do it completely within, say, PHP or whatever language you are using), then why not iterate over the tables using SQL itself? For example, to check the table structure, one simple technique would be to capture a snapshot of it with SHOW CREATE TABLE table_name, store the result, and then later make the call again and compare the results.
Have you looked at the source code for mysqldump? I am sure most of what you want would be contained within that.
DC
Unless you build the export yourself, I don't think there is a simple solution to export and verify the data. If you do it table by table, LOAD DATA INFILE and SELECT ... INTO OUTFILE may be helpful.
I find it easier to rebuild the database for every test. At least then I know the exact state of the data. Of course, it takes more time to run those tests, but it's a good incentive to abstract away the operations and write fewer tests that depend on the database.
Another alternative I use, on some projects where the design does not allow such a clean division, is InnoDB or some other transactional database engine. As long as you keep track of your transactions, or disable them during the test, you can simply start a transaction in setUp() and roll back in tearDown() (sketched below).
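The pattern in a nutshell; note that DDL statements in MySQL cause an implicit commit, which would break this isolation:

START TRANSACTION;  -- in setUp()
-- ... statements issued by the test ...
ROLLBACK;           -- in tearDown(); the data is back to its prior state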