I have a table in MySQL with geometry data stored in one of the columns. The column's datatype is TEXT and I need to save it as Polygon geometry.
I have tried a few solutions, but keep running into the error: Invalid GIS data provided to function st_polygonfromtext.
Here's some data to work with and an example:
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=78ac63e16ccb5b1e4012c21809cba5ff
The table has 25k rows, and there are likely some bad geometries in there. When I attempt the update on a subset of rows, it works, just like in the fiddle example. It fails when I attempt to update all 25k rows.
Someone suggested wrapping the statements in TRY/CATCH-style error handling, detecting the faulty geometry WKT and returning the faulty record.
I am not too familiar with doing that in MySQL, or with stored procedures in general.
I need a spatial index on the table to be able to use spatial functions and filter queries by location.
Plan A: Create a new table and try to convert as you INSERT IGNORE INTO that table from your existing table. I don't know if this will apply the "IGNORE" to conversion failures. Also, you would end up with the "good" values. What do you want to do about the "bad" values?
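A minimal sketch of Plan A might look like this, assuming made-up names (mytable, geom_text, id) and MySQL 8.0; whether IGNORE really downgrades the GIS conversion error to a warning is exactly the part I'm not sure about, so test it on a copy first:

CREATE TABLE mytable_new (
  id   INT PRIMARY KEY,
  geom POLYGON NOT NULL SRID 0,    -- spatial indexes require NOT NULL
  SPATIAL INDEX (geom)
);

INSERT IGNORE INTO mytable_new (id, geom)
SELECT id, ST_PolygonFromText(geom_text)
FROM mytable;

-- the "bad" rows are whatever did not make it across
SELECT t.id
FROM mytable AS t
LEFT JOIN mytable_new AS n ON n.id = t.id
WHERE n.id IS NULL;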
Plan B: Write a loop in application code -- read one row, convert the varchar value, check for errors.
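If you would rather keep the loop in the database, the MySQL stand-in for TRY/CATCH is a DECLARE ... HANDLER block in a stored procedure. A rough sketch of that idea (all names are assumptions; it presumes you have added a nullable geom POLYGON column to mytable and a bad_geometries table for logging):

DELIMITER $$
CREATE PROCEDURE convert_geometries()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE v_id INT;
  DECLARE v_wkt TEXT;
  DECLARE cur CURSOR FOR SELECT id, geom_text FROM mytable;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO v_id, v_wkt;
    IF done THEN
      LEAVE read_loop;
    END IF;
    BEGIN
      -- any conversion error lands here: log the offending row and carry on
      DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
        INSERT INTO bad_geometries (id, wkt) VALUES (v_id, v_wkt);
      UPDATE mytable SET geom = ST_PolygonFromText(v_wkt) WHERE id = v_id;
    END;
  END LOOP;
  CLOSE cur;
END$$
DELIMITER ;

Row-by-row processing is slow for 25k rows, but it pinpoints every faulty WKT value instead of failing the whole statement.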
I'll start by saying I'm new to MySQL, at least at the level of this question. :)
I got a data logger with a high data output and I'm interested in saving the data to a database.
I've been wondering if it's possible to filter the INSERT query in the database itself, so that it only saves the data if certain values appear in the query.
As #Akina mentioned, you can use a CHECK constraint together with INSERT IGNORE. However, it is better not to try to insert problematic data at all, since rejected rows slow down the insert operation.
You need to filter the data before the insert operation. You may want to consider writing a custom log shipper, or, if you have the option, you can use Logstash.
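A minimal sketch of the CHECK constraint + INSERT IGNORE idea, assuming MySQL 8.0.16 or later (where CHECK constraints are enforced) and made-up table and column names:

CREATE TABLE readings (
  id          INT AUTO_INCREMENT PRIMARY KEY,
  sensor      VARCHAR(32)  NOT NULL,
  temperature DECIMAL(5,2) NOT NULL,
  CONSTRAINT chk_temperature CHECK (temperature BETWEEN -40 AND 125)
);

-- with IGNORE, a row that violates the constraint is skipped with a warning
-- instead of aborting the whole statement
INSERT IGNORE INTO readings (sensor, temperature)
VALUES ('probe-1', 900.00),   -- skipped
       ('probe-1', 21.50);    -- stored

Filtering before the INSERT, as suggested above, avoids even sending the bad rows to the server.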
I have a DB2 11 database with a large table that has JSON data stored in a CLOB column. Given that I'd like to perform queries on it using the JSON_VAL function, I always need to use JSON2BSON to convert it first, which I assume is a significant overhead. I would like to move the data into another table that has exactly the same structure, except for the CLOB column which I'd like to replace with a BLOB one to store the JSON immediately in BLOB, hoping that this will speed up my queries.
My approach to this was to write an INSERT like this:
insert into newtable (ID, BLOBDATA) select ID, SYSTOOLS.JSON2BSON(CLOBDATA) from oldtable;
After doing this I realized that long JSON objects got truncated. I have googled this and learned that SELECTs can truncate large objects.
I am reaching out here to see if there is any simple way for me to do this exercise without having to write a program to read out and write back all the data. (I had been burnt by similar truncation when I used the DB2 CSV export features.)
Thanks.
Starting with Db2 11.1.4.4 there are new JSON functions based on the ISO technical paper. I would advise using them, as they are the strategic functionality going forward.
You could use JSON_VALUE to perform the equivalent of what you planned to do with JSON_VAL.
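For example, assuming the documents have a top-level field called name (the field name, the VARCHAR length, and the table name are assumptions), the two styles look roughly like this:

-- current approach: convert the CLOB to BSON on every query
SELECT SYSTOOLS.JSON_VAL(SYSTOOLS.JSON2BSON(CLOBDATA), 'name', 's:64')
FROM oldtable;

-- ISO-based approach (Db2 11.1.4.4 and later): query the CLOB directly
SELECT JSON_VALUE(CLOBDATA, '$.name' RETURNING VARCHAR(64))
FROM oldtable;

With JSON_VALUE there is no separate BSON conversion step, so you may not need to copy the data into a BLOB table at all.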
PROCEDURE ANALYSE() suggests optimal field types for my columns. I want to create a new table with the optimal field types, starting from a table that I already have. At the moment I'm running
SELECT * FROM mytable PROCEDURE ANALYSE();
Then I copy the report and manually write the CREATE statement. Is there a way to do that automatically? Also, is it more efficient to alter the existing table with the new field types, or to create a new empty table with the optimal field types and re-import the data?
In truth, you would not want to blindly accept the data types returned by this analysis, as you can never be sure how appropriate the "suggested" types really are. The procedure returns "suggested" optimal data types that "may" help reduce data storage requirements. The return values also depend on the data in the table you're selecting from, and could change each time you run the query against new data.
Read more here on dev.mysql
But if you wanted to try something, I would start by building a procedure of my own that passes the recommended data types into a dynamically created DDL statement, which you would then check for possible incorrect data types before executing the resulting DDL. It might take a little working out in terms of your code, but you really should read more on PROCEDURE ANALYSE() first.
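If you stick with the manual route for now, the pattern is simply to read the report and apply only the suggestions you agree with; the column names and types below are made-up examples:

-- smaller arguments keep ANALYSE from suggesting ENUM(...) for every column
SELECT * FROM mytable PROCEDURE ANALYSE(16, 256);

-- then apply only the suggestions that make sense
ALTER TABLE mytable
  MODIFY COLUMN status   TINYINT UNSIGNED NOT NULL,
  MODIFY COLUMN category VARCHAR(20) NOT NULL;

Either way MySQL has to rewrite the rows when a column type changes, so altering in place and rebuilding into a new table cost roughly the same; the new-table route just lets you keep the original around until you have verified the result.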
I have no idea whether this can be done or not, but basically, I have the following data flow:
1. Extracts the data from an XML file (works fine)
2. Simply splits the records based on an enclosed condition (works fine)
3. Had to add a derived column object due to some character set issues (there might be better methods, but it works)
Now "Step 4" is where I'm running into a scenario where I'd only like to insert the values that have a corresponding match in my database, for instance, the XML has about 6000 records, and from those, I have maybe 10 of them that I need to match back against and insert them instead of inserting all 6000 of them and doing the compare after the fact (which I could also do, but was hoping there'd be another method). I was thinking that I might be able to perform a sql insert command within the OLE DB DESTINATION object where the ID value in the file matches, but that's what I'm not 100% clear on or if it's even possible for that matter. Should I simply go the temp table route and scrub the data after the fact, or can I do this directly in the destination piece? Any suggestions would be greatly appreciated.
EDIT
Thanks to the last comment from billinkc, I managed to get a bit closer: I can identify the matches and use that result set, but somehow the data flow seems to run twice, which is strange. I took the Lookup object out to see whether it was the cause, and it seems it was. Is there any reason the entire flow would run twice once the Lookup is added? I should have a total of 8 matches, which I confirmed with the data viewer output, but then it seems to run a second time for the same file.
Is there a reason you can't use a Lookup transformation to find existing records? Configure it so that it routes non-matching records to the no-match output, and then connect only the match-found connector to the "Navigator Staging Manager Funds".
I believe that answers what you've asked, but I wonder if you're expressing the right desire. My assumption is that the lookup goes against the existing destination, so the lookup returns the id 10 for a row. All of the out-of-the-box destinations in SSIS only perform inserts, so the row that found a match would now get doubled.

Since you are looking for existing rows, that usually implies you want to perform an update to an existing row. If that's the case, there is a specially designed transformation, the OLE DB Command, which is the component that allows for updates. There is a performance problem with that component: it issues a single update statement per row flowing through it. For 10 rows, I think it'd be fine.

Otherwise, the pattern you'd use is to write all the new rows (inserts) into your destination table and all of your changed rows (updates) into a second staging-type table. After the data flow is complete, use an Execute SQL Task to perform a set-based update statement.
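The set-based statement in the Execute SQL Task is usually just an UPDATE joined to the staging table, something along these lines (all object and column names here are assumptions):

UPDATE d
SET    d.FundName = s.FundName,
       d.Amount   = s.Amount
FROM   dbo.Funds         AS d
JOIN   dbo.Funds_Staging AS s
  ON   s.FundId = d.FundId;

One statement updates every staged row at once, which is what makes it so much cheaper than the per-row OLE DB Command.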
There are third party options that handle combined upserts. I know Pragmatic Works has an option and there are probably others on the tasks and components site.
I'm trying to manually manage some geometry (spatial) columns in a rails model.
When updating the geometry column I do this in rails:
self.geom="POINTFROMTEXT('POINT(#{lat},#{lng})')"
This is the value I want to end up in the SQL update so that it is evaluated by the database. However, by the time it has been through the ActiveRecord magic, it comes out as:
INSERT INTO `places` (..., `geom`) VALUES(...,'POINTFROMTEXT(\'POINT(52.2531519,20.9778386)\')')
In other words, the quotes are escaped. This is fine for the other columns, as it prevents SQL injection, but not for this one. The values are guaranteed to be floats, and I want the update to look like:
INSERT INTO `places` (..., `geom`) VALUES(...,'POINTFROMTEXT('POINT(52.2531519,20.9778386)')')
So is there a way to turn escaping off for a particular column? Or a better way to do this?
(I've tried using GeoRuby + Spatial Adapter, but Spatial Adapter seems too buggy to me, plus I don't need all the functionality, hence trying to do it directly.)
The Rails Spatial Adapter should implement exactly what you need. Although, before I found GeoRuby & Spatial Adapter, I was doing this:
Have two fields on the model: a text field and a real geometry field
In an after_save hook, I ran something like this:
connection.execute "update mytable set geom_column=#{text_column} where id=#{id}"
But the solution above was just a hack, and it has additional issues: I can't create a spatial index if the column allows NULL values, MySQL doesn't let me set a default value on a geometry column, and the save method fails if the geometry column doesn't have a value set.
So I would try GeoRuby & Spatial Adapter instead, or reuse some of its code (in my case, I am considering extracting only the GIS-aware MysqlAdapter#quote method from the Spatial Adapter code).
You can use an after_save callback and write the values with a direct SQL UPDATE call. Annoying, but it should work.
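The UPDATE issued from that after_save callback is plain SQL along these lines (the table name and id are made up; note that MySQL's WKT parser expects a space, not a comma, between the coordinates):

UPDATE places
SET    geom = POINTFROMTEXT('POINT(52.2531519 20.9778386)')
WHERE  id = 42;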
You should be able to create a trigger in your DB migration using the 'execute' method... but I've never tried it.
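For example, the migration could execute a hypothetical trigger like the one below, which keeps the raw WKT in a plain text column (geom_wkt here is an invented column) and lets MySQL populate the real geometry column; an analogous BEFORE UPDATE trigger would be needed as well:

CREATE TRIGGER places_set_geom
BEFORE INSERT ON places
FOR EACH ROW
  SET NEW.geom = POINTFROMTEXT(NEW.geom_wkt);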
Dig into ActiveRecord's calculate functionality: max/min/avg, etc. Not sure whether this saves you much over the direct SQL call in after_save. See calculations.rb.
You could patch the function that quotes the attributes (looking for POINTFROMTEXT and then skipping the quoting). This is pretty easy to find, as all the relevant methods start with quote. Start with ActiveRecord::Base#quote_value.