Importing Geometry from MSSQL to MySQL (Linestring) - mysql

I've been given some data which I'm trying to import into MySQL. The data was provided as a text file, which is usually fine by me; I know MSSQL uses different data types, so a SQL dump was a non-starter...
For some reason MSSQL must store LINESTRINGs in reverse order, which seemed very odd to me. As a result, when I try to upload the file with Navicat the import fails. Below is an example of the LINESTRING; as you can see, the longitude comes first, then the latitude, which is what I believe to be the issue.
LINESTRING (-1.61674 54.9828,-1.61625 54.9828)
Does anybody know how I can get this data into my database?
I'm quite new to the spatial/geometry extensions.
Thanks,
Paul

You must remember that columns with spatial data have their own data type; Navicat just calls ToString()/AsText() to display the data, but behind the scenes they are BLOBs. The advantage is that both databases are based on the WKT (Well-Known Text) standard. I recommend exporting the spatial data from the source database as text, then in the destination database taking that text and passing it to GeomFromText() to convert it back into geometry. Obviously you have to write a script in some programming language; you can't do that with Navicat alone.
Further reading: the WKT format, MySQL spatial extensions, SQL Server spatial data.
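A minimal sketch of such a script, assuming pyodbc on the MSSQL side and mysql-connector-python on the MySQL side; the table and column names are purely illustrative:

# Sketch: copy spatial rows from MSSQL to MySQL by round-tripping through WKT.
# Assumes pyodbc and mysql-connector-python are installed, and a table
# routes(id, path GEOMETRY) exists on both sides -- names are illustrative.
import pyodbc
import mysql.connector

src = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=...;DATABASE=...;UID=...;PWD=...")
dst = mysql.connector.connect(host="localhost", user="root", password="...", database="geodb")
src_cur = src.cursor()
dst_cur = dst.cursor()

# Ask SQL Server for the geometry as WKT text, e.g. "LINESTRING (-1.61674 54.9828, ...)".
src_cur.execute("SELECT id, path.STAsText() FROM routes")

for row_id, wkt in src_cur.fetchall():
    # Rebuild the geometry on the MySQL side from the same WKT.
    # (MySQL 5.7+ uses ST_GeomFromText; older versions call it GeomFromText.)
    dst_cur.execute(
        "INSERT INTO routes (id, path) VALUES (%s, ST_GeomFromText(%s))",
        (row_id, wkt),
    )

dst.commit()

Round-tripping through WKT sidesteps the binary storage differences between the two engines, since both understand the same text representation.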

Related

How to convert Excel to SQL (I have 143,864 rows and 100 columns in Excel, 48,316 KB total)

I convert the Excel file to CSV first, then import it with phpMyAdmin, but it only imports 100 rows. I changed the buffer size in config.inc.php but that still did not change the result. Could you please help me?
My main reason for doing this is to compare two tables in MySQL Workbench. I already have one table in SQL; I need to convert the Excel data to SQL so that I can use "Compare Schemas" after creating an EER model of the existing database.
It's good that you described the purpose of this approach. That way I can tell you in advance that converting the Excel data to a MySQL table will not help.
The model features (sync, compare, etc.) all work on metadata only. They do not consider any table content. Instead, you should do a textual comparison by converting the table you have on the server to CSV, as sketched below.
Comparing such large documents is, however, a challenge. If you only have a few changes then a diff tool (a visual one like Araxis Merge, or diff on the command line) may help. For larger changesets a small utility app (perhaps self-written) might be necessary.
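One way to get the server-side table into CSV for that textual comparison is a small script. This is only a sketch, assuming mysql-connector-python and an illustrative table name:

# Sketch: dump a MySQL table to CSV so it can be diffed against the
# CSV exported from Excel. Table and column names are illustrative.
import csv
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="...", database="mydb")
cur = conn.cursor()
cur.execute("SELECT * FROM existing_table ORDER BY id")

with open("existing_table.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])  # header row from column names
    for row in cur:
        writer.writerow(row)

conn.close()

The two CSV files can then be compared with an ordinary diff tool as described above.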

Convert PostgreSQL bytea column to MySql blob

I am migrating a database from PostgreSQL to MySQL.
We were saving files in the database as PostgreSQL bytea columns. I wrote a script to export the bytea data and then insert it into a new MySQL database as a BLOB. The data is inserting into MySQL fine, but it is not working at the application level. However, the application should not care, as the data is exactly the same. I am not sure what is wrong, but I feel like it is some difference between MySQL and PostgreSQL. Any help would be greatly appreciated.
This could really be any number of issues, but I can provide some tips regarding converting binary data between SQL vendors.
The first thing to be aware of is that each SQL database vendor uses different escape characters. I suspect that your binary data export is using hex, and you most likely have unwanted escape characters when you import into your new database.
I recently had to do this. The exported binary data was in hex and vendor specific escape characters were included.
In your new database, check whether the text value of the binary data starts with an 'x' or some unusual encoding. If it does, you need to get rid of this. Since you already have the data inserting properly, you can just write a SQL script to remove any unwanted vendor-specific escape characters from each imported binary data record in your new database. Finally, you may need to unhex each new record.
So, something like this worked for me:
UPDATE my_example_table
SET my_blob_column = UNHEX(SUBSTRING(my_blob_column, 2, CHAR_LENGTH(my_blob_column)))
Note: the 2 in the SUBSTRING function is because the export script was using hex and prepending '\x' as a vendor-specific escape character.
I am not sure that will work for you, but it may be worth a try.
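Another way around the problem, if re-running the migration is an option, is to move the bytea values over as raw bytes in the first place. The following is only a sketch, assuming psycopg2 and mysql-connector-python; the table and column names are illustrative:

# Sketch: copy bytea values from PostgreSQL into a MySQL BLOB column as
# raw bytes, so no hex/escape cleanup is needed afterwards.
# Assumes psycopg2 and mysql-connector-python; names are illustrative.
import psycopg2
import mysql.connector

pg = psycopg2.connect(host="localhost", dbname="olddb", user="postgres", password="...")
my = mysql.connector.connect(host="localhost", user="root", password="...", database="newdb")

pg_cur = pg.cursor()
my_cur = my.cursor()

pg_cur.execute("SELECT id, file_data FROM my_example_table")

for row_id, data in pg_cur:
    # psycopg2 returns bytea as a bytes/memoryview object, which the MySQL
    # driver sends as binary -- no '\x' prefix ever appears.
    my_cur.execute(
        "INSERT INTO my_example_table (id, my_blob_column) VALUES (%s, %s)",
        (row_id, bytes(data)),
    )

my.commit()

Because the bytes never pass through a text dump, there is no hex prefix or escape character to clean up afterwards.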

What is the best way to store a pretty large JSON object in MySQL

I'm building a Laravel app whose core features are driven by rather large JSON objects (the largest are between 1,000 and 1,500 lines).
I know there are better database choices than MySQL for storing files and blocks of data, but for various reasons I will need to use MySQL for this application.
So my question is: how do I store my JSON objects most effectively in MySQL? I will not need to run any queries on the column that holds the data; there will be other columns for identifying it. Something like this:
id, title, created-at, updated-at, JSON-blobthingy
Any ideas?
You could use the JSON data type if you have MySQL version 5.7.8 or above.
You could store the JSON file on the server, and simply reference its location via MySQL.
You could also use one of the TEXT types.
The best answer I can give is to use MySQL 5.7. This version supports the new JSON column type, which handles large JSON documents very well (obviously).
https://dev.mysql.com/doc/refman/5.7/en/json.html
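A minimal sketch of the JSON-column approach, written in Python here just for brevity (the same idea applies from Laravel/PHP); it assumes MySQL 5.7.8+ and mysql-connector-python, and the table name is made up:

# Sketch: a JSON column on MySQL 5.7.8+, written from Python.
# Assumes mysql-connector-python; table/column names are illustrative.
import json
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="...", database="app")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id INT AUTO_INCREMENT PRIMARY KEY,
        title VARCHAR(255),
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        payload JSON
    )
""")

doc = {"settings": {"theme": "dark"}, "items": [1, 2, 3]}
cur.execute(
    "INSERT INTO documents (title, payload) VALUES (%s, %s)",
    ("example", json.dumps(doc)),
)
conn.commit()

MySQL validates the JSON on insert and stores it in an optimized binary form, so large documents remain cheap to read back.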
You could compress the data before inserting it if you don't need it to be searchable. I'm using the zlib library for that.
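Roughly, and assuming Python's standard zlib module, that looks like this (the payload and variable names are illustrative):

# Sketch: compress the JSON text before storing it in a BLOB/LONGBLOB column
# when the contents never need to be queried. Names are illustrative.
import json
import zlib

payload = {"title": "example", "items": list(range(1000))}

compressed = zlib.compress(json.dumps(payload).encode("utf-8"))
# ... store `compressed` in a BLOB/LONGBLOB column ...

restored = json.loads(zlib.decompress(compressed).decode("utf-8"))
assert restored == payload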
You can simply use the LONGBLOB type, which can hold up to 4GB of data, for the column holding the large JSON object; you can insert, update, and read that column normally, as if it were text or anything else.

Big data migration from Oracle to MySQL

I received over 100GB of data with 67 million records from one of the retailers. My objective is to do some market-basket analysis and CLV. This data is a direct SQL dump from one of their tables, with 70 columns. I'm trying to find a way to extract information from this data, as managing it on a small laptop/desktop setup is becoming time consuming. I considered the following options:
Parse the data and convert it to CSV format. The file size might come down to around 35-40GB, as more than half of the information in each record is column names. However, I may still have to use a database, as I can't use R or Excel with 66 million records.
Migrate the data to a MySQL database. Unfortunately I don't have the schema for the table, so I'm trying to recreate it by looking at the data. I may have to replace to_date() in the data dump with str_to_date() to match the MySQL format.
Is there any better way to handle this? All I need to do is extract the data from the SQL dump by running some queries. Hadoop etc. are options, but I don't have the infrastructure to set up a cluster. I'm considering MySQL as I have storage space and some memory to spare.
Suppose I go down the MySQL path, how would I import the data? I'm considering one of the following:
Use sed to replace to_date() with the appropriate str_to_date() inline. Note that I need to do this for a 100GB file. Then import the data using the mysql CLI.
Write a Python/Perl script that will read the file, convert the data, and write to MySQL directly.
Which would be faster? Thank you for your help.
In my opinion writing a script will be faster, because you get to skip the sed pass entirely.
I think you should set up the server on a separate PC and run the script from your laptop.
Also use tail to quickly grab a chunk from the bottom of this large file, so you can test your script on that part before running it against the full 100GB file.
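A minimal sketch of such a conversion script, assuming the dump is a series of single-line INSERT statements and that the MySQLdb module is available; the date format, regex, and connection details are illustrative and would need adjusting to the real dump:

# Sketch: stream the Oracle dump, rewrite TO_DATE(...) calls into MySQL's
# STR_TO_DATE(...), and replay the INSERT statements against MySQL.
# Assumes one statement per line; adjust the regex to the real dump.
import re
import MySQLdb

# Oracle: TO_DATE('2014-01-31', 'YYYY-MM-DD')  ->  MySQL: STR_TO_DATE('2014-01-31', '%Y-%m-%d')
TO_DATE_RE = re.compile(r"TO_DATE\('([^']*)',\s*'YYYY-MM-DD'\)", re.IGNORECASE)

conn = MySQLdb.connect(host="localhost", user="root", passwd="...", db="retail")
cur = conn.cursor()

with open("oracle_dump.sql") as dump:
    for line in dump:
        stmt = TO_DATE_RE.sub(r"STR_TO_DATE('\1', '%Y-%m-%d')", line).strip().rstrip(";")
        if stmt.upper().startswith("INSERT"):
            try:
                cur.execute(stmt)
            except MySQLdb.MySQLError:
                # Log and skip bad records instead of aborting the whole load.
                pass

conn.commit()

Running it first against a small slice produced with tail, as suggested above, is a cheap way to catch schema or format mismatches before committing to the full 100GB file.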
I decided to go with the MySQL path. I created the schema by looking at the data (I had to increase a few of the column sizes, as there were unexpected variations in the data) and wrote a Python script using the MySQLdb module. The import completed in 4 hours 40 minutes on my 2011 MacBook Pro, with 8,154 failures out of 67 million records; those failures were mostly data issues. Both client and server were running on my MBP.
@kpopovbg, yes, writing a script was faster. Thank you.

Pictures using Postgres and Xojo

I have converted from a MySQL database to Postgres. During the conversion, the picture column in Postgres was created as bytea.
This Xojo code works in MySQL but not Postgres.
Dim mImage as Picture
mImage = rs.Field("Picture").PictureValue
Any ideas?
I don't know about this particular issue, but here's what you can do to find out yourself, perhaps:
Pictures are stored as BLOBs in the database. This means the column must also be declared as a BLOB (or a similar binary type). If it was accidentally declared as TEXT, things would still work as long as the database never gets exported by other means; i.e., as long as only your Xojo code reads and writes the record using the PictureValue functions, that keeps the data in BLOB form. But if you then convert to another database, the BLOB data would be read as text, and in that process it might get mangled.
So it may be relevant to let us know how you converted the DB. Did you perform an export as SQL commands and then import it into Postgres by running those commands again? Do you still have the export file? If so, find a record with picture data in it and see whether that data starts with x' followed by hex byte codes, e.g. x'45FE1200... and so on. If it doesn't, that's another indicator for my suspicion.
So, check the type of the Picture column in your old DB first. If it specifies a binary data type, then the above probably does not apply.
Next, you can look at the actual binary data that Xojo reads. To do that, get the BlobValue instead of the PictureValue and store it in a MemoryBlock. Do the same for a single picture in both the old and the new database. The MemoryBlocks should contain the same bytes. If not, that would suggest the data was not transferred correctly. Why? Well, that depends on how you converted it.
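If it is easier to inspect the stored bytes outside of Xojo, the same comparison can be done with a small script. This is only a sketch in Python, assuming mysql-connector-python and psycopg2, with illustrative table and column names:

# Sketch: fetch the raw bytes of one picture from the old MySQL database
# and the new Postgres database, then compare them. Names are illustrative.
import mysql.connector
import psycopg2

old_db = mysql.connector.connect(host="localhost", user="root", password="...", database="olddb")
new_db = psycopg2.connect(host="localhost", dbname="newdb", user="postgres", password="...")

old_cur = old_db.cursor()
new_cur = new_db.cursor()

old_cur.execute("SELECT picture FROM photos WHERE id = %s", (1,))
old_bytes = bytes(old_cur.fetchone()[0])

new_cur.execute("SELECT picture FROM photos WHERE id = %s", (1,))
new_bytes = bytes(new_cur.fetchone()[0])

print(len(old_bytes), len(new_bytes), old_bytes == new_bytes)
print(old_bytes[:8], new_bytes[:8])  # a leading x' or \x here points at a mangled export

If the two byte strings differ only by a prefix or by being hex-encoded, that points at the export/import step rather than at the Xojo code.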