Pictures using Postgres and Xojo - mysql

I have converted from a MySQL database to Postgres. During the conversion, the picture column in Postgres was created as bytea.
This Xojo code works in MySQL but not Postgres.
Dim mImage as Picture
mImage = rs.Field("Picture").PictureValue
Any ideas?

I don't know about this particular issue, but here's what you can do to find out yourself, perhaps:
Pictures are stored as BLOBs in the database, which means the column must also be declared as BLOB (or a similar binary type). If it was accidentally declared as TEXT, things would still appear to work as long as the database never gets exported by other means. That is, as long as only your Xojo code reads and writes the record via the PictureValue functions, the data stays intact in BLOB form. But once you convert to another database, the BLOB data gets read as text, and in that process it can get mangled.
So it may be relevant to let us know how you converted the DB. Did you perform an export as SQL commands and then import it into Postgres by running those commands? Do you still have the export file? If so, find a record with picture data in it and check whether that data starts with x' followed by hex byte code, e.g. x'45FE1200... and so on. If it doesn't, that's another indicator for my suspicion.
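For illustration, an export line that kept the binary intact would look something like this (the values here are made up and show a JPEG's leading bytes):
INSERT INTO photos (id, Picture) VALUES (1, x'FFD8FFE000104A464946...');
If the picture data in the export shows up as readable or garbled text instead of such a hex literal, that points to the TEXT/BLOB mix-up described above.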
So, check the type of the Picture column in your old DB first. If that specifies a binary data type, then the above probably does not apply.
Next, you can look at the actual binary data that Xojo reads. To do that, get the BlobValue instead of the PictureValue and store it in a MemoryBlock. Do this for a single picture with both the old and the new database. The MemoryBlocks should contain the same bytes. If they don't, the data was not transferred correctly. Why? Well, that depends on how you converted it.
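If you'd rather start at the database level, you can also compare the first few bytes of one and the same picture in both databases; the table and column names below are placeholders, so adjust them to your schema. A JPEG should start with ffd8ff, a PNG with 89504e47, and the two results should match each other:
-- old MySQL database
SELECT id, HEX(SUBSTRING(picture, 1, 8)) AS first_bytes FROM photos WHERE id = 1;
-- new PostgreSQL database
SELECT id, encode(substring(picture from 1 for 8), 'hex') AS first_bytes FROM photos WHERE id = 1;
If the Postgres value starts with 5c78 or 78 (the byte codes for \x or x), the escape prefix from the export ended up inside the data itself.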

Related

Reading Encrypted data with Datastage Tool

I need your help with the DataStage 11.7 tool. I am reading an AES-encrypted column from my source; the column type is nvarchar. When we start the job and read the data from the source, the job runs successfully and exactly the same data is moved to my target database, with the same column type.
The problem occurs when I query the data to check whether my source and target values are the same: the query does not return any result. Visually, the source and target values look identical, but the SQL statement returns nothing. The target database is Vertica.
The column values are alphanumeric with special characters, like �D�&7��x��d$�Q
I'm not at all sure this is even properly possible via DataStage - treating encrypted data as a varchar. Some DBs have internal keys that go with the data and require decrypting before extracting. I'm assuming that decrypting, transporting, landing and then re-encrypting is not an option.
But if I had to take a stab in the dark.
The very first thing I'd check is that the character set and collation is the same on both databases on a table level. A difference can result in blank results on the target side.
Also check that the NLS map in DataStage (the map for stages and the collation locale) is set accordingly. What that setting should be, I don't know, but making it the same in DataStage and in both databases would be ideal; Google it. You'll need to find out what is already set in the DBs, and run tests. I'm not sure the DataStage default of ISO-8859-1 will work.
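One quick way to see whether an encoding change is the culprit, assuming both databases support the SQL-standard CHARACTER_LENGTH() and OCTET_LENGTH() functions (the table and column names below are placeholders), is to compare character count against byte count for the same row on both sides:
SELECT encrypted_col,
       CHARACTER_LENGTH(encrypted_col) AS char_count,
       OCTET_LENGTH(encrypted_col)     AS byte_count
FROM   my_table
WHERE  row_key = 42;
If the values look the same but the byte counts differ between source and target, the data was re-encoded in transit and an equality comparison will never match.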
Please post your solution if you find one.

Convert PostgreSQL bytea column to MySql blob

I am migrating a database from PostgreSQL to MySQL.
We were saving files in the database as PostgreSQL bytea columns. I wrote a script to export the bytea data and then insert it into a new MySQL database as a blob. The data inserts into MySQL fine, but it is not working at the application level. However, the application should not care, as the data is exactly the same. I am not sure what is wrong, but I feel like it is some difference between MySQL and PostgreSQL. Any help would be greatly appreciated.
This could really be a number of issues, but I can provide some tips with regard to converting binary data between SQL vendors.
The first thing you need to be aware of is that each sql database vendor uses different escape characters. I suspect that your binary data export is using hex and you most likely have unwanted escape characters when you import to your new database.
I recently had to do this. The exported binary data was in hex and vendor specific escape characters were included.
In your new database, check whether the text value of the binary data starts with an 'x' or some unusual encoding. If it does, you need to get rid of it. Since you already have the data inserting properly, you can test by writing an SQL script that removes any unwanted vendor-specific escape characters from each imported binary record in your new database. Finally, you may need to unhex each new record.
So, something like this worked for me:
UPDATE my_example_table
SET my_blob_column = UNHEX(SUBSTRING(my_blob_column, 2, CHAR_LENGTH(my_blob_column)))
Note: the 2 in the SUBSTRING function is there because the export script was using hex and prepending '\x' as a vendor-specific escape character.
I am not sure that will work for you, but it may be worth a try.
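For what it's worth, a cleaner route is to export the bytea as plain hex on the PostgreSQL side and unhex it on the MySQL side, so no escape prefix ever ends up in the data. A rough sketch with made-up table and column names:
-- PostgreSQL: encode() emits plain hex digits, without the \x prefix of the default bytea output
SELECT id, encode(file_data, 'hex') AS file_hex FROM attachments;
-- MySQL: convert the exported hex string back to binary on insert (value truncated here)
INSERT INTO attachments (id, file_data) VALUES (1, UNHEX('89504e470d0a1a0a'));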

Data Cleanse ENTIRE Access Table of Specific Value (SQL Update Query Issues)

I've been searching for a quick way to do this after my first few thoughts have failed me, but I haven't found anything.
My Issue
I'm importing raw client data into an Access database, where the flat file they provide is parsed and converted into a standardized format for our organization. I do this for all of our clients, but this particular client's software gives us a file that puts "(NULL)" in every field that should be Null. As a result, I have a ton of strings rather than null fields!
My goal is to do a data cleanse of the entire TABLE, rather than perform the cleanse at the FIELD level (as I do in my temporary solution below).
Data Cleanse
Temporary Solution:
I can't add those strings to our data warehouse, so for now I just have a query with an IIF statement that replaces "(NULL)" with "" for each field (which took a while to set up, since the client file has roughly 96 fields). This works. However, we work with hundreds of clients, so I'd like a scalable solution that doesn't require many changes if another client has a similar file; not to mention that if this client changes something in their file, I might have to redo my field-specific statements.
Long-term Solution:
My first thought was an UPDATE query. I was hoping I could do something like:
UPDATE [ImportedRaw_T]
SET [ImportedRaw_T].* = ""
WHERE ((([ImportedRaw_T].* = "(NULL)")));
This would be easily scalable, since for other clients I'd only need to change the table name and replace "(NULL)" with their particular default. Unfortunately, you can't use a * wildcard like that in an UPDATE query.
Can anyone think of a work-around to the wildcard issue for the update query, or a better solution for cleansing an entire table rather than doing the cleanse at the field level?
SIDE NOTES
This conversion is 100% automated currently (Access is called via a watch folder batch), so anything requiring manual data manipulation / human intervention is out.
I've tried using a batch script to cleanse the data in the .txt file before importing it into Access - however, this broke the fixed-width format of the .txt, which caused even larger issues with the automatic import of the file into Access. So I'd prefer to do this in Access if possible.
Any thoughts and suggestions are greatly appreciated. Thanks!
Unfortunately, it's impossible to implement this in SQL using wildcards instead of column names; there is no such syntax.
I would suggest a VBA solution: cycle through all the table's fields and, for each field whose data type is string, generate and execute a SQL UPDATE command for that field (see the sketch below).
Also, use Null instead of "" if you really need Nulls in the fields instead of empty strings; they may work differently in calculations.
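For reference, the statement such a loop would generate and execute for each text field could look like this (the field name is a placeholder; the VBA simply substitutes each of the roughly 96 field names in turn):
UPDATE [ImportedRaw_T]
SET [SomeTextField] = Null
WHERE [SomeTextField] = "(NULL)";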

How can I insert a file into a table in SQL?

I haven't seen any data type that can store a file in SQL. Is there something like that? What I'm particularly talking about is that I want to insert source code into my table. What is the best method to do it? It could either be stored in my database as nicely formatted text or, better (what I actually want), stored as a single file. Please note that I'm using MySQL.
It is best not to store a file in your SQL database but to store a path to the file on the server, or any other UNC path, that your application can retrieve by itself and do with it whatever is necessary.
see this: https://softwareengineering.stackexchange.com/questions/150669/is-it-a-bad-practice-to-store-large-files-10-mb-in-a-database
and this:
Better way to store large files in a MySQL database?
and if you still want to store the file on the DB.. here is an example:
http://mirificampress.com/permalink/saving_a_file_into_mysql
If you can serialize the file, you can store it as binary and then deserialize it when needed:
http://dev.mysql.com/doc/refman/5.0/en/binary-varbinary.html
You can also use a BLOB (http://dev.mysql.com/doc/refman/5.0/en/blob.html), which has some differences. Normally I just store the file in the filesystem and a pointer in the DB, which makes serving it back via something like HTTP a bit easier and doesn't bloat up the database.
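To make that concrete, a minimal sketch of the pointer approach (all names made up) could be:
CREATE TABLE source_files (
    id          INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    file_name   VARCHAR(255)  NOT NULL,
    file_path   VARCHAR(1024) NOT NULL,  -- server-local or UNC path to the actual file
    byte_size   INT UNSIGNED,
    uploaded_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
The application reads file_path from the row and serves the file straight from disk.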
Storing the file in a table only makes sense if you need to do searches in that code. In other cases, you should only store a file's URL.
If you want to store a text file, use the TEXT datatype. Since it is source code, you may consider using the ASCII character set to save space - but be aware that this will cause character set conversions during your queries, which affects performance. Also, if it is ASCII you can use REGEXP for searches (that operator doesn't work with multi-byte charsets).
To load the file, if the file is on the same server as MySQL, you can use the LOAD_FILE() function within an INSERT.
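A minimal sketch of that, assuming the file is readable by the MySQL server, the account has the FILE privilege, and the path is allowed by secure_file_priv (paths and names here are only examples):
CREATE TABLE source_code (
    id       INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    filename VARCHAR(255),
    contents MEDIUMTEXT CHARACTER SET ascii
);
INSERT INTO source_code (filename, contents)
VALUES ('main.c', LOAD_FILE('/var/lib/mysql-files/main.c'));
If LOAD_FILE() returns NULL, a missing privilege or the secure_file_priv restriction is usually the reason.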

Importing a CSV file into mysql. (Specifically about create table command)

I have a text file full of values.
The first line is a list of column names, like this:
col_name_1, col_name_2, col_name_3 ......(600 columns)
and all the following columns have values like this:
1101,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,1101,1,3.86,65,0.46418,65,0.57151...
What is the best way to import this into mysql?
Specifically, how do I come up with the proper CREATE TABLE command so that the data will load properly? What is the best generic data type which would take in all the above values, like 1101 or 3.86 or 0.57151? I am not worried about the table being inefficient in terms of storage, as I need this for one-time usage.
I have tried some of the suggestions in other related questions, like using phpMyAdmin (it crashes, I am guessing due to the large amount of data).
Please help!
Data in CSV files is not normalized; those 600 columns could probably be split across a couple of related tables, which is the recommended way of treating such data. You can then use fgetcsv() to read the CSV file line by line in PHP.
To make MySQL process the CSV, you can create a 600 column table (I think) and issue a LOAD DATA LOCAL INFILE statement (or perhaps use mysqlimport, not sure about that).
The most generic data type would have to be VARCHAR or TEXT for bigger values, but of course you would lose semantics when used on numbers, dates, etc.
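A minimal sketch of that route, with placeholder column names and file path, and assuming local_infile is enabled on both client and server:
CREATE TABLE raw_import (
    col_name_1 VARCHAR(255),
    col_name_2 VARCHAR(255),
    col_name_3 VARCHAR(255)
    -- ... repeat up to col_name_600; a small script can generate this list from the header line
);
LOAD DATA LOCAL INFILE '/path/to/data.csv'
INTO TABLE raw_import
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 LINES;  -- skip the header row containing the column names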
I noticed that you included the phpmyadmin tag.
phpMyAdmin can handle this out of the box. It will "magically" decide which type to use for each column, and will CREATE the table for you as well as INSERT all the data. There is no need to worry about LOAD DATA LOCAL INFILE, though that method can be safer if you want to know exactly what's going on without relying on phpMyAdmin's magic tooling.
Try convertcsvtomysql: just upload your CSV file, and then you can download and/or copy the MySQL statements to create the table and insert the rows.