I'm using Delphi XE2 and working on a MySQL database project.
I have a MySQL database with a table consisting of four columns.
I have two sample rows in this table.
I'm using a TDatabase, TQuery, TDataSource and a TDBGrid to connect to the database with the following code:
// wire the grid to the data source and the data source to the query
DBGrid1.DataSource := DataSource1;
DataSource1.DataSet := Query1;

// connect to the MySQL ODBC alias through the BDE
Database1.DatabaseName := 'personDatabase';
Database1.AliasName := 'mysqlodbc';
Database1.LoginPrompt := False;
Database1.Connected := True;

// run the query against that connection
Query1.DatabaseName := Database1.DatabaseName;
Query1.SQL.Clear;
Query1.SQL.Add('select * from persondb.person;');
Query1.Active := True;
The problem is that when I select all the columns and rows (with select * from persondb.person) and show them in the DBGrid, the varchar columns are not displayed and I only get the two int columns.
It's as if the varchar columns are not showable. For example, select fname from persondb.person results in two single-celled rows in the DBGrid. The result is the same with select fname, lname from persondb.person, which isn't even logical (I expected a 2×2 empty table).
I also changed the character set of the database from utf8 to latin1, thinking the problem might be there, but no luck there either.
I googled for hours and found nothing even similar to my problem, but I learned that the normally expected behavior is the DBGrid showing varchar fields as (memo), which everyone else is trying to get rid of.
Any help is appreciated.
This happened to me a few days ago. Using dbExpress or an ADO connection instead of the BDE is not a good idea, because it takes more time to learn and to change the code. I use Oracle (maybe a similar case to MySQL): you should check your database structure.
In Oracle 11, the DBGrid cannot display columns declared as VARCHAR2 with CHAR length semantics; it only displays data declared with the BYTE unit. In Oracle 9i everything is fine.
So the solution is to change the unit from CHAR to BYTE. Here is the SQL statement for Oracle:
ALTER TABLE USERX.TABLENAME MODIFY (COLUMNNAME VARCHAR2(50 BYTE));
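If you want to confirm first which columns use CHAR semantics, the data dictionary records it; a quick check could look like this (USERX and TABLENAME are the same placeholders as above):
SELECT column_name, data_type, char_used
FROM   all_tab_columns
WHERE  owner = 'USERX'
  AND  table_name = 'TABLENAME'
  AND  data_type = 'VARCHAR2';
CHAR_USED is 'C' for CHAR semantics and 'B' for BYTE semantics.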
I'm trying to change how we do some transformations on our tables in RDS MySQL. The table has 20 million records and 200 columns. We have a pipeline, run monthly, where we download the table to an EC2 instance, transform it with Python, and re-upload it.
After I presented dbt, my boss wants to see it working because of the benefits: everything stays in SQL (I am the only Python person in our small 20-person company), and we get documentation, automated tests and version control [all of which is really needed at the moment]. I made it happen: I wrote SQL in dbt that produces the same results as the Python script and runs directly against the MySQL database using the https://pypi.org/project/dbt-mysql/ adapter.
There are some problems, and the one I think will help me most to solve first is about booleans in MySQL. I already know the whole story about boolean, tinyint(1), etc. But all columns intended to be "boolean" end up in the tables as INT, and I want them as tinyint, because INT takes four times the space it should.
Edit: added more information thanks to feedback
My raw table comes with all columns as strings, and I'm trying to cast them to the correct types. Since this one should be boolean, I expected it to be converted to tinyint(1). When I create a table via pandas and there is a bool column, the resulting table column is tinyint(1). But when I try to do something like this in SQL, the column becomes int.
The code is really just this:
SELECT IF(myStrColumn = '1', TRUE, FALSE)
FROM myRawTable
The resulting column comes out as int, but I wanted it to be tinyint(1) to represent a boolean.
tinyint is not a valid type to pass to CAST, as per the documentation (https://dev.mysql.com/doc/refman/8.0/en/cast-functions.html#function_cast), so that doesn't work.
After looking at the MySQL docs, I think you have two options:
Create a new, custom table materialization that allows you to leverage the MySQL syntax:
create table my_table (my_col tinyint) as select ...
Add a post-hook that narrows the column after you've created the table:
{{ config(
    materialized="table",
    post_hook="alter table {{ this }} modify my_col tinyint"
) }}
For #1, there is a guide to creating materializations in the dbt docs, but it is a complex and advanced topic. I think the dbt-mysql adapter uses the vanilla/default table materialization in the global project. You may want to check out the MySQL incremental materialization macro, which is here.
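Putting option #2 together, a minimal sketch of the whole model file could look like this (my_bool_col is a hypothetical column alias; myStrColumn and myRawTable are the names from the question):
{{ config(
    materialized="table",
    post_hook="alter table {{ this }} modify my_bool_col tinyint(1)"
) }}

select
    if(myStrColumn = '1', true, false) as my_bool_col
from myRawTable
The post-hook runs right after the table is built, so the narrowing from int to tinyint(1) happens on every dbt run.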
Getting ready to get rid of a MySQL database and switch to Oracle. I am using Oracle SQL Developer. I need to get the records from a MySQL table and populate its corresponding table in Oracle.
I was able to establish a database connection in SQL Developer to the MySQL database. I checked the connection by doing a simple SELECT * from the table to make sure it returned all the records.
However, the new Oracle table has quite a few changes: the column names in the MySQL table all had a "tn" prefix, e.g. tnStore, tnConfigDate, etc. The Oracle table gets rid of that prefix. That is issue #1.
There will also be several new columns in the new table; that data will be added later from elsewhere. And the columns will not be in the same order as in the MySQL table.
How do I write a SELECT INTO statement in SQL Developer to populate the Oracle table with the data from the MySQL table, correlating the corresponding columns while leaving the new fields blank for now?
Here is a way to do it programmatically, though I am not sure how to do it in a single query.
I think we need to use the Oracle data dictionary table all_tab_columns (and whatever the similar table is in MySQL):
Get the column names from the MySQL table, strip off the "tn" prefix, and compare them with the column names of the Oracle table (possibly using a cursor).
For each match, build up the SELECT INTO (in Oracle, an INSERT ... SELECT) statement, leaving the new fields blank, possibly in a loop.
Once this is done for all columns, execute that statement. A rough sketch of this approach follows below.
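A rough PL/SQL sketch of that idea, assuming the MySQL table has already been copied into Oracle as-is and both tables live in the same schema (all object names below are placeholders):
DECLARE
  l_src_cols  VARCHAR2(4000);
  l_dst_cols  VARCHAR2(4000);
BEGIN
  -- match old "TN"-prefixed columns to new columns by name, minus the prefix
  FOR c IN (SELECT o.column_name AS src_col, n.column_name AS dst_col
              FROM all_tab_columns o
              JOIN all_tab_columns n
                ON n.column_name = SUBSTR(o.column_name, 3)
             WHERE o.owner = 'USERX' AND o.table_name = 'OLD_MYSQL_COPY'
               AND n.owner = 'USERX' AND n.table_name = 'NEW_TABLE')
  LOOP
    l_src_cols := l_src_cols || c.src_col || ',';
    l_dst_cols := l_dst_cols || c.dst_col || ',';
  END LOOP;
  -- columns of NEW_TABLE that have no match simply stay NULL
  EXECUTE IMMEDIATE
    'INSERT INTO userx.new_table (' || RTRIM(l_dst_cols, ',') || ') '
    || 'SELECT ' || RTRIM(l_src_cols, ',') || ' FROM userx.old_mysql_copy';
END;
/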
Consider migrating the existing MySQL tables as-is straight to Oracle using SQL Developer, then move/refactor the data into your new tables with the desired column definitions using INSERT as SELECTs.
This can be considerably faster, plus once the 'raw' data is there, you can do your work over and over again until you get it just right.
Note you can also simply drag and drop a MySQL table from its connection onto an existing Oracle database connection to move the table over (DDL, data, or both).
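For the column-mapping step, the INSERT as SELECT would look something like this (tnStore and tnConfigDate are from the question; the new table and column names are hypothetical, and any new columns not listed here simply stay NULL for now):
INSERT INTO new_table (store, config_date)
SELECT tnStore, tnConfigDate
FROM   old_mysql_copy;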
I'm trying to find a way to read (select) the data from a table that has a varchar column. The data is in the Tibetan language. When I query the data I get ?s of different lengths. Surprisingly, when I use a predicate to filter on the string, it filters successfully, but the output is still ???. This means SQL Server understands the filter criteria but just isn't able to show me the output. I'm really not sure what I am missing here.
Let me share the sample here:
--create this table in database with collation set to
--Latin1_General_100_CI_AS or SQL_Latin1_General_100_CI_AS
CREATE TABLE Locations
(Place varchar(64) NOT NULL);
GO
INSERT INTO Locations (Place) VALUES ('ཡུན་རིང་འཇལ་རྒྱུ་མ་བྱུང་།')
INSERT INTO Locations (Place) VALUES ('ཁྱེད་རང་ལུང་པ་ག་ནས་ཡིམ།')
INSERT INTO Locations (Place) VALUES ('ཤོགས་པ་བདེ་ལེགས།')
GO
SELECT place collate Chinese_PRC_CI_AI from locations
where place=N'ཤོགས་པ་བདེ་ལེགས།'
This shows me nothing. But the query below shows the output as ?????????
The only difference is that I am not using the N prefix.
SELECT place collate Chinese_PRC_CI_AI from locations
where place='ཤོགས་པ་བདེ་ལེགས།'
I have inserted various Tibetan words and searched for them; I do get the correct search results, but the output is still ???????????.
Finally, it all works well when I use nvarchar as the datatype in the CREATE TABLE above.
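For completeness, the working variant would look roughly like this (Locations_nv is a hypothetical table name):
CREATE TABLE Locations_nv
(Place nvarchar(64) NOT NULL);
GO
INSERT INTO Locations_nv (Place) VALUES (N'ཤོགས་པ་བདེ་ལེགས།')
GO
SELECT Place FROM Locations_nv WHERE Place = N'ཤོགས་པ་བདེ་ལེགས།'
Here both the column and the literal are Unicode, so the text is stored and returned intact instead of being reduced to ?.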
This is SQL Server 2008 SP4 on Windows Server 2008 R2 with the latest SP.
I have an insert statement pulling data over a db link:
insert into table (a,b,c)
select a,b,c from table@mysqldb;
Here column c is a long type in MySQL, and in Oracle it is a varchar.
I tried casting to varchar, substr(c, 1, 2400), UTL_RAW.CAST_TO_VARCHAR2 and dbms_lob.substr; none of them work on the Oracle side.
I also tried the cast on the MySQL read side, to no avail.
Can someone tell me how to do this? I am trying to convert long to varchar here. We cannot load it as a CLOB, because this table is used in many places and we cannot change things in so many places.
Thanks.
I had to convert the target column to a CLOB to handle this scenario.
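A sketch of that conversion, since Oracle will not change a VARCHAR2 column to a CLOB in place (target_table is a placeholder for the real table):
ALTER TABLE target_table ADD (c_clob CLOB);
UPDATE target_table SET c_clob = c;
ALTER TABLE target_table DROP COLUMN c;
ALTER TABLE target_table RENAME COLUMN c_clob TO c;
After that, the insert over the db link from the question should go through without the long-to-varchar conversion problem.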
I have a problem with an MS Access 2007 table connected via ODBC to a MySQL server (not Microsoft SQL Server).
If the unique identifier in the MySQL table is BIGINT, all cell contents are displayed like this: "#Deleted".
I have found this article:
"#Deleted" errors with linked ODBC tables (at support.microsoft.com)
and it says:
The following are some strategies that you can use to avoid this behavior:
Avoid entering records that are exactly the same except for the unique index.
Avoid an update that triggers updates of both the unique index and another field.
Do not use a Float field as a unique index or as part of a unique index because of the inherent rounding problems of this data type.
Do all the updates and inserts by using SQL pass-through queries so that you know exactly what is sent to the ODBC data source.
Retrieve records with an SQL pass-through query. An SQL pass-through query is not updateable, and therefore does not cause "#Deleted" errors.
Avoid storing Null values within any field making up the unique index of your linked ODBC table.
but I don't have any of these things "to avoid". My problem is with BIGINT. To make sure, I created two tables, one with an INT id and one with a BIGINT id, and that confirmed it.
I can't change BIGINT to INT in my production database.
Is there any way to fix this?
I'm using Access 2007, mysql-connector-odbc-3.51.30-winx64, and MySQL server 5.1.73.
You can try basing the form on an Access query, and converting the BIGINT to an INT using CInt() in the query. This happens before the form processing. Depending on your circumstance, you may need to convert to a string (CStr()) in the Query, and then manually handle validating a user has entered a number using IsNumeric. The idea is to trick the form into not trying to interpret the datatype, which seems to be your problem.
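A minimal sketch of the kind of saved Access query this describes (tblLinkedMySql and the column names are hypothetical); note that CInt() only covers values up to 32,767, so for larger ids CLng() or CStr() may be the safer choice:
SELECT CStr(t.id) AS id_text, t.name
FROM tblLinkedMySql AS t;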
Access 2016 now supports BigInt: https://blogs.office.com/2017/03/06/new-in-access-2016-large-number-bigint-support/
It's 2019, and with the latest MySQL ODBC driver from Oracle (v 8.0.17) and Access 365 (v 16.0.11904), the problem still occurs.
When the ODBC option "Treat BIGINT columns as INT columns" is ticked and BigInt support is enabled in the Access options, linked tables whose primary-key #id column is a bigint show every cell as #Deleted. Ruby creates these ids as bigint by default, so we are loath to fiddle with that.
If we disable the above two options, Access treats the bigint #id column as a string and shows the data, but then the field type is no longer bigint or int.
This is quite pathetic, since this problem is almost 10 years old now.
The MySQL driver has an option to convert BIGINT values to INT. Would this solve the issue for you?
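If it helps, that setting can also be applied in a DSN-less connection string instead of the DSN dialog; something along these lines (server and credentials are placeholders, and the NO_BIGINT keyword is from memory, so double-check it against your driver version's documentation):
Driver={MySQL ODBC 8.0 ANSI Driver};Server=myserver;Database=mydb;User=myuser;Password=mypass;NO_BIGINT=1;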