I'm trying to find a way to read (SELECT) data from a table that has a varchar column. The data is in the Tibetan language. When I query the data I get strings of ?s of different lengths. Surprisingly, when I use a predicate to filter on the string, the filter succeeds, but the output is still ?s. This means SQL Server is able to understand the filter criteria but is just not able to show me the output. I'm really not sure what I'm missing here.
Let me share the sample here:
--create this table in a database with collation set to
--Latin1_General_100_CI_AS or SQL_Latin1_General_CP1_CI_AS
CREATE TABLE Locations
(Place varchar(64) NOT NULL);
GO
INSERT INTO Locations (Place) VALUES ('ཡུན་རིང་འཇལ་རྒྱུ་མ་བྱུང་།')
INSERT INTO Locations (Place) VALUES ('ཁྱེད་རང་ལུང་པ་ག་ནས་ཡིམ།')
INSERT INTO Locations (Place) VALUES ('ཤོགས་པ་བདེ་ལེགས།')
GO
GO
SELECT Place COLLATE Chinese_PRC_CI_AI FROM Locations
WHERE Place = N'ཤོགས་པ་བདེ་ལེགས།'
This shows me nothing. But the query below shows the output as ?????????.
The only difference is that I am not using the N prefix.
SELECT place collate Chinese_PRC_CI_AI from locations
where place='ཤོགས་པ་བདེ་ལེགས།'
I have inserted various Tibetan words and searched for them; I do get the correct search results, but the output is still ???????????.
Finally, it all works well when I use nvarchar as the data type in the CREATE TABLE above.
This is SQL Server 2008 SP4 on Windows Server 2008 R2 with the latest SP.
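For completeness, the explanation of the behavior above: a varchar column stores text in the code page of its collation, and any character outside that code page is silently replaced with '?' at insert time. Filtering with a plain varchar literal still "works" because the literal undergoes the same '?' substitution before the comparison, while the N'...' literal keeps the real characters and matches nothing. The working nvarchar version can be sketched with one of the sample rows:

```sql
--nvarchar stores UTF-16, so the Tibetan text survives regardless of collation
CREATE TABLE Locations
(Place nvarchar(64) NOT NULL);
GO
--the N prefix makes the literal Unicode; without it the text is damaged
--before it ever reaches the column
INSERT INTO Locations (Place) VALUES (N'ཤོགས་པ་བདེ་ལེགས།');
GO
SELECT Place FROM Locations WHERE Place = N'ཤོགས་པ་བདེ་ལེགས།';
```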
I'm getting ready to get rid of a MySQL database and switch to Oracle. I am using Oracle SQL Developer. I need to get the records from a MySQL table and populate its corresponding table in Oracle.
I was able to establish a Database connection in SQL Developer to the MySQL database. I checked the connection by doing a simple SELECT * from the table to make sure it returned all the records.
However, the new Oracle table has quite a few changes: the column names in the MySQL table all had a "tn" prefix, i.e. tnStore, tnConfigDate, etc. The Oracle table gets rid of that prefix. That is issue #1.
There will also be several new columns in the new table; that data will be added later from elsewhere. And the columns will not be in the same order as in the MySQL table.
How do I write a SELECT INTO statement in SQL Developer to populate the Oracle table with the data from the MySQL table, correlating the corresponding columns while leaving the new fields blank for now?
Here is a programmatic approach, though I am not sure how to do it in a single query.
I expect we need the data dictionary tables: all_tab_columns in Oracle, and its counterpart in MySQL (information_schema.columns).
Get the column names from the MySQL table, taking off the "tn" prefix, and
compare each name with the Oracle table's columns (possibly using a cursor).
For each match, build up the column list of an INSERT ... SELECT statement, leaving the
new fields out, possibly in a loop.
Once done for all columns, execute that statement.
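The steps above can be sketched in PL/SQL. This is only a sketch, and it assumes (hypothetically) that the MySQL table has already been copied into Oracle as TN_STORE and that the new table is STORE:

```sql
DECLARE
  v_cols VARCHAR2(4000);
BEGIN
  --match each target column to the staging column carrying the "tn" prefix
  FOR r IN (SELECT t.column_name
              FROM all_tab_columns t
              JOIN all_tab_columns s
                ON s.table_name  = 'TN_STORE'
               AND s.column_name = 'TN' || t.column_name
             WHERE t.table_name  = 'STORE')
  LOOP
    v_cols := v_cols || ',' || r.column_name;
  END LOOP;
  v_cols := LTRIM(v_cols, ',');
  --new columns are simply omitted from the list, so they stay NULL for now
  EXECUTE IMMEDIATE
    'INSERT INTO store (' || v_cols || ') SELECT tn' ||
    REPLACE(v_cols, ',', ',tn') || ' FROM tn_store';
END;
/
```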
Consider migrating the existing MySQL tables as-is straight to Oracle using SQL Developer. Then move/refactor the data into your new tables with the desired column definitions using INSERT as SELECTs.
This could be considerably faster, plus once the 'raw' data is there, you can do your work over and over again until you get it just right.
Note you can also simply drag and drop to move a MySQL table from its connection to an existing Oracle database connection to move the table over (DDL, data, or both).
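An INSERT as SELECT over the migrated staging copy might look like this (the table and column names here are hypothetical, not from the question):

```sql
--TN_STORE is the MySQL table migrated as-is; STORE is the new table
INSERT INTO store (store_name, config_date)
SELECT tnStoreName, tnConfigDate
  FROM tn_store;
--columns that are new in STORE are simply left out and stay NULL
```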
I have an INSERT statement pulling data from a DB link:
insert into table (a, b, c)
select a, b, c from table@mysqldb;
Here column c is a LONG type in MySQL, while in Oracle it is varchar.
I tried casting to varchar, substr(c, 1, 2400), UTL_RAW.CAST_TO_VARCHAR2, and dbms_lob.substr.
None of them work on the Oracle side.
I also tried a cast on the MySQL read side; no use.
Can someone tell me how to do this? I am trying to convert LONG to varchar here. We cannot load it as a CLOB, as this table is used in many places and we cannot change things in so many places.
Thanks.
I had to convert the target column to a CLOB to handle this scenario.
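A sketch of that fix, with hypothetical table and column names. Note that Oracle does not allow changing a VARCHAR2 column to CLOB in place, so the usual route is add, copy, swap:

```sql
--VARCHAR2 cannot be modified straight to CLOB, so add a new column first
ALTER TABLE my_table ADD (c_new CLOB);
UPDATE my_table SET c_new = c;
ALTER TABLE my_table DROP COLUMN c;
ALTER TABLE my_table RENAME COLUMN c_new TO c;
--with the column now a CLOB, the pull over the DB link can load the long values
INSERT INTO my_table (a, b, c)
SELECT a, b, c FROM remote_table@mysqldb;
```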
I have a problem migrating data from SQL Server to MySQL. I have nvarchar columns in SQL Server and am exporting them to a Unicode text file. But when I import the column into a utf8 table in MySQL, I get an error for a duplicate value: MySQL sees no difference between 'Kaneko, Shûsuke' and 'Kaneko, Shusuke'. I am trying to get these values into a unique column.
What's wrong? Must I use another charset in MySQL?
I also tried converting the text file to UTF-8 before importing into MySQL, but I still get the same error.
It seems the problem is in your MySQL table creation. First run SHOW CREATE TABLE at the mysql prompt and look at the table structure: have you used the right charset and collation? You can read up on this in the MySQL docs.
Many times a collation is indeed not only case insensitive but also partly accent insensitive, so that á = a (though, as Joni Salonen points out, ñ = n is incorrect!).
So we can use a binary collation, but it has its own drawbacks. A binary collation compares your string exactly as strcmp() in C would: the strings differ if there is any difference at all, be it just case or a diacritic. The downside is that the sort order is not natural.
An example of an unnatural sort order (as "binary" produces): A, B, a, b. A natural sort order would be, e.g., A, a, B, b (small and capital variants of the same letter are sorted next to each other).
The practical advantage of binary collation is its speed, as string comparison is very simple and fast. In general, indexes on binary columns might not produce the expected sort order, but for exact matches they are useful.
Use a binary collation for the specific column (possibly your best bet).
For example:
drop table cc;
CREATE TABLE cc ( c CHAR(100) PRIMARY KEY ) DEFAULT CHARACTER SET utf8 COLLATE utf8_bin;
insert into cc values ( 'Kaneko, Shûsuke' );
insert into cc values ( 'Kaneko, Shusuke' );
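With utf8_bin in place, both inserts above should succeed, since the two names differ at the byte level; a quick check:

```sql
--utf8_bin compares raw bytes, so 'û' and 'u' are distinct values
SELECT COUNT(*) FROM cc;  --expect 2
```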
I have the following WHERE clause:
WHERE
(
target.sender_name <> source.sender_name
)
target aliases a table in Microsoft SQL Server, while source aliases an almost identical table on a MySQL server connected via a linked server.
Originally I had sender_name in both tables as a text field, trying to keep the two tables identical; however, when I tried to run the query containing the clause above, it spat out:
The data types text and text are incompatible in the not equal to operator.
Doing a bit of looking around, I learned that text in T-SQL is deprecated and that I should be using varchar(max); doing so, however, produced this error:
The data types varchar(max) and text are incompatible in the not equal to operator.
Looking around, it seems an equivalent would be VARCHAR(65535); however, using that would cause problems in a table which has 4 text fields, and I believe I would just get the same error for the other 3 fields in the same clause.
So I am wondering if there is an equivalent to varchar(max) in MySQL which I can use in my clause against a varchar(max) field in T-SQL.
NOTE: the query containing the clause runs in T-SQL; it's just the table on the linked server that is MySQL.
EDIT: the clause belongs to an UPDATE query; it checks every field and updates any record in which the data differs. It's only ever used to sync up the two tables, so that I can localize the work on the servers rather than using the linked server for things like INNER JOINs.
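One possible workaround (not from the thread, just a sketch): cast both sides of each comparison to varchar(max) inside the clause, since the comparison operators that reject text do accept varchar(max):

```sql
WHERE
(
    CAST(target.sender_name AS varchar(max)) <> CAST(source.sender_name AS varchar(max))
)
```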
I'm using delphi XE2 and working on a mysql database project.
I have a mysql database which has a table consisting of four columns.
I have two sample rows in this table.
I'm using a TDatabase, TQuery, TDataSource and a TDBGrid to connect to the database, with the following source code:
dbgrid1.DataSource :=DataSource1 ;
datasource1.DataSet :=Query1 ;
database1.DatabaseName :='personDatabase';
database1.AliasName :='mysqlodbc';
database1.LoginPrompt :=false;
database1.Connected :=true;
query1.DatabaseName :=database1.DatabaseName;
query1.SQL.Clear;
query1.SQL.Add('select * from persondb.person;');
query1.Active :=true;
The problem is that when I try to select all the columns and rows (with select * from persondb.person) and show them in a DBGrid, the varchar columns are not displayed and I only get the two int columns.
It's as if the varchar columns are not showable; for example, the SQL select fname from persondb.person results in two single-celled rows in the DBGrid. The result is the same with select fname, lname from persondb.person, which is not even logical (I expected a 2x2 empty table).
I also changed the character set of the database from utf8 to latin1, thinking the problem might be there, but no luck there either.
I googled for hours and found nothing even similar to my problem, but I learned that the normal behavior to expect is the DBGrid showing varchar fields as (memo), which everyone else is trying to overcome.
so any help is appreciated.
It happened to me a few days ago. Using a dbExpress or ADO connection instead of the BDE was not a good option for me, because it needs more time to learn and to change the code. I use Oracle (maybe a similar case to MySQL). You should check your database structure.
In Oracle 11, the DBGrid cannot display VARCHAR2 columns declared with the CHAR unit; it only displays data declared with the BYTE unit. In Oracle 9i, everything is fine.
So the solution is to change the unit from CHAR to BYTE. Here is the SQL statement for Oracle:
ALTER TABLE USERX.TABLENAME MODIFY (COLUMNNAME VARCHAR2(50 BYTE));