I am migrating a database from SQL Server 2008 to Teradata - sql-server-2008

I am migrating a database from SQL Server 2008 to Teradata
and I am facing a problem:
In SQL Server, the DDL of a table defines a column as follows:
[rowguid] uniqueidentifier ROWGUIDCOL NOT NULL CONSTRAINT [DF_Address_rowguid] DEFAULT (NEWID())
This column uses the NEWID() function to generate and insert a random GUID value into the [rowguid] column if the user doesn't provide any input.
There is no similar function in Teradata to generate this value.
What can be used instead of SQL Server's NEWID() function when creating similar table DDLs for Teradata?

There is no native equivalent for a GUID/UUID in Teradata. Teradata does offer an IDENTITY column to provide an auto-incrementing value. The IDENTITY column does not come without its own nuances, and I would encourage you to read Chapter 5 - Create Table in the SQL Data Definition Language - Detailed Topics manual, which has a section explaining identity columns.
However, as part of your migration from SQL Server to Teradata, you will need to understand how data is distributed in Teradata by means of the table's primary index. This may require that you review your existing data model and re-engineer how it is physically implemented in Teradata.
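For illustration, a rough Teradata counterpart of such a table might look like the sketch below. The table and column names here are hypothetical, and the GENERATED options shown are just one possible configuration; note that an identity value is a surrogate integer, not a GUID, so rows can no longer be correlated across systems by a globally unique value.

```sql
-- Hypothetical Teradata DDL sketch: the uniqueidentifier default is
-- replaced by an IDENTITY column; tune the GENERATED options as needed.
CREATE TABLE Address (
    AddressID INTEGER GENERATED ALWAYS AS IDENTITY
        (START WITH 1 INCREMENT BY 1 NO CYCLE),
    AddressLine1 VARCHAR(60) NOT NULL,
    ModifiedDate TIMESTAMP(0) NOT NULL
) PRIMARY INDEX (AddressID);
```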

Related

Compare SQL Server database with MySQL database

I have migrated my existing SQL Server database to a MySQL database using the MySQL Workbench Migration Wizard. Because these are two different database servers, I want to ensure there is no data loss, and that stored procedures, triggers, and views are all intact. I tried using the MySQL Workbench Compare Schema wizard, but that only works for two MySQL databases. Please suggest a way to achieve this.
First, you should compare the database schemas between SQL Server and MySQL to see if there are differences.
I don't think any fields will be missing, but the schema may need data type adjustments, index adjustments, etc.
Second, once you have fixed the database schema, you should verify the imported data, including:
the total row count for each table
the row contents
For both, the best approach is to write a verification script (PHP, Node.js, Python, etc.) that lists all tables in SQL Server and, for each table, checks the row count, the identity counter, and then the data itself.
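As a minimal sketch of the row-count part of such a script in Python (the function name is my own, and the connection objects are assumed to be DB-API connections, e.g. from pyodbc for SQL Server and mysql-connector-python for MySQL):

```python
# Sketch of a row-count verification step between two databases.
# Both arguments are assumed to be Python DB-API (PEP 249) connections.

def compare_row_counts(source_conn, target_conn, tables):
    """Return {table: (source_count, target_count)} for every table
    whose row counts differ between the two databases."""
    mismatches = {}
    src_cur = source_conn.cursor()
    tgt_cur = target_conn.cursor()
    for table in tables:
        src_cur.execute(f"SELECT COUNT(*) FROM {table}")
        tgt_cur.execute(f"SELECT COUNT(*) FROM {table}")
        src, tgt = src_cur.fetchone()[0], tgt_cur.fetchone()[0]
        if src != tgt:
            mismatches[table] = (src, tgt)
    return mismatches
```

An empty result means every listed table has the same number of rows on both sides; comparing the row contents themselves would be a second, more expensive pass.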
For moving or copying the MS SQL database table's schema into MySQL, you must map the data types, find the NULL constraints, and determine which field is set as the PRIMARY KEY.
This procedure does not support conversion of indexes, foreign keys, identity columns, unique or other table constraints, or character sets.
MySQL supports all the important MS SQL data types. However, there are some SQL Server data types that do not have an exact MySQL match. Some of the major data type mappings you'll need are as follows:
SQL Server         -> MySQL
VARCHAR(max)       -> LONGTEXT
SQL_VARIANT        -> BLOB
IDENTITY           -> AUTO_INCREMENT
NTEXT              -> TEXT CHARACTER SET UTF8
SMALLDATETIME      -> DATETIME
DATETIMEOFFSET     -> TIMESTAMP
MONEY              -> DECIMAL(19,4)
UNIQUEIDENTIFIER   -> BINARY(16)
SYSNAME            -> CHAR(256)
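If you script the schema conversion yourself, a mapping like this is easy to capture in a lookup table. A small illustrative sketch in Python (the dictionary and function names are my own):

```python
# Illustrative SQL Server -> MySQL type lookup for a conversion script.
TYPE_MAP = {
    "VARCHAR(MAX)": "LONGTEXT",
    "SQL_VARIANT": "BLOB",
    "IDENTITY": "AUTO_INCREMENT",
    "NTEXT": "TEXT CHARACTER SET UTF8",
    "SMALLDATETIME": "DATETIME",
    "DATETIMEOFFSET": "TIMESTAMP",
    "MONEY": "DECIMAL(19,4)",
    "UNIQUEIDENTIFIER": "BINARY(16)",
    "SYSNAME": "CHAR(256)",
}

def to_mysql_type(mssql_type):
    """Return the MySQL equivalent of a SQL Server type name, or the
    name unchanged when no special mapping is needed (e.g. INT)."""
    return TYPE_MAP.get(mssql_type.upper(), mssql_type)
```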
Organizations may develop the need to migrate from MS SQL server to MySQL because of its rich feature-set, cross-platform and open source availability, and lower cost.
While migration from one database to another can be performed manually, it can be an extremely time-consuming and error-prone process.
A better alternative is to use specialized database converter software like Stellar Converter for Database, which is specially designed to help DBAs and developers automate the process of converting a database from one format to another. The software converts table records and attributes from an MS SQL database to MySQL quickly, while preserving database integrity.

Microsoft Access DDL for new bigint fields

Access 2016 added support for "Large Number" fields, a 64-bit integral type. How can I use CREATE TABLE or ALTER TABLE DDL statements to create fields of this type?
This is not possible as of January 2019. Using the Query Parameters dialog in a design screen, it's possible to get an exhaustive list of Access SQL types and (by using SQL view) their DDL equivalents:
There is no Access SQL data type available which corresponds to the field type Large Number.

SELECT INTO Oracle SQL table from MySQL table with SQL Developer

Getting ready to get rid of a MySQL database and switch to Oracle SQL. I am using Oracle SQL Developer. Need to get the records from a MySQL table and populate its corresponding table in SQL.
I was able to establish a Database connection in SQL Developer to the MySQL database. I checked the connection by doing a simple SELECT * from the table to make sure it returned all the records.
However, the new Oracle SQL table has quite a few changes: the column names in the MySQL table all had a "tn" prefix, e.g. tnStore, tnConfigDate, etc. The Oracle table gets rid of that prefix. That is issue #1.
There will also be several new columns in the new table. That data will be added later from elsewhere, and the columns will not be in the same order as in the MySQL table.
How do I write a statement in SQL Developer to populate the Oracle table with the data from the MySQL table, correlating the corresponding columns while leaving the new fields blank for now?
Here is a programmatic approach, though I am not sure how to do it in a single query. You will need the data dictionary tables: in Oracle that is all_tab_columns, and MySQL has a similar table (information_schema.columns).
Get the column names from the MySQL table, stripping the "tn" prefix, and compare them with the column names of the Oracle table (possibly using a cursor).
For each match, build up the column lists of an INSERT ... SELECT statement, leaving the new fields blank, possibly in a loop.
Once done for all columns, execute that statement.
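The statement-building step might look like the following Python sketch. The table and column names come from the question; the helper function itself is my own invention:

```python
def build_insert_select(src_table, dst_table, src_columns, dst_columns,
                        prefix="tn"):
    """Build an INSERT ... SELECT that maps prefixed source columns
    (e.g. tnStore) onto unprefixed destination columns (e.g. Store).
    Destination columns with no matching source are simply omitted
    from the statement, so they stay NULL."""
    dst_set = {c.lower() for c in dst_columns}
    pairs = []
    for src in src_columns:
        stripped = src[len(prefix):] if src.startswith(prefix) else src
        if stripped.lower() in dst_set:
            pairs.append((stripped, src))
    dst_list = ", ".join(d for d, _ in pairs)
    src_list = ", ".join(s for _, s in pairs)
    return (f"INSERT INTO {dst_table} ({dst_list}) "
            f"SELECT {src_list} FROM {src_table}")
```

The generated statement can then be run in the Oracle session (e.g. over a database link to MySQL, or against a staging copy of the MySQL table migrated as-is).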
Consider migrating the existing MySQL tables as-is straight to Oracle using SQL Developer. Then move/refactor the data around to your new-tables with desired column definitions using INSERT as SELECTs.
Could be considerably faster, plus once the 'raw' data is there, you can do your work over and over again, until you get it just right.
Note you can also simply drag-and-drop a MySQL table from its connection to an existing Oracle database connection to move the table over (DDL, data, or both).

Data cells "#Deleted" in Access - ODBC, MySQL and BIGINT unique ID

I have a problem with an MS Access 2007 table connected via ODBC to a MySQL server (not Microsoft SQL Server).
If the unique identifier in the MySQL table is BIGINT, the content of every cell is displayed like this: "#Deleted".
I have found this article:
"#Deleted" errors with linked ODBC tables (at support.microsoft.com)
and it says:
The following are some strategies that you can use to avoid this behavior:
Avoid entering records that are exactly the same except for the unique index.
Avoid an update that triggers updates of both the unique index and another field.
Do not use a Float field as a unique index or as part of a unique index because of the inherent rounding problems of this data type.
Do all the updates and inserts by using SQL pass-through queries so that you know exactly what is sent to the ODBC data source.
Retrieve records with an SQL pass-through query. An SQL pass-through query is not updateable, and therefore does not cause "#Deleted" errors.
Avoid storing Null values within any field making up the unique index of your linked ODBC table.
but I don't have any of these things to avoid. My problem is with BIGINT. To confirm this, I created two tables, one with an INT id and one with a BIGINT id, and the BIGINT one indeed shows the problem.
I can't change BIGINT to INT in my production database.
Is there any way to fix this?
I'm using Access 2007, mysql-connector-odbc-3.51.30-winx64, and MySQL server 5.1.73.
You can try basing the form on an Access query, and converting the BIGINT to an INT using CInt() in the query. This happens before the form processing. Depending on your circumstance, you may need to convert to a string (CStr()) in the Query, and then manually handle validating a user has entered a number using IsNumeric. The idea is to trick the form into not trying to interpret the datatype, which seems to be your problem.
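As an illustrative sketch of such a query (the table and column names here are hypothetical, assuming a linked table tblOrders with a BIGINT id column):

```sql
-- Hypothetical Access query over a linked MySQL table: the BIGINT key
-- is converted to text before the form ever sees it.
SELECT CStr(t.id) AS id_text, t.customer, t.total
FROM tblOrders AS t;
```

The form is then based on this query instead of the linked table itself.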
Access 2016 now supports BigInt: https://blogs.office.com/2017/03/06/new-in-access-2016-large-number-bigint-support/
It's 2019, and with the latest ODBC driver from Oracle (v 8.0.17) and Access 365 (v 16.0.11904), the problem still occurs.
When the ODBC option "Treat BIGINT columns as INT columns" is ticked and BigInt support is enabled in the Access options, linked tables with BIGINT id columns (the primary key) show as #Deleted. Ruby creates these by default, so we are loath to fiddle with that.
If we disable the above two options, Access thinks the BIGINT id column is a string and shows the data, but then the field type is no longer bigint or int.
This is quite pathetic, since this problem is almost 10 years old now.
The MySQL driver has an option to convert BIGINT values to INT. Would this solve the issue for you?

How to load column names, data from a text file into a MySQL table?

I have a dataset with a lot of columns that I want to import into a MySQL database, so I want to be able to create tables without specifying the column headers by hand. Rather, I want to supply a filename containing the column labels to (presumably) the MySQL CREATE TABLE command. I'm using the standard MySQL Query Browser tools in Ubuntu, but I didn't see an option for this in the create table dialog, nor could I figure out how to write such a query from the CREATE TABLE documentation page. But there must be a way...
A CREATE TABLE statement includes more than just column names
Table name*
Column names*
Column data types*
Column constraints, like NOT NULL
Column options, like DEFAULT, character set
Table constraints, like PRIMARY KEY* and FOREIGN KEY
Indexes
Table options, like storage engine, default character set
* mandatory
You can't get all this just from a list of column names. You should write the CREATE TABLE statement yourself.
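That said, if all you want is a starting skeleton to edit by hand, a short script can turn a header line into a draft statement. A minimal sketch in Python, assuming a comma-separated header line and defaulting every column to TEXT (both assumptions are mine, not from the question):

```python
def draft_create_table(table_name, header_line, default_type="TEXT"):
    """Turn a line of column labels into a draft CREATE TABLE statement.
    Every column gets the same placeholder type; the types, keys, and
    constraints are still expected to be edited by hand afterwards."""
    columns = [c.strip() for c in header_line.split(",") if c.strip()]
    body = ",\n".join(f"  `{c}` {default_type}" for c in columns)
    return f"CREATE TABLE `{table_name}` (\n{body}\n);"
```

The output is deliberately incomplete: it only saves the typing of column names, and all the items marked mandatory above still need a human decision.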
Re your comment: many software development frameworks support ways to declare tables without using SQL DDL. E.g. Hibernate uses XML files; YAML is supported by Rails ActiveRecord, PHP Doctrine, and Perl's SQLFairy. There are probably other tools that use other formats such as JSON, but I don't know one offhand.
But eventually, all these "simplified" interfaces are no less complex to learn than SQL, while failing to represent exactly what SQL does. See also The Law of Leaky Abstractions.
Check out SQLFairy, because that tool might already convert from files to SQL in a way that can help you. And FWIW MySQL Query Browser (or under its current name, MySQL Workbench) can read SQL files. So you probably don't have to copy & paste manually.