How to import a data model from Excel - sql-server-2008

I was given an Excel (CSV) sheet containing database metadata.
Is there a simple way to import the CSV and create the tables from there?
Data is not part of this question. The CSV looks like this:
logical_table_name, physical_table_name, logical_column_name, physical_column_name, data_type, data_length
There are about 2,000 rows of metadata. I'm hoping I don't have to create the tables manually. Thanks.

I don't know of any direct import or creation tool. However, if I had to do this and couldn't find one, I would import the Excel file into a staging table (just a direct data import) and add a unique auto-ID column to the staging table to keep the rows in order.
Then I would use some queries to build the table and column creation commands from the raw data. Unless this was something I was setting up to do regularly, I would keep it dead simple and not try to get fancy: build an individual ADD column command for each column, build a CREATE TABLE command from the first row of each table, and sort them all by the order ID so each table comes before its columns. Then you should be able to just copy the script column, check the commands, and go.
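For illustration, here is a minimal T-SQL sketch of that query step. It assumes the CSV was loaded into a hypothetical staging table named MetaStaging with an IDENTITY column RowID that preserves file order, and that data_length is NULL for types that take no length; all names are illustrative, not from the question.

-- Hypothetical staging table:
-- MetaStaging(RowID INT IDENTITY, logical_table_name, physical_table_name,
--             logical_column_name, physical_column_name, data_type, data_length)
;WITH FirstRows AS (
    SELECT physical_table_name, MIN(RowID) AS FirstRowID
    FROM dbo.MetaStaging
    GROUP BY physical_table_name
)
-- A CREATE TABLE command from each table's first row...
SELECT s.RowID AS OrderID,
       'CREATE TABLE ' + QUOTENAME(s.physical_table_name) + ' ('
       + QUOTENAME(s.physical_column_name) + ' ' + s.data_type
       + CASE WHEN s.data_length IS NOT NULL
              THEN '(' + CAST(s.data_length AS varchar(10)) + ')'
              ELSE '' END
       + ');' AS Script
FROM dbo.MetaStaging AS s
JOIN FirstRows AS f
  ON f.physical_table_name = s.physical_table_name AND f.FirstRowID = s.RowID
UNION ALL
-- ...then one ALTER TABLE ... ADD per remaining column, in file order.
SELECT s.RowID,
       'ALTER TABLE ' + QUOTENAME(s.physical_table_name)
       + ' ADD ' + QUOTENAME(s.physical_column_name) + ' ' + s.data_type
       + CASE WHEN s.data_length IS NOT NULL
              THEN '(' + CAST(s.data_length AS varchar(10)) + ')'
              ELSE '' END
       + ';'
FROM dbo.MetaStaging AS s
LEFT JOIN FirstRows AS f
  ON f.physical_table_name = s.physical_table_name AND f.FirstRowID = s.RowID
WHERE f.physical_table_name IS NULL
ORDER BY OrderID;

Sorting by RowID guarantees each CREATE TABLE (the lowest RowID in its group) precedes that table's ADD column commands.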

Related

Importing Excel Data into Access Using Index - How to Prevent Popup Message

I am trying to use an index to import monthly data from Excel into a table (by appending new records to the table).
I have created a unique index in the table being posted to, to prevent the import of duplicate records.
However, when I click import (and assuming the records already exist), it still gives the user the option to import the same records again by clicking Yes on a confirmation message.
How would I get the macro not to run if the entries already exist, and also not show the user that message?
I have tried suppressing messages temporarily using a bit of code, but that just processes the import anyway and re-adds the entries to the table again.
Thanks
Excel imports often have issues (even if you haven't found any in this particular file), especially if the Excel file is created or edited by humans.
My suggestion would be to import the file into a temporary table consisting of all text fields (F1, F2, etc) and an identity field.
Using all text fields, all data should import without any conversion problems.
Don't use the first row as column heads. Import it as the first row of data. That way you can easily check for correct column headings.
If correct, just delete that row.
Then validate and scrub the data using queries and/or code against the temporary table to identify any data that does not meet the requirements of your permanent tables, e.g. SELECT * FROM TempImport WHERE Not IsNumeric(F3) (the table name here assumes the temporary table is called TempImport).
Then remove any leading or trailing spaces, double spaces, etc., and do whatever other cleanup you might like.
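As a sketch of that cleanup (Access SQL, against the same hypothetical TempImport table of text fields):

-- Trim leading/trailing spaces and replace doubled spaces (repeat for
-- each field, and re-run if fields can contain longer runs of spaces)
UPDATE TempImport
SET F2 = Trim(Replace(F2, '  ', ' '));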
Then use a query with an outer join (on your PK fields) to append rows from the temp table that are not already in the permanent table.
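The final append might look like this unmatched-records query (again with hypothetical names: staging table TempImport appending into a permanent Customers table keyed on CustID):

-- Append only rows whose key is not already in the permanent table
INSERT INTO Customers (CustID, FirstName, LastName)
SELECT t.F1, t.F2, t.F3
FROM TempImport AS t LEFT JOIN Customers AS c
     ON t.F1 = c.CustID
WHERE c.CustID IS NULL;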

Fastest-Cleanest way to update database (mysql large tables)

I have a website fed from large MySQL tables (>50k rows in some tables). Let's call one table "MotherTable". Every night I update the site with a new CSV file (produced locally) that has to replace the "MotherTable" data.
The way I currently do this (I am not an expert, as you can see) is:
- First, I TRUNCATE the MotherTable table.
- Second, I import the CSV file into the empty table, with columns separated by "/" and skipping 1 line (roughly the statements sketched below).
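In MySQL terms, that procedure presumably looks something like this (the file path is illustrative):

TRUNCATE TABLE MotherTable;

LOAD DATA INFILE '/path/to/new_data.csv'
INTO TABLE MotherTable
FIELDS TERMINATED BY '/'
IGNORE 1 LINES;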
As the CSV file is not very small, there are some seconds (or even a minute) when MotherTable is empty, so web users running SELECTs on this table find nothing.
Obviously, I don't like that. Is there any procedure to update MotherTable in a way that users notice nothing? If not, what would be the quickest way to update the table with the new CSV file?
Thank you!

Bulk CSV File Import in MySQL, Removing Duplicates While Importing Dynamic Columns from CSV

I have to import CSV files for different clients into my system, some separated with [,], some with [|], etc. The files are always very big.
While importing I need to filter out duplicate records; duplicate records should not be inserted into the DB.
The problem is that the columns can be different for different clients. I have a database for every client into which I import the CSV data every day, week, or month, depending on the client's needs. I keep the data from every import so we can generate reports, and our system does further processing on the data we receive in the CSV file after each import.
Data structure example:
Client 1 database:
First_Name | Last_Name | Email | Phone | Etc…
95% of the data is the same in every new CSV file. Some new records come in and some records are deleted from the CSV, so our system only processes the newly imported records.
Currently we import the data into a new table every time, naming the table with a timestamp so we can keep track of imports. It is an expensive process that duplicates records and tables.
I'm thinking about a solution and I need your suggestions on it.
Keep one table, and every time I import CSV file data into it, alter the existing table to add a new column whose name is the current date (byte or Boolean), set to true/false on import?
My other question is about the first time I import a CSV file; I need to write a script:
While importing the CSV data, if the table already exists then my date logic applies; if the table does not exist, it should create a table using the given or provided "client name" as the table name. The challenge is the columns: I don't know them in advance, so it should create the columns from the CSV file.
If the table already exists and new records come in, it should insert them; otherwise, update.
Is this doable in MySQL?
I also have to do something for MSSQL, but right now I need a solution for MySQL.
Please help me... I'm not good at MySQL :(
You can certainly do an INSERT ... ON DUPLICATE KEY UPDATE statement when importing each record.
See here:
https://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html
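For example (hypothetical table and column names, assuming a UNIQUE key on email so that re-imported rows update in place instead of duplicating):

-- Requires a unique key, e.g.:
-- ALTER TABLE client1_contacts ADD UNIQUE KEY uq_email (email);
INSERT INTO client1_contacts (first_name, last_name, email, phone)
VALUES ('Ada', 'Lovelace', 'ada@example.com', '555-0100')
ON DUPLICATE KEY UPDATE
    first_name = VALUES(first_name),
    last_name  = VALUES(last_name),
    phone      = VALUES(phone);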
I propose you create a script to dynamically create your table if it doesn't exist.
What language would you use to insert your CSV?

Infer table structure from file in MySQL

Another posting said there is a way to infer the table columns from a data file using phpMyAdmin. I haven't found documentation on this; can you point me to it? Does it only use the header row, or does it also sample the data to infer the data types?
I'm trying to create several tables in MySQL from data files, which have roughly 100 columns each, so I don't want to write the SQL DDL to create the tables manually.
Thanks!

Can I import tab-separated files into MySQL without creating database tables first?

As the title says: I've got a bunch of tab-separated text files containing data.
I know that if I use 'CREATE TABLE' statements to set up all the tables manually, I can then import the files into the waiting tables using 'load data' or 'mysqlimport'.
But is there any way in MySQL to create tables automatically based on the tab files? Seems like there ought to be. (I know that MySQL might have to guess the data type of each column, but you could specify that in the first row of the tab files.)
No, there isn't. You need to CREATE a TABLE first in any case.
Automatically creating tables and guessing field types is not part of the DBMS's job. That is a task best left to an external tool or application (which then generates the necessary CREATE statements).
If you're willing to type the data types into the first row, why not type a proper CREATE TABLE statement?
Then you can export the Excel data as a .txt file and use
LOAD DATA INFILE 'path/file.txt' INTO TABLE your_table;
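Tab is the default field separator for LOAD DATA INFILE, so no FIELDS TERMINATED BY clause is needed for tab-separated files; if your first row holds the headers or type annotations you mentioned, add IGNORE 1 LINES.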