Implementing condition in SSIS

I'm importing data from a txt file into a SQL Server table. That part works fine.
Every day the txt file is deleted and a new one is generated (i.e. yesterday it held data for 3 February, today for 4 February, in a Date column).
When I run the package, I want it to check whether that Date value already exists in the database table. If it exists, skip the import; if it doesn't, import. I also want to save that Date value in a variable for further manipulation. How can I accomplish that?

Suppose your source file has the format and data below:
id | product | dateLoad
1 | dell | 25-01-2016 16:23:14
2 | hp | 25-01-2016 16:23:15
3 | lenovo | 25-01-2016 16:23:16
and your destination table is defined as below:
create table stack(id int,product varchar(20),dateLoad smalldatetime);
In your SSIS data flow, add a Derived Column transformation to convert the smalldatetime to a date, e.g. with an expression such as (DT_DBDATE)dateLoad.
Secondly, add a Lookup transformation. In the General tab of the Lookup Transformation Editor, go to "Specify how to handle rows with no matching entries" and select "Redirect rows to no match output". In the Connection tab, add a connection to the target table; I used a SQL query that converts the smalldatetime to date.
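A minimal sketch of that lookup query, assuming the stack table above (the distinct converted dates are what the lookup matches the incoming rows against):

SELECT DISTINCT CAST(dateLoad AS date) AS dateLoad
FROM stack;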
In the Columns tab, map the derived date column from the data flow to the dateLoad column returned by that query.
Finally, connect the Lookup to your target table and select the Lookup No Match Output, so that only rows whose date is not already in the table get inserted.
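For the variable part of the question, one option is an Execute SQL Task after the data flow, with a single-row result set mapped to a package variable. Assuming each daily file carries a single, newest date, a query along these lines would capture it:

SELECT MAX(CAST(dateLoad AS date)) AS LoadDate
FROM stack;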
On the first execution, 3 rows were inserted because the date was not yet in my table.
I executed it another time and got 0 rows, because the date was already in my table.
I hope that helps.

Related

MySQL for Excel - Imported order error

I'm using the MySQL for Excel plug-in (version 1.3.7) to import data from my MySQL database into Excel. However, Excel is changing the order of the columns (to alphabetical order) while the data rows remain in the original order.
The data rows appear in the order I want, but the header row is wrong!
For example:
If my table is (in MySQL Workbench):
id | Name | Date_of_birth
01 | Test1 | 01/01/2001
02 | Test2 | 02/02/2002
Excel tries to import it as:
Date_of_birth | id | Name ---> (alphabetical order)
01 | Test1 | 01/01/2001
02 | Test2 | 02/02/2002
Because the "Name" column is a varchar(100), it does not accept the DATE-type values that now end up below it.
So I can not import anything into Excel.
The only way I've found to solve my problem is to put my table's columns in alphabetical order (inside MySQL Workbench). But that is very inefficient, and I don't want to do it for every table.
Could you help me?
Thank you very much for your attention.
If you are copying and pasting, try using the "text to columns" button in Excel under the Data tab.
Excel shouldn't be sorting these automatically. Start with a blank worksheet if you can and see if you have the same problem.
Otherwise, please post how you are moving the data from Workbench to Excel; that is likely where the problem lies.
I got stuck on this for a while; I am surprised I could not find more complaints about this issue.
Deselecting the MySQL add-in, restarting Excel, and then reselecting the add-in did the trick for me.
To find the add-in:
File -> Options -> Add-ins -> Manage: COM Add-ins -> Go

Importing CSV into MySQL via cron

I am a complete noob when it comes to MySQL databases.
What I want to achieve is this: I have an SAP B1 database, and I am going to export data from the SQL Server out to a CSV; from there I will send the CSV through to my web server.
What I then want to do is load the CSV into a MySQL database on a scheduled (daily) basis via a cron job.
Here is the data that i will likely have in multiple csvs:
orders
invoices
credits
payments
Would I create a database for each, or have them all within one database in phpMyAdmin?
Also, let's take orders for example: would I create two tables, one for the order header information and another for the order lines?
An example of the invoices CSV would contain the following fields:
customernumber
customername
invoicenumber
purchaseordernumber
documentdate
freightamount
productcode
productname
barcode
quantity
price ex tax
price inc tax
RRP price
tax amount
doc total inc tax
Once the data is in the tables, I will go about developing a secure website/application for my company that will be used by internal staff as well as customers.
Any advice would be appreciated.
Regards,
Rick
One way to look at CSV files is that each is a table:
Header1,Header2,Header3
Value1,Value2,Value3
...,...,...
->
Header1 | Header2 | Header3
---------------------------
Value1 | Value2 | Value3
... | ... | ...
In MySQL, a single database can have many tables. So for your example, you may want a single database with a table for each CSV file.
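A minimal sketch of that layout; the database name, column names, and types below are assumptions based on the invoice fields listed in the question:

CREATE DATABASE sapb1;

CREATE TABLE sapb1.invoices (
  customernumber      VARCHAR(30),
  customername        VARCHAR(100),
  invoicenumber       VARCHAR(30),
  purchaseordernumber VARCHAR(30),
  documentdate        DATE,
  freightamount       DECIMAL(10,2),
  productcode         VARCHAR(30),
  productname         VARCHAR(100),
  barcode             VARCHAR(30),
  quantity            INT,
  price_ex_tax        DECIMAL(10,2),
  price_inc_tax       DECIMAL(10,2),
  rrp_price           DECIMAL(10,2),
  tax_amount          DECIMAL(10,2),
  doc_total_inc_tax   DECIMAL(10,2)
);

As for orders: splitting them into an order header table (one row per order) and an order lines table (one row per product line, keyed by the order number) is the usual design. Your daily cron job would then run a LOAD DATA LOCAL INFILE into each table.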

Importing a CSV file into a table with a different number of columns without a bridge / temp table

Say I have a CSV file with 3 columns and a destination table with 5 columns (3 identical to the CSV columns, and 2 more). All rows have data for the same number of columns.
CSV:
id | name | rating
---|-----------|-------
1 | radiohead | 10
2 | bjork | 9
3 | aqua | 2
SQL table:
id | name | rating | biggest_fan | next_concert
Right now, in order to import the CSV file, I create a temporary table with 3 columns, then copy the imported data into the real table. But this seems silly, and I can't seem to find a more efficient solution.
Isn't there a way to import the file directly into the destination table, while generating NULL / default values in the columns that appear in the table but not in the file?
I'm looking for a SQL / phpMyAdmin solution
No, I don't think there's a better way. An alternative would be to use a text-manipulation program (sed, awk, perl, python, ...) to add two commas to the end of each line. Even if your column order didn't match, phpMyAdmin has a field for changing the order when importing a CSV; however, it still seems to require the proper number of columns. Whether that's more or less work than what you're already doing is up to you, though.
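That said, if you can run the import as plain SQL rather than through phpMyAdmin's CSV wizard (the table name bands here is an assumption), MySQL's LOAD DATA with an explicit column list handles this case directly, because columns not named in the list are set to their default values:

LOAD DATA LOCAL INFILE 'bands.csv'
INTO TABLE bands
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(id, name, rating);
-- biggest_fan and next_concert are not named in the column list,
-- so they receive their defaults (NULL, assuming nullable columns);
-- drop IGNORE 1 LINES if the file has no header row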

MySQL - LOAD DATA LOCAL INFILE - how to avoid inserting rows containing invalid data

I'm importing data from a .csv file into a MySQL DB using a LOAD DATA LOCAL INFILE query.
The .csv contains the following:
ID | Name | Date | Price
01 | abc | 13-02-2013 | 1500
02 | nbd | blahblahbl | 1000
03 | kgj | 11-02-2012 | jghj
My MySQL table contains the following columns:
Id INTEGER
Name VARCHAR(100)
InsertionTimeStamp DATE
Price INTEGER
The MySQL query to load the .csv data into the table above:
LOAD DATA LOCAL INFILE 'UserDetails.csv' INTO TABLE userdetails
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(Id,Name,InsertionTimeStamp,Price)
set InsertionTimeStamp = str_to_date(@d,'%d-%m-%Y');
When I executed the query, 3 records were inserted into the table, with 2 warnings:
a) Incorrect datetime value: 'blahblahbl' for function str_to_date
b) Data truncated at row 3 (because of the invalid INTEGER in the Price column)
Questions
1. Is there any way to avoid inserting rows that raise warnings/errors, i.e. rows with invalid data? I don't want Row 2 and Row 3 to be inserted, as they contain invalid data.
2. For the "Incorrect datetime value" warning above, can I also get the row number?
Basically I want the exact warning/error together with its row number.
I think it would be much easier if you validated the input with some other language (for example PHP): iterate through the lines of the CSV and validate each field before inserting it.
If you have to stick to pure SQL, filtering through a staging table is one option.
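A minimal sketch of that staging route (the staging table name and the integer check are assumptions): load everything as plain text first, then copy across only the rows that parse cleanly.

CREATE TABLE userdetails_staging (
  Id VARCHAR(20),
  Name VARCHAR(100),
  InsertionTimeStamp VARCHAR(20),
  Price VARCHAR(20)
);

LOAD DATA LOCAL INFILE 'UserDetails.csv' INTO TABLE userdetails_staging
FIELDS TERMINATED BY ','
IGNORE 1 LINES;

-- STR_TO_DATE returns NULL for values it cannot parse, so rows with
-- invalid dates or non-numeric prices are filtered out before the insert
INSERT INTO userdetails (Id, Name, InsertionTimeStamp, Price)
SELECT Id, Name, STR_TO_DATE(InsertionTimeStamp, '%d-%m-%Y'), Price
FROM userdetails_staging
WHERE STR_TO_DATE(InsertionTimeStamp, '%d-%m-%Y') IS NOT NULL
  AND Price REGEXP '^[0-9]+$';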
You can try the Data Import tool (CSV or TXT import format) in dbForge Studio for MySQL.
In the Data Import wizard, uncheck the "Use bulk insert" option on the Modes page, and check the "Ignore all errors" option on the Errors handling page; that will let you skip the rows that fail to import.
I know you are trying to skip problematic rows, but don't you think you have a mistake in your LOAD DATA command? Shouldn't it be:
LOAD DATA LOCAL INFILE 'UserDetails.csv' INTO TABLE userdetails
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(Id,Name,@d,Price)
set InsertionTimeStamp = str_to_date(@d,'%d-%m-%Y');
Shouldn't you be using the variable name (@d) in the column list instead of the actual column name (InsertionTimeStamp)? That could be the reason you are getting the error message about the datetime value.

MySQL: mysqlimport to import a comma-delimited file - first column = ID which is NOT in the file to be imported

Hi folks, I am trying to import a very large file that has moisture data recorded every minute of every day for 20 cities in the US.
I have 1 table that I named "cityname" and this table has 2 columns:
-city_ID <- INT and is the primary key which increments automatically
-city_name <- character
I have created another table named "citymoisture" and this table has 7 columns:
-city_ID <- INT and is the primary key but does NOT increment automatically
-date_holding VARCHAR(30)
-time_holding VARCHAR(30)
-open
-high
-low
-close
The date_holding column is meant to house the date data, but because the format isn't what MySQL expects (i.e. it is m/d/y), I want to store it in this column initially and convert it later (unless there is a way to convert it while the data is being imported?). Similarly, the time_holding column holds the time, which appears as hh:mm:ss AM (or PM). I want to import only the hh:mm:ss part and leave off the AM/PM.
In any case the file that I want to import has SIX columns:
date, time, open, high, low, close.
I want to ensure that the data being imported has the correct city_ID set to match the city_ID in the 'cityname' table. So for example:
city_ID city_name
20 Boston
19 Atlanta
So when the moisture data for Boston is imported into the citymoisture table the city_ID column is set to 20. Similarly when the data for Atlanta is imported into the citymoisture table the city_ID column is set to 19. The citymoisture table will be very large and will store the 1 minute moisture data for 20 cities going forward.
So my questions are:
1) Is there a way to import the contents of the files into columns 2-7 and manually specify the value of the first column (city_ID)?
2) Is there any way to convert the dates on the fly while I import, or do I have to store the data first and then convert it into what would then be the final table?
3) Same question as #2, but for the time column.
I greatly appreciate your help.
A sample of the moisture data file appears below:
1/4/1999,9:31:00 AM,0.36,0.43,0.23,0.39
1/4/1999,9:32:00 AM,0.39,0.49,0.39,0.43
.
.
.
I'm not sure how the city_ID in the citymoisture table is going to get set. But if there were a way to do that, then I could run join queries based on both tables, i.e. there would be one record per city per date/time.
STR_TO_DATE should work for getting your date and time:
mysql> SELECT STR_TO_DATE('01/01/2001', '%m/%d/%Y');
+---------------------------------------+
| STR_TO_DATE('01/01/2001', '%m/%d/%Y') |
+---------------------------------------+
| 2001-01-01                            |
+---------------------------------------+
1 row in set (0.00 sec)

mysql> SELECT STR_TO_DATE('10:53:11 AM','%h:%i:%s %p');
+------------------------------------------+
| STR_TO_DATE('10:53:11 AM','%h:%i:%s %p') |
+------------------------------------------+
| 10:53:11                                 |
+------------------------------------------+
1 row in set (0.00 sec)
How are you going to determine which city each row's data belongs to "manually"? Can you include a sample row of what the import data file looks like? Assuming you somehow know which city a file belongs to (substitute it in the code below):
It looks like you are going to want to use LOAD DATA INFILE.
If the city you want to insert data for is Boston, from a file named 'Boston.dat', and an entry exists in your cityname table:
-- Note: LOAD DATA INFILE requires a literal file name (a user variable
-- will not work there), so build the statement per file
SET @c_name = 'Boston';
LOAD DATA INFILE 'Boston.dat'
INTO TABLE citymoisture
FIELDS TERMINATED BY ','
(@date, @time, open, high, low, close)
SET city_ID = (SELECT city_ID FROM cityname WHERE city_name = @c_name),
    date_holding = STR_TO_DATE(@date, '%m/%d/%Y'),
    time_holding = STR_TO_DATE(@time, '%h:%i:%s %p');
Leaving off the AM/PM portion of the time just sounds like a bad idea, though.