I have a table named 'table1' with the columns:
CSMembers,BCID,Total,Email.
The Excel sheet contains the data for this table in this format:
CSMembers BCID Total Email
abc 2,5,7,9,12,17,22,32 10,000 abc@gmail.com
xyz 1,3,5,7,9,12,17,20,22,33 12,500 xyz@gmail.com
pqr 2,5,7,9,12,17,22,32 11,000 pqr@gmail.com
ttt 2,5,7,9,12,17,22 9,800 ttt@gmail.com
The .csv file of this is:
CSMembers,BCID,Total,Email
abc,"2,5,7,9,12,17,22,32","10,000",abc@gmail.com
xyz,"1,3,5,7,9,12,17,20,22,33","12,500",xyz@gmail.com
pqr,"2,5,7,9,12,17,22,32","11,000",pqr@gmail.com
ttt,"2,5,7,9,12,17,22","9,800",ttt@gmail.com
I have used the following code:
LOAD DATA LOCAL INFILE 'H:/abc.csv' INTO TABLE table1
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(CSMembers, BCID, Total, Email);
I am getting the following output:
CSMembers BCID Total Email
abc 2 10 abc@gmail.com
xyz 1 12 xyz@gmail.com
pqr 2 11 pqr@gmail.com
ttt 2 9 ttt@gmail.com
But I need this output:
CSMembers BCID Total Email
abc 2,5,7,9,12,17,22,32 10,000 abc@gmail.com
xyz 1,3,5,7,9,12,17,20,22,33 12,500 xyz@gmail.com
pqr 2,5,7,9,12,17,22,32 11,000 pqr@gmail.com
ttt 2,5,7,9,12,17,22 9,800 ttt@gmail.com
Can anyone please tell me what is wrong?
Should I change the code, the CSV file content, or both?
Please help.
Your schema for the table is possibly wrong: BCID and Total are probably defined as some kind of INT instead of a string type. A numeric column has no way of knowing whether a comma separates fields or merely groups digits so the number is easier to read. Also, input to a numeric column is typically accepted only up to the first non-numeric character, i.e. the comma, which is why only the first number of each list survives.
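A minimal sketch of what the corrected schema might look like, assuming the table and column names from the question (the VARCHAR lengths are guesses):

-- Hypothetical corrected schema: keep the comma-separated values as text
CREATE TABLE table1 (
    CSMembers VARCHAR(50),
    BCID      VARCHAR(100),  -- '2,5,7,9,12,17,22,32' stored as a string
    Total     VARCHAR(20),   -- '10,000' stored as a string (or strip the comma and use INT)
    Email     VARCHAR(100)
);

With BCID and Total as string columns, the same LOAD DATA statement should load the full values.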
Let's say I have a text document with two columns. Column 1 contains a list of names while column 2 contains a list of values relating to those names. The problem is that column 1 may have the same name repeating on different rows. This is not an error, though.
For ex:
Frank Burton 13
Joe Donnigan 22
John Smith 45
Cooper White 53
Joe Donnigan 19
How can I organize my data so that column 1 contains unique names and column 2 contains the summed values relating to column 1? What can I do if I have this data in Excel?
For ex:
Frank Burton 13
Joe Donnigan 41
John Smith 45
Cooper White 53
Thanks a bunch!
In MySQL you could write a query similar to:
SELECT col1, SUM(col2) FROM TableName GROUP BY col1
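As a rough, self-contained sketch with the sample data above (the table and column names here are made up for illustration):

-- Hypothetical table holding the sample rows
CREATE TABLE people (name VARCHAR(50), val INT);
INSERT INTO people VALUES
    ('Frank Burton', 13),
    ('Joe Donnigan', 22),
    ('John Smith', 45),
    ('Cooper White', 53),
    ('Joe Donnigan', 19);

-- Collapse duplicate names and sum their values
SELECT name, SUM(val) AS total
FROM people
GROUP BY name;
-- Joe Donnigan comes back as a single row with 41 (22 + 19)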
In Excel you could use a pivot table to group the information together.
Insert a pivot table, select the range, and put the name column in Rows and the value column in Values (summed).
I have a file called /tmp/files.txt in the following structure:
652083 8 -rw-r--r-- 1 david staff 1055 Mar 15 2012 ./Highstock-1.1.5/examples/scrollbar-disabled/index.htm
652088 0 drwxr-xr-x 3 david staff 102 May 31 2012 ./Highstock-1.1.5/examples/spline
652089 8 -rw-r--r-- 1 david staff 1087 Mar 15 2012 ./Highstock-1.1.5/examples/spline/index.htm
652074 0 drwxr-xr-x 3 david staff 102 May 31 2012 ./Highstock-1.1.5/examples/step-line
652075 8 -rw-r--r-- 1 david staff 1103 Mar 15 2012 ./Highstock-1.1.5/examples/step-line/index.htm
I want to insert the filename (col 9), filesize (col 7), and last_modified (col 8) into a MySQL table, paths.
To insert the entire line, I can do something like:
LOAD DATA INFILE '/tmp/files.txt' INTO TABLE path
How would I selectively insert the required information into the necessary columns here?
Specify dummy MySQL user variables (e.g. @dummy1) as the target for the unwanted values.
LOAD DATA INFILE '/tmp/files.txt'
INTO TABLE path
(@d1, @d2, @d3, @d4, @d5, @d6, filesize, @mon, @day, @ccyy_or_hhmi, filename)
SET last_modified = CONCAT(@mon,' ',@day,' ',@ccyy_or_hhmi)
With that, the first six values from each input line are ignored (they are assigned to the specified user variables, which we disregard). The seventh value is assigned to the filesize column, the eighth through tenth values (the month, day, and year/time) are assigned to user variables, and the eleventh value is assigned to the filename column.
Finally, we use an expression to concatenate the month, day, and year/time values together, and assign the result to the last_modified column. (NOTE: the resulting string is not guaranteed to be suitable for assigning to a DATE or DATETIME column, since that last value can be either a year or a time.)
(I've made the assumption that table path has columns named filesize, last_modified, and filename, and that there aren't other columns in the table that need to be set.)
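If last_modified really is a DATE or DATETIME column, here is a rough sketch of one way to normalize the value inside the same LOAD DATA statement, reusing the user variables above (the format strings and the current-year substitution are assumptions, not something the original answer guarantees):

SET last_modified = CASE
    WHEN @ccyy_or_hhmi LIKE '%:%' THEN
        -- an HH:MM value means a recently modified file; substituting
        -- the current year is only an approximation
        STR_TO_DATE(CONCAT(YEAR(CURDATE()),' ',@mon,' ',@day,' ',@ccyy_or_hhmi), '%Y %b %e %H:%i')
    ELSE
        STR_TO_DATE(CONCAT(@ccyy_or_hhmi,' ',@mon,' ',@day), '%Y %b %e')
END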
Followup
If the data to be loaded is the output of a find command, I would be tempted to use the -printf action of find, rather than -ls, so I would have control over the output produced. For example:
find . -type f -printf "%b\t%TY-%Tm-%Td %TH:%TM\t%p\n" >/tmp/myfiles.txt
That would give you three fields, separated by tabs:
size_in_blocks modified_yyyy_mm_dd_hh_mi filename
That would be very easy to load into a MySQL table:
LOAD DATA INFILE '/tmp/myfiles.txt'
INTO TABLE path
(filesize, last_modified, filename)
I have an Excel source and a SQL Server table.
The Excel source columns are:
Mno Price1 Price2
111 10 20
222 30 25
333 40 30
444 34 09
555 23 abc
SQL Server table (named Product):
PId Mno Sprice BPrice
1 111 3 50
2 222 14 23
3 444 32 34
4 555 43 45
5 666 21 67
I want to compare the Excel source Mno (model number) with the SQL Server Product table Mno, and if they match I want to update the Product table's SPrice and BPrice.
Please tell me what steps I need to follow.
I also want to validate that Excel sheet, because the Price2 column contains string values;
if there is a string value, I want to send a mail saying which row's data is wrong.
I am new to SSIS, so please give me details.
Read your new data in a source, and use a Lookup component against the existing data. Direct matched rows to an OLE DB Command for the update, and non-matches to a destination for inserts (if you want to enter new products).
Personally I think the simplest way to do this is to use a data flow to bring the Excel file into a staging table and do any clean-up if need be. Then, as the next step in the control flow, have an Execute SQL Task that does the update. Or, if you need either an update or an insert when the record is new, use a MERGE statement in the Execute SQL Task.
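A rough sketch of what that MERGE might look like, assuming a staging table named ExcelStaging with the columns shown above, and guessing that Price1 maps to SPrice and Price2 to BPrice (all of these names and mappings are assumptions):

-- Hypothetical MERGE from the staging table into Product
MERGE Product AS tgt
USING ExcelStaging AS src
    ON tgt.Mno = src.Mno
WHEN MATCHED THEN
    UPDATE SET tgt.SPrice = src.Price1,
               tgt.BPrice = src.Price2
WHEN NOT MATCHED BY TARGET THEN
    -- only needed if new model numbers should be inserted;
    -- assumes PId is an identity column
    INSERT (Mno, SPrice, BPrice)
    VALUES (src.Mno, src.Price1, src.Price2);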
You can use a Merge Join transformation with a full outer join (remember to sort both inputs before they reach the Merge Join), then send the output to a Conditional Split transformation. The Conditional Split can determine whether a row needs to be updated, inserted, or deleted and direct it to the appropriate component.
This was off the top of my head, and there may be a simpler transform for this. I haven't had the opportunity to work with SSIS in almost a year, so I might be getting a bit rusty.