DATE       NAME   CX#     DATA
11/7/2021  Alex   CX55    1.34
11/7/2021  Linda  CX43    22.9
11/7/2021  Loki   CX109   3.43
11/8/2021  Alex   CX 12   23
11/8/2021  Linda  CX 113  2.49
What I am trying to do is paste in a master sheet of data for each week, broken down by date and by person. I then need a lookup that pulls all of Alex's rows into another sheet for charting. Since the dates change every week as new data is pasted in, I cannot do a normal VLOOKUP matching on unique static strings. Not only do the dates change, but the names themselves vary day to day and week to week. Ideally I'd end up with something like this:
DATE       NAME  DATA
11/7/2021  Alex  1.34
11/8/2021  Alex  23
using logic flexible enough to group when both the data column and the name column are dynamic. Maybe I am just not thinking of it in a way that maps onto the functions available. Any guidance would help!
try:
=FILTER('Sheet1'!A:D, 'Sheet1'!B:B="Alex")
If your issue is that the NAME column is not always column B, you can do:
=ARRAY_CONSTRAIN(QUERY({'Sheet1'!A:D,
FILTER('Sheet1'!A:D, 'Sheet1'!A1:D1="NAME")},
"where Col5 = 'Alex'", 0), 9^9, 4)
I need to convert data from two sheets of one Excel file into CSV files. The data section starts at the 8th row and 2nd column of each sheet; the column header is on the 7th row, followed by the data. How can this be done with Unix shell scripting?
https://linoxide.com/linux-how-to/methods-convert-xlsx-format-files-csv-linux-cli/
I read a couple of articles, but none of them explain how to start reading/converting a sheet from a certain row and column.
The sheet data in Excel looks like this:
This is the information of employee in company FRDN
This is data of year 2019
EMPLOYEE_ID  FIRST_NAME  EMAIL     PHONE_NUMBER  HIRE_DATE  JOB_ID   SALARY  COMMISSION_PCT  MANAGER_ID  DEPARTMENT_ID  LAST_NAME
100          Steven      SKING     515.123.4567  6/17/1987  AD_PRES  24000                               90             King
101          Neena       NKOCHHAR  515.123.4568  9/21/1989  AD_VP    17000                   100         90             Kochhar
102          Lex         LDEHAAN   515.123.4569  1/13/1993  AD_VP    17000                   100         90             De Haan
103          Alexander   AHUNOLD   590.423.4567  1/3/1990   IT_PROG  9000                    102         60             Hunold
104          Bruce       BERNST    590.423.4568  5/21/1991  IT_PROG  6000                    103         60             Ernst
A CSV file is needed for each sheet, with the data starting from that certain row and column.
This command will convert your Excel file into CSV:
libreoffice -display :0 --headless --convert-to csv --outdir "/path/" "/path/FileName.xls"
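The libreoffice command converts the whole sheet, so the rows above the header and the first column still need trimming afterwards. A minimal Python sketch of that post-processing step, assuming the output file name from the command above and the offsets from the question (header on row 7, data from column 2):

```python
import csv

# Trim a CSV produced by the libreoffice conversion: keep the header row
# (row 7) and everything below it, and drop the first column, since the
# real data block starts at row 8, column 2.
def trim_sheet(in_path, out_path, header_row=7, first_col=2):
    with open(in_path, newline="") as f:
        rows = list(csv.reader(f))
    # Rows/columns are 1-based in the question, so convert to 0-based slices.
    trimmed = [row[first_col - 1:] for row in rows[header_row - 1:]]
    with open(out_path, "w", newline="") as f:
        csv.writer(f).writerows(trimmed)
```

Running the conversion once per sheet and then `trim_sheet("Sheet1.csv", "Sheet1_trimmed.csv")` would give a CSV whose first line is the row-7 header.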
I have a table tab1 in MySQL:

Name    Value
john    22
parker  334
tony    44
After exporting this table to a .csv file, the name column shows as:

Name      Value
"john"    22
"parker"  334
"tony"    44
How can I eliminate those quotation marks (") from the name column?
Thanks in advance.
If you use phpMyAdmin: while exporting the table, clear the double quote from the "Columns enclosed with" textbox under Format-specific options (available under the export options).
If you export with a query, use `OPTIONALLY ENCLOSED BY` in the `INTO OUTFILE` clause; see "MySQL - export to csv some columns with quotes and some without" and "OPTIONALLY ENCLOSED BY not working correctly".
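If re-exporting isn't convenient, a rewrite pass over the file also works, because a CSV parser strips enclosure quotes on read and minimal quoting only re-adds them when a field actually needs them. A small Python sketch, assuming a comma-separated file (the file names are placeholders):

```python
import csv

# Re-write a CSV so plain fields like "john" lose their enclosing quotes.
# The reader strips the quotes on parse; QUOTE_MINIMAL only quotes fields
# that contain a delimiter, quote character, or newline.
def strip_quotes(in_path, out_path):
    with open(in_path, newline="") as f:
        rows = list(csv.reader(f))
    with open(out_path, "w", newline="") as f:
        csv.writer(f, quoting=csv.QUOTE_MINIMAL).writerows(rows)
```

After the pass, `"john",22` becomes `john,22`, while any field that genuinely needs quoting keeps it.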
I have 2 tables with a different number of columns, and I need to export the data to a text file using SSIS. For example, I have a customer table, tblCustomers, and an order table, tblOrders:
tblCustomers (id, name, address, state, zip)

id   name   address   state  zip
100  custA  address1  NY     12345
99   custB  address2  FL     54321
and
tblOrders (id, cust_id, name, quantity, total, date)

id  cust_id  name   quantity  total   date
1   100      candy  10        100.00  04/01/2014
2   99       veg    1         2.00    04/01/2014
3   99       fruit  2         0.99    04/01/2014
4   100      veg    1         3.99    04/05/2014
The resulting file would be as follows:
"custA", "100", "recordtypeA", "address1", "NY", "12345"
"custA", "100", "recordtypeB", "candy", "10", "100.00", "04/01/2014"
"custA", "100", "recordtypeB", "veg", "1", "3.99", "04/05/2014"
"custB", "99", "recordtypeA", "address2", "FL", "54321"
"custB", "99", "recordtypeB", "veg", "1", "2.00", "04/01/2014"
"custB", "99", "recordtypeB", "fruit", "2", "0.99", "04/01/2014"
Can anyone please guide me on how to do this?
I would create a Data Flow Task in an SSIS package. In that I would first add an OLE DB Source and point it at tblOrders. Then I would add a Lookup to add the data from tblCustomers, by matching tblOrders.Cust_id to tblCustomers.id.
I would use a SQL query that joins the tables and shapes the data, then use that as the source and export it.
Note that the first row has 6 columns and the second one has 7. It's generally difficult (well, not as easy as a standard flat file) to import these kinds of header/detail files. How is this file being used once created? If it needs to be imported somewhere, you'd be better off just joining the data up and having 10 columns, or exporting the tables separately.
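To make the interleaved header/detail shape concrete, here is a sketch in Python with the question's sample rows hard-coded; in SSIS the same ordering would come from the joined source query rather than nested loops:

```python
# Sample data from the question, hard-coded for illustration.
customers = [
    {"id": 100, "name": "custA", "address": "address1", "state": "NY", "zip": "12345"},
    {"id": 99,  "name": "custB", "address": "address2", "state": "FL", "zip": "54321"},
]
orders = [
    {"id": 1, "cust_id": 100, "name": "candy", "quantity": 10, "total": "100.00", "date": "04/01/2014"},
    {"id": 2, "cust_id": 99,  "name": "veg",   "quantity": 1,  "total": "2.00",   "date": "04/01/2014"},
    {"id": 3, "cust_id": 99,  "name": "fruit", "quantity": 2,  "total": "0.99",   "date": "04/01/2014"},
    {"id": 4, "cust_id": 100, "name": "veg",   "quantity": 1,  "total": "3.99",   "date": "04/05/2014"},
]

def q(*vals):
    # Quote every value and join with ", " to match the target layout.
    return ", ".join(f'"{v}"' for v in vals)

lines = []
for c in customers:
    # recordtypeA: one header line per customer
    lines.append(q(c["name"], c["id"], "recordtypeA", c["address"], c["state"], c["zip"]))
    # recordtypeB: one detail line per order belonging to that customer
    for o in orders:
        if o["cust_id"] == c["id"]:
            lines.append(q(c["name"], c["id"], "recordtypeB", o["name"], o["quantity"], o["total"], o["date"]))

print("\n".join(lines))
```

Ragged row widths like this are easy to write but hard to re-import, which is why flattening to a single 10-column join (or two separate files) is usually the safer layout.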
Let's say I have a text document with two columns. Column 1 contains a list of names, while column 2 contains a list of values relating to those names. The problem is that column 1 may have the same names repeating on different rows. This is not an error, though.
For ex:
Frank Burton 13
Joe Donnigan 22
John Smith 45
Cooper White 53
Joe Donnigan 19
What are the ways to organize my data so that column 1 has unique names and column 2 has the corresponding values summed together? What can I do if I have this data in Excel?
For ex:
Frank Burton 13
Joe Donnigan 41
John Smith 45
Cooper White 53
Thanks a bunch!
In MySQL you could write a query similar to:
SELECT col1, SUM(col2) FROM TableName GROUP BY col1;
In Excel you could use a pivot table to group the information: insert a pivot table, select the range, drag the name column into Rows and the value column into Values (set to Sum).
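The same group-and-sum logic can be sketched in Python with the question's sample data, in case the text file needs processing outside Excel:

```python
# (name, value) pairs from the question; names may repeat.
pairs = [
    ("Frank Burton", 13),
    ("Joe Donnigan", 22),
    ("John Smith", 45),
    ("Cooper White", 53),
    ("Joe Donnigan", 19),
]

# Sum the values per name. A plain dict keeps first-seen insertion order,
# matching the order of the desired output.
totals = {}
for name, value in pairs:
    totals[name] = totals.get(name, 0) + value

for name, total in totals.items():
    print(name, total)
```

This prints one line per unique name with the summed value, e.g. Joe Donnigan's 22 and 19 collapse to 41, matching the desired output above.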