How do I get the average from multiple columns?
For example:

Columns: ID  125Hz  250Hz  500Hz  750Hz  1000Hz  1500Hz  2000Hz  3000Hz  4000Hz  6000Hz  8000Hz
Values:   1     92     82     63     83      32      43      54      56      54      34      54
I want to get the average of all the columns except the ID. How do I do that?
You have to add the columns manually, since there are no built-in functions for horizontal aggregation:
select ([125Hz]+[250Hz]+[500Hz]+[750Hz]+[1000Hz]+[1500Hz]+[2000Hz]+[3000Hz]+[4000Hz]+[6000Hz]+[8000Hz])/11.0 as aveHz from table_name
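If any of the measurement columns can be NULL, the whole sum becomes NULL. A minimal sketch that guards against that with ISNULL (note it still divides by 11, so a missing reading pulls the average down):

select (isnull([125Hz],0) + isnull([250Hz],0) + isnull([500Hz],0) + isnull([750Hz],0)
      + isnull([1000Hz],0) + isnull([1500Hz],0) + isnull([2000Hz],0) + isnull([3000Hz],0)
      + isnull([4000Hz],0) + isnull([6000Hz],0) + isnull([8000Hz],0)) / 11.0 as aveHz
from table_name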
In SQL Server you can do it like this:
DECLARE @total int
DECLARE @query varchar(550)
DECLARE @ALLColumns VARCHAR(500)
SET @ALLColumns = ''

---- Build the column list as a string
SELECT @ALLColumns = @ALLColumns + '+' + '[' + sc.NAME + ']'
FROM sys.tables st
INNER JOIN sys.columns sc ON st.object_id = sc.object_id
WHERE st.name LIKE '%YOUR_TABLE_NAME%'
AND sc.NAME LIKE '[0-9]%'; -- [0-9]% picks only the columns whose names start with a digit

---- Get the total number of columns
SELECT @total = count(*) FROM sys.tables st
INNER JOIN sys.columns sc ON st.object_id = sc.object_id
WHERE st.name LIKE '%YOUR_TABLE_NAME%'
AND sc.NAME LIKE '[0-9]%'; -- [0-9]% picks only the columns whose names start with a digit

SET @query = 'SELECT SUM(' + SUBSTRING(@ALLColumns, 2, LEN(@ALLColumns)) + ')/'
    + CAST(@total as varchar(4)) + ' AS [AVG]
FROM [YOUR_TABLE_NAME]
GROUP BY [ID]'

--SELECT @query
EXECUTE(@query)
This will execute a query like this one:
SELECT SUM([125Hz]+[250Hz]+[500Hz]+[750Hz]+[1000Hz]+[1500Hz]+[2000Hz]
+[3000Hz]+[4000Hz]+[6000Hz]+[8000Hz])/11 AS [AVG]
FROM [YOUR_TABLE_NAME] GROUP BY [ID]
UPDATE
Add a column to store the average (I called it [AVG]) and change the value of @query to:

SET @query = '
CREATE TABLE #Medition (ID int, [AVG] decimal(18,4))

INSERT INTO #Medition (ID, [AVG])
SELECT ID, SUM(' + SUBSTRING(@ALLColumns, 2, LEN(@ALLColumns)) + ')/'
    + CAST(@total as varchar(10))
    + ' AS [AVG] FROM YOUR_TABLE_NAME GROUP BY ID

UPDATE YOUR_TABLE_NAME SET YOUR_TABLE_NAME.[AVG] = #Medition.[AVG]
FROM YOUR_TABLE_NAME INNER JOIN #Medition ON YOUR_TABLE_NAME.ID = #Medition.ID

DROP TABLE #Medition
'
Note: building query strings like this is a little ugly.
Another way to do it, without the magic number 11, though it is a little more verbose:
WITH t1 AS
(
SELECT * FROM myTable
WHERE (...) -- Should limit result to 1 row
),
t2 AS
(
SELECT col1 FROM t1
UNION ALL
SELECT col2 FROM t1
UNION ALL
(...)
)
SELECT AVG(col1) FROM t2;
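Written out for the question's columns (assuming the table is called myTable as in the sketch above and that ID identifies the row), that pattern might look like this; a nice side effect is that AVG ignores any NULL measurements automatically:

WITH t1 AS
(
    SELECT * FROM myTable WHERE ID = 1   -- limit to the row you care about
),
t2 AS
(
    SELECT [125Hz] AS val FROM t1
    UNION ALL SELECT [250Hz]  FROM t1
    UNION ALL SELECT [500Hz]  FROM t1
    UNION ALL SELECT [750Hz]  FROM t1
    UNION ALL SELECT [1000Hz] FROM t1
    UNION ALL SELECT [1500Hz] FROM t1
    UNION ALL SELECT [2000Hz] FROM t1
    UNION ALL SELECT [3000Hz] FROM t1
    UNION ALL SELECT [4000Hz] FROM t1
    UNION ALL SELECT [6000Hz] FROM t1
    UNION ALL SELECT [8000Hz] FROM t1
)
SELECT AVG(val * 1.0) AS aveHz FROM t2;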
This will display the average of those fields for each ID you have (note the division by 11.0 inside the AVG; on its own, AVG only averages across the rows within each ID group, not across the columns):
SELECT AVG(([125Hz]+[250Hz]+[500Hz]+[750Hz]+[1000Hz]+[1500Hz]+[2000Hz]
+[3000Hz]+[4000Hz]+[6000Hz]+[8000Hz])/11.0)
AS Average FROM table_name
GROUP BY ID
SELECT sum([125Hz] + [250Hz] + [500Hz] + [750Hz] + [1000Hz] + [1500Hz] + [2000Hz] + [3000Hz] +
[4000Hz] + [6000Hz] + [8000Hz])/11.0 as averageHz from table_name
Is it possible to get the max value of a single column which exists in the majority of tables within several different schemas?
If this were one or two tables, I could easily use:
SELECT 'schema1.table1' as rowsource, category, max(date_used) max_date_used from schema1.table1 group by category
UNION ALL
SELECT 'schema1.table2' as rowsource, category, max(date_used) max_date_used from schema1.table2 group by category
UNION ALL
SELECT 'schema2.table3' as rowsource, category, max(date_used) max_date_used from schema2.table3 group by category
UNION ALL
SELECT 'schema3.table4' as rowsource, category, max(date_used) max_date_used from schema3.table4 group by category
However, I am looking at having to query nearly 300 tables across 3 different schemas.
TIA for any advice/insight!
You could store the required tables from INFORMATION_SCHEMA in a temp table and then use dynamic SQL to go through each one of them. For example:
DROP TABLE IF EXISTS #Temp
CREATE TABLE #Temp -- the identity column will be used to iterate
(
    id INT IDENTITY,
    TableName VARCHAR(128),
    SchemaName VARCHAR(128)
)

INSERT INTO #Temp
SELECT TABLE_NAME, TABLE_SCHEMA
FROM INFORMATION_SCHEMA.TABLES
-- narrow this down with WHERE conditions as needed

DECLARE @SQL VARCHAR(MAX) = ''
DECLARE @Count INT = 1
DECLARE @SchemaName VARCHAR(128)
DECLARE @Table VARCHAR(128)

WHILE @Count <= (SELECT COUNT(*) FROM #Temp)
BEGIN
    SELECT @SchemaName = SchemaName FROM #Temp WHERE id = @Count
    SELECT @Table = TableName FROM #Temp WHERE id = @Count
    SELECT @SQL = @SQL + 'SELECT category, max(date_used) max_date_used from ' + @SchemaName + '.' + @Table + ' group by category
UNION ALL '
    SET @Count = @Count + 1
END

PRINT LEFT(@SQL, LEN(@SQL) - LEN('UNION ALL '))
After you check the printed result, change PRINT to EXEC if it looks correct to you.
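Once the printed statement looks right, the execution step might be as simple as this (a sketch reusing @SQL from the script above):

DECLARE @FinalSQL VARCHAR(MAX) = LEFT(@SQL, LEN(@SQL) - LEN('UNION ALL '))
EXEC (@FinalSQL)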
I need to separate values and store them in different variables in SQL. For example:
a='3100,3101,3102,....'
And the output should be
x=3100
y=3101
z=3102
.
.
.
create function [dbo].[udf_splitstring] (@tokens varchar(max),
    @delimiter varchar(5))
returns @split table (
    token varchar(200) not null )
as
begin
    declare @list xml
    select @list = cast('<a>'
        + replace(@tokens, @delimiter, '</a><a>')
        + '</a>' as xml)

    insert into @split
        (token)
    select ltrim(t.value('.', 'varchar(200)')) as data
    from @list.nodes('/a') as x(t)

    return
end
GO
declare @cad varchar(100) = '3100,3101,3102'
select *, ROW_NUMBER() over (order by token) as rn from udf_splitstring(@cad, ',')
token rn
3100 1
3101 2
3102 3
The results of the Parse TVF can easily be incorporated into a JOIN or an IN; a small sketch follows the sample results below.
Declare @a varchar(max)='3100,3101,3102'
Select * from [dbo].[udf-Str-Parse](@a,',')
Returns
RetSeq RetVal
1 3100
2 3101
3 3102
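For example, filtering another table with the parsed values (the Orders table and its OrderCode column are made up for illustration; @a is the variable declared above):

Select o.*
From dbo.Orders o   -- hypothetical table
Where o.OrderCode In (Select RetVal From [dbo].[udf-Str-Parse](@a,','))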
The UDF, if needed (much faster than recursive CTE, loop, and XML approaches):
CREATE FUNCTION [dbo].[udf-Str-Parse] (@String varchar(max), @Delimiter varchar(25))
Returns Table
As
Return (
    with cte1(N) As (Select 1 From (Values(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) N(N)),
         cte2(N) As (Select Top (IsNull(DataLength(@String),0)) Row_Number() over (Order By (Select NULL)) From (Select N=1 From cte1 a,cte1 b,cte1 c,cte1 d) A ),
         cte3(N) As (Select 1 Union All Select t.N+DataLength(@Delimiter) From cte2 t Where Substring(@String,t.N,DataLength(@Delimiter)) = @Delimiter),
         cte4(N,L) As (Select S.N,IsNull(NullIf(CharIndex(@Delimiter,@String,s.N),0)-S.N,8000) From cte3 S)
    Select RetSeq = Row_Number() over (Order By A.N)
          ,RetVal = LTrim(RTrim(Substring(@String, A.N, A.L)))
    From cte4 A
);
--Original Source http://www.sqlservercentral.com/articles/Tally+Table/72993/
--Much faster than str-Parse, but limited to 8K
--Select * from [dbo].[udf-Str-Parse-8K]('Dog,Cat,House,Car',',')
--Select * from [dbo].[udf-Str-Parse-8K]('John||Cappelletti||was||here','||')
I suggest using the following query; it's much faster than alternatives such as CROSS APPLY with a UDF.
SELECT
     Variables
    ,S_DATA
FROM (
    SELECT
         Variables
        ,CASE WHEN LEN(LIST2) > 0 THEN LTRIM(RTRIM(SUBSTRING(LIST2, NUMBER+1, CHARINDEX(',', LIST2, NUMBER+1)-NUMBER - 1)))
              ELSE NULL
         END AS S_DATA
        ,NUMBER
    FROM (
        SELECT Variables
              ,',' + COMMA_SEPARATED_COLUMN + ',' LIST2
        FROM Tb1
    ) DT
    LEFT OUTER JOIN TB N ON (N.NUMBER < LEN(DT.LIST2)) OR (N.NUMBER = 1 AND DT.LIST2 IS NULL)
    WHERE SUBSTRING(LIST2, NUMBER, 1) = ',' OR LIST2 IS NULL
) DT2
WHERE S_DATA <> ''
You also need to create the numbers table TB (with a NUMBER column) before running the query above:
CREATE TABLE TB (Number INT)

DECLARE @I INT = 0
WHILE @I < 1000
BEGIN
    INSERT INTO TB VALUES (@I)
    SET @I = @I + 1
END
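If the WHILE loop feels slow, the same 0-999 numbers table can be filled set-based; a sketch using sys.all_objects as a row source (a common trick, assuming it holds at least 1000 rows):

INSERT INTO TB (Number)
SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
FROM sys.all_objects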
I have an OrderInfo table which contains OrderTime (date + time), OrderTrackDate (date), and OrderTotal (sales amount) columns, as shown in the following image.
1. Table 1 (original table)
Here is the code I have tried so far before pivoting.
SELECT CAST(DATEPART(DAY, OrderTime) as varchar)+'/'+ CAST(DATEPART(MONTH, OrderTime) as varchar)+'/'+CAST(DATEPART(year,OrderTime) as varchar) as daymonthyear,
ROUND(SUM(OrderTotal),2) AS Sales, COUNT(OrderTotal) AS Orders
,datepart(hour,OrderTime) as HH
FROM OrderInfo where OrderTime >= '5/24/2013' AND OrderTrackDate <='5/30/2013'
GROUP BY DATEPART(year, OrderTime),DATEPART(MONTH, OrderTime),DATEPART(day, OrderTime),datepart(hour,OrderTime)
Order By daymonthyear,HH
2. Table 2 (grouped by date and hour from Table 1)
How do I pivot dynamically and show sales amount per hour based on Table2?
DESIRED OUTPUT
First of all, create a temp table that will be used in three places: selecting columns for the pivot, replacing NULL with zero, and inside the pivot itself.
SELECT DISTINCT
SUM(ORDERTOTAL) OVER(PARTITION BY CAST(ORDERTIME AS DATE),DATEPART(HH,ORDERTIME)) [TOTAL],
CONVERT(varchar, CAST(ORDERTIME AS datetime), 103) [DATE],
DATEPART(HH,ORDERTIME) [HOUR],
'HH:'+CAST(DATEPART(HH,ORDERTIME) AS VARCHAR(3)) [HOURCOL]
INTO #NEWTABLE
FROM ORDERTBL
ORDER BY DATEPART(HH,ORDERTIME)
Now declare two variables: one to build the column list for the pivot and one to replace NULL with zero.
DECLARE @cols NVARCHAR(MAX)
DECLARE @NullToZeroCols NVARCHAR(MAX)

SELECT @cols = COALESCE(@cols + ',[' + [HOURCOL] + ']',
                        '[' + [HOURCOL] + ']')
FROM (SELECT DISTINCT [HOUR],[HOURCOL] FROM #NEWTABLE) PV
ORDER BY [HOUR]

SET @NullToZeroCols = SUBSTRING((SELECT ',ISNULL([' + [HOURCOL] + '],0) AS [' + [HOURCOL] + ']'
                                 FROM (SELECT DISTINCT [HOUR],[HOURCOL] FROM #NEWTABLE GROUP BY [HOUR],[HOURCOL]) TAB
                                 ORDER BY [HOUR] FOR XML PATH('')), 2, 8000)
Now pivot the result
DECLARE @query NVARCHAR(MAX)
SET @query = 'SELECT [DATE],' + @NullToZeroCols + ' FROM
(
    SELECT [HOURCOL],[TOTAL],[DATE] FROM #NEWTABLE
) x
PIVOT
(
    SUM([TOTAL])
    FOR [HOURCOL] IN (' + @cols + ')
) p
;'
EXEC sp_executesql @query
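For a feel of what gets executed: if the data only contained the 9:00 and 13:00 hours, @query would end up looking something like this (the actual column list is whatever @cols and @NullToZeroCols were built from):

SELECT [DATE], ISNULL([HH:9],0) AS [HH:9], ISNULL([HH:13],0) AS [HH:13] FROM
(
    SELECT [HOURCOL],[TOTAL],[DATE] FROM #NEWTABLE
) x
PIVOT
(
    SUM([TOTAL])
    FOR [HOURCOL] IN ([HH:9],[HH:13])
) p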
I have 2 tables, T1 and T2. I want to join these 2 tables and return only 2 rows of data, replacing the integers in Item with their lookup values from T2.
Table T1
Item Date
------ ---------
1;4;5; 3/13/2013
1;2;3; 3/13/2013
Table T2
ID Desc
---- ------
1 Tree
2 Grass
3 Sand
4 Water
5 Bridge
Expected results:
Item Date
------------------ ---------
Tree;Water;Bridge; 3/13/2013
Tree;Grass;Sand; 3/13/2013
First, create a Split function which returns an integer and an order-preserving sequence number. Here is one example:
CREATE FUNCTION dbo.SplitInts
(
    @List      VARCHAR(MAX),
    @Delimiter VARCHAR(32)
)
RETURNS TABLE
AS
RETURN
(
    SELECT rn = ROW_NUMBER() OVER (ORDER BY Number),
           Item = CONVERT(INT, Item)
    FROM (SELECT Number, Item = LTRIM(RTRIM(SUBSTRING(@List, Number,
              CHARINDEX(@Delimiter, @List + @Delimiter, Number) - Number)))
          FROM (SELECT ROW_NUMBER() OVER (ORDER BY [object_id])
                FROM sys.all_objects) AS n(Number)
          WHERE Number <= CONVERT(INT, LEN(@List))
            AND SUBSTRING(@Delimiter + @List, Number, 1) = @Delimiter
         ) AS y
);
GO
Then the following query does what you're after:
DECLARE @t1 TABLE
(
    Item VARCHAR(MAX),
    [Date] DATE -- terrible column name!
);

INSERT @t1 VALUES('1;4;5;','20130313'),('1;2;3;','20130313');
-- please use unambiguous date formats!

DECLARE @t2 TABLE
(
    ID INT, -- another bad column name - what kind of ID?
    [Desc] VARCHAR(255) -- another bad column name, this is a keyword!
);

INSERT @t2 VALUES(1,'Tree'),(2,'Grass'),
    (3,'Sand'),(4,'Water'),(5,'Bridge');
;WITH x AS
(
    SELECT t1.Item, [Date], t2ID = i.Item, i.rn, n = t2.[Desc]
    FROM @t1 AS t1 CROSS APPLY dbo.SplitInts(t1.Item, ';') AS i
    INNER JOIN @t2 AS t2 ON i.Item = t2.ID
)
SELECT DISTINCT Item = (
    SELECT n + ';' FROM x AS x2
    WHERE x.Item = x2.Item
    ORDER BY x2.rn FOR XML PATH(''),
    TYPE).value(N'.', N'varchar(max)'), [Date]
FROM x;
Strongly recommend you research normalization. A semi-colon-separated list is a terrible way to cram together independent values.
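For comparison, a minimal normalized layout (table and column names are just illustrative) would store one row per item and make the lookup a plain join:

CREATE TABLE dbo.T1Header (T1ID INT IDENTITY PRIMARY KEY, [Date] DATE NOT NULL);
CREATE TABLE dbo.T1Item   (T1ID INT NOT NULL REFERENCES dbo.T1Header(T1ID),
                           ItemID INT NOT NULL); -- would reference T2(ID) if T2 is a real table

-- the original question then becomes a simple join, no splitting required
SELECT h.[Date], t2.[Desc]
FROM dbo.T1Header h
JOIN dbo.T1Item i  ON i.T1ID = h.T1ID
JOIN dbo.T2     t2 ON t2.ID = i.ItemID;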
What I currently have is:

COUNT DETAILS:

DTLID   CNT   QTY    UNITPRICE   AMOUNT
1       234   2222   1.20        32
1       12    123    2           21

What I want it to be like:

DTLID   CNT      QTY        UNITPRICE   AMOUNT
1       234,12   2222,123   1.20,2      32 + 21 = 53

I want comma-separated values for the other columns, and a GROUP BY sum for the amount column.
Currently what I'm up to is:
ALTER PROCEDURE [dbo].[sp_Tbl_CountDetail_SelectAll]
-- Add the parameters for the stored procedure here
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
select * from Tbl_CountDetail
inner join tbl_Contract
on
tbl_CountDetail.ContractID = tbl_Contract.ContractID
inner join tbl_Count
on
tbl_CountDetail.CountID = tbl_Count.CountID
where tbl_CountDetail.isDeleted = 0
and tbl_Contract.isdeleted = 0
END
Some friendly sample data
create table #CountDetails
(
DTLID int,
CNT int,
Qty int,
UnitPrice money,
Amount int
)
insert into #CountDetails
SELECT
1, 234, 2222, 1.20, 32
UNION ALL SELECT
1, 12, 123, 2, 21
Here's some code
SELECT
DTLIDs.DTLID,
CNTs =
ISNULL(
STUFF(
(
select ',' +
cast(CD.cnt as varchar(50))
from #CountDetails CD
where CD.DTLID = DTLIDs.DTLID
order by CD.CNT
FOR XML PATH('')
),
1, 1, '' --removes the leading ','
),
''
),
QTYs =
ISNULL(
STUFF(
(
select ',' +
cast(CD.qty as varchar(50))
from #CountDetails CD
where CD.DTLID = DTLIDs.DTLID
order by CD.Qty
FOR XML PATH('')
),
1, 1, '' --removes the leading ','
),
''
),
UnitPrices =
ISNULL(
STUFF(
(
select ',' +
cast(CD.UnitPrice as varchar(50))
from #CountDetails CD
where CD.DTLID = DTLIDs.DTLID
order by CD.UnitPrice
FOR XML PATH('')
),
1, 1, '' --removes the leading ','
),
''
),
AmountSum =
(
select SUM(Amount) from #CountDetails CD
where CD.DTLID = DTLIDs.DTLID
)
from (
select distinct DTLID from #CountDetails
) DTLIDs
There are various ways to tweak this. For example, the "AmountSum =" nested query could be done with a GROUP BY instead; I just like the more consistent look given the way the rest of the query is structured.
For the CSV lists, you didn't specify how you wanted them sorted. I've ordered by the values (e.g. ORDER BY CD.CNT), but you can change that to order by whatever you want. Similarly, there are no spaces between the CSV values; you can tweak this by changing select ',' to select ', ' and adjusting the parameters to the STUFF call (change the second 1 to a 2).
Basically, the FOR XML PATH('') bit takes the mini result set it's given and returns it as text with no XML element tags (due to the ''). This is then tidied up with STUFF to remove the leading , at the start of the XML PATH result.
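A tiny standalone illustration of the STUFF + FOR XML PATH('') pattern, runnable on its own:

-- returns 'a,b,c'
SELECT STUFF(
    (
        SELECT ',' + v
        FROM (VALUES ('a'), ('b'), ('c')) AS t(v)
        FOR XML PATH('')
    ), 1, 1, '');  -- drop the leading comma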
Hope this helps! :)
It can be done with something like this:
DECLARE @Values NVARCHAR(1000), @UnitPrice NVARCHAR(100)

SELECT @Values    = COALESCE(@Values + ',', '') + CAST(COUNTQty AS NVARCHAR(50)),
       @UnitPrice = COALESCE(@UnitPrice + ',', '') + CAST(UnitPrice AS NVARCHAR(50))
FROM tableName

SELECT @Values AS [CountQty], @UnitPrice AS [UnitPrice]
I haven't looked at the GROUP BY issue for the amount column, though.
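For what it's worth, on SQL Server 2017 and later STRING_AGG plus GROUP BY does all of this in one statement; a sketch against the #CountDetails sample table from the earlier answer:

SELECT
    DTLID,
    STRING_AGG(CAST(CNT AS VARCHAR(50)), ',')       AS CNTs,
    STRING_AGG(CAST(Qty AS VARCHAR(50)), ',')       AS QTYs,
    STRING_AGG(CAST(UnitPrice AS VARCHAR(50)), ',') AS UnitPrices,
    SUM(Amount)                                     AS AmountSum
FROM #CountDetails
GROUP BY DTLID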