Indexing issue in SQL Server - sql-server-2008

Hey guys,
I have a query in SQL Server which takes at least 10-15 seconds to execute, and when it is called from ASP.NET it is even worse: it just throws a request timeout error.
Below is the query I am using:
SELECT C.Id,
       C.Summary,
       C.Title,
       C.Author,
       CONVERT(VARCHAR(12), C.PublishDate, 104) AS 'DATE',
       '/Article/' + SUBSTRING(dbo.RemoveSpecialChars(C.Title), 0, 10) + '/'
           + CAST(CA.CategoryId AS VARCHAR(MAX)) + '/'
           + CAST(C.Id AS VARCHAR(MAX)) + '.aspx' AS 'URL'
FROM CrossArticle_Article C
INNER JOIN CrossArticle_ArticleToCategory CA
        ON C.Id = CA.ArticleId
WHERE C.Title LIKE '%' + @KEYWORD + '%'
   OR C.Summary LIKE '%' + @KEYWORD + '%'
   OR C.Article LIKE '%' + @KEYWORD + '%'

SELECT @@ROWCOUNT
Below are the field specifications:
Id int Primary Key
Summary nvarchar(1000)
Title nvarchar(200)
Author nvarchar(200)
PublishDate DateTime
CategoryId int Primary Key
I think this can be resolved by indexing these columns using INCLUDE. I checked over the net, but didn't find any solution.
I would appreciate any help with this.
Thanks and regards,
Abbas Electricwala

Ordinary column indexing most likely cannot help your query, unfortunately. LIKE conditions can only be assisted by indexes when they are in the form of value% (meaning that you can only have a wildcard on the end of the expression; the prefix must be static).
I am assuming that you already have an index on CrossArticle_Article.Id and CrossArticle_ArticleToCategory.ArticleId. If not, you should add those.
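If the index on the junction table is missing, a minimal sketch of what it might look like (the index name is illustrative, and the INCLUDE column is an assumption based on the SELECT list above):
-- Sketch: supporting index for the join in the query above.
-- CrossArticle_Article.Id should already be covered by its primary key.
CREATE NONCLUSTERED INDEX IX_CrossArticle_ArticleToCategory_ArticleId
    ON CrossArticle_ArticleToCategory (ArticleId)
    INCLUDE (CategoryId);  -- covers the CategoryId referenced in the SELECT list
This helps the join, but as noted it will not make the leading-wildcard LIKE conditions seekable.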

Related

How can I use FOR JSON to build JSON in this format?

I'd like to use FOR JSON to build a data payload for an HTTP POST call. My source table can be recreated with this snippet:
drop table if exists #jsonData;
drop table if exists #jsonColumns;
select
'carat' [column]
into #jsonColumns
union
select 'cut' union
select 'color' union
select 'clarity' union
select 'depth' union
select 'table' union
select 'x' union
select 'y' union
select 'z'
select
0.23 carat
,'Ideal' cut
,'E' color
,'SI2' clarity
,61.5 depth
,55.0 [table]
,3.95 x
,3.98 y
,2.43 z
into #jsonData
union
select 0.21,'Premium','E','SI1',59.8,61.0,3.89,3.84,2.31 union
select 0.29,'Premium','I','VS2',62.4,58.0,4.2,4.23,2.63 union
select 0.31,'Good','J','SI2',63.3,58.0,4.34,4.35,2.75
;
The data needs to be formatted as follows:
{
"columns":["carat","cut","color","clarity","depth","table","x","y","z"],
"data":[
[0.23,"Ideal","E","SI2",61.5,55.0,3.95,3.98,2.43],
[0.21,"Premium","E","SI1",59.8,61.0,3.89,3.84,2.31],
[0.23,"Good","E","VS1",56.9,65.0,4.05,4.07,2.31],
[0.29,"Premium","I","VS2",62.4,58.0,4.2,4.23,2.63],
[0.31,"Good","J","SI2",63.3,58.0,4.34,4.35,2.75]
]
}
My attempt thus far is as follows:
select
(select * from #jsonColumns for json path) as [columns],
(select * from #jsonData for json path) as [data]
for json path, without_array_wrapper
However this returns arrays of objects rather than values, like so:
{
"columns":[
{"column":"carat"},
{"column":"clarity"},
{"column":"color"},
{"column":"cut"},
{"column":"depth"},
{"column":"table"},
{"column":"x"},
{"column":"y"},
{"column":"z"}
]...
}
How can I limit the arrays to only showing the values?
Honestly, this seems like it's going to be easier with string aggregation rather than using the JSON functionality.
Because you're using SQL Server 2016, you don't have access to STRING_AGG or CONCAT_WS, so the code is a lot longer. You have to make use of FOR XML PATH and STUFF instead and insert all the separators manually (which is why there are so many ',' values in the CONCAT expression). This results in the below:
DECLARE @CRLF nchar(2) = NCHAR(13) + NCHAR(10);
SELECT N'{' + @CRLF +
N' "columns":[' + STUFF((SELECT ',' + QUOTENAME(c.[name],'"')
FROM tempdb.sys.columns c
JOIN tempdb.sys.tables t ON c.object_id = t.object_id
WHERE t.[name] LIKE N'#jsonData%' -- LIKE isn't needed if not a temporary table; use the literal name.
ORDER BY c.column_id ASC
FOR XML PATH(N''),TYPE).value('.','nvarchar(MAX)'),1,1,N'') + N'],' + @CRLF +
N' "data":[' + @CRLF +
STUFF((SELECT N',' + @CRLF +
N' ' + CONCAT('[',JD.carat,',',QUOTENAME(JD.cut,'"'),',',QUOTENAME(JD.color,'"'),',',QUOTENAME(JD.clarity,'"'),',',JD.depth,',',JD.[table],',',JD.x,',',JD.y,',',JD.z,']')
FROM #jsonData JD
ORDER BY JD.carat ASC
FOR XML PATH(N''),TYPE).value('.','nvarchar(MAX)'),1,3,N'') + @CRLF +
N' ]' + @CRLF +
N'}';
DB<>Fiddle
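For comparison only (it won't help on the asker's SQL Server 2016 instance): from SQL Server 2017 onwards the same output can be built more compactly with STRING_AGG. A minimal sketch against the same #jsonData temp table:
-- Sketch for SQL Server 2017+ using STRING_AGG instead of FOR XML PATH / STUFF.
-- Cast inputs to nvarchar(max) if the aggregated string could exceed 8,000 bytes.
DECLARE @CRLF nchar(2) = NCHAR(13) + NCHAR(10);
SELECT N'{' + @CRLF +
       N'  "columns":[' +
       (SELECT STRING_AGG(QUOTENAME(c.[name], '"'), ',') WITHIN GROUP (ORDER BY c.column_id)
        FROM tempdb.sys.columns c
        JOIN tempdb.sys.tables t ON c.object_id = t.object_id
        WHERE t.[name] LIKE N'#jsonData%') + N'],' + @CRLF +
       N'  "data":[' + @CRLF +
       (SELECT STRING_AGG(CONVERT(nvarchar(max),
                 CONCAT('    [', JD.carat, ',', QUOTENAME(JD.cut,'"'), ',', QUOTENAME(JD.color,'"'), ',',
                        QUOTENAME(JD.clarity,'"'), ',', JD.depth, ',', JD.[table], ',', JD.x, ',', JD.y, ',', JD.z, ']')),
               N',' + @CRLF) WITHIN GROUP (ORDER BY JD.carat)
        FROM #jsonData JD) + @CRLF +
       N'  ]' + @CRLF +
       N'}';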

SSRS Report Parameters passed out

I am currently building a number of logging and analysis tools to keep tabs on our SQL environment. We are currently using SQL Server 2014.
What I want to do is keep track of all the parameters that are passed to our reports during the day. All of the reports currently use stored procedures, so I would like a table (or a SELECT statement based on a table) that outputs the stored procedure call with its parameters for every time a report was run.
At the end of the day I would then like to be able to take the outputted statement and run it in SSMS without having to use the report. I have been looking at the ExecutionLogStorage table and the ExecutionLog views, and though they have most of the information that I need, the parameters are not in an easily usable state.
Has anyone done something similar to what I have described?
You need to add a logging part to your original stored procedure, for example:
Alter procedure a
(@parameter)  -- use the procedure's real parameter list here
As
Begin
..
..
Insert into loggingTable(col)
Values(@parameter)
..
..
End
Then query directly against that loggingTable to get the history of the parameters used.
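This assumes a loggingTable already exists; a minimal sketch of one possible shape (the LogId and LoggedAt columns are illustrative additions, not part of the original answer):
-- Hypothetical logging table for the approach above; column names are illustrative.
CREATE TABLE dbo.loggingTable
(
    LogId    int IDENTITY(1,1) PRIMARY KEY,
    col      nvarchar(4000) NULL,                   -- the parameter value being logged
    LoggedAt datetime NOT NULL DEFAULT (GETDATE())  -- when the report/procedure ran
);

-- End-of-day review of the parameters used today:
SELECT col, LoggedAt
FROM dbo.loggingTable
WHERE LoggedAt >= CAST(GETDATE() AS date)
ORDER BY LoggedAt;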
A Google search around this topic quickly brought up a blog post already identified by the OP as useful; its query is shown below (this query is itself an expansion of work linked to in LONG's answer).
SELECT TOP 1 ParValue
FROM (
SELECT els.TimeEnd
, IIF(CHARINDEX('&' + 'ParameterName' + '=', ParsString) = 0, 'ParameterName',
SUBSTRING(ParsString
, StartIndex
, CHARINDEX('&', ParsString, StartIndex) - StartIndex)) AS ParValue
FROM (SELECT ReportID, TimeEnd
, '&' + CONVERT(VARCHAR(MAX), Parameters) + '&' AS ParsString
, CHARINDEX('&' + 'ParameterName' + '=', '&' + CONVERT(VARCHAR(MAX), Parameters) + '&')
+ LEN('&' + 'ParameterName' + '=') AS StartIndex
FROM ExecutionLogStorage
WHERE UserName='UserName' -- e.g. DOMAIN\Joe_Smith
) AS els
INNER JOIN [Catalog] AS c ON c.ItemID = els.ReportID
WHERE c.Name = 'ReportName'
UNION ALL
SELECT CAST('2000-01-01' AS DateTime), 'ParameterName'
) i
ORDER BY TimeEnd DESC;
Both of these approaches, though, really only give us a starting point, since they (variously) rely upon us knowing the report name and parameter names in advance. Whilst we can quickly make a couple of changes to Ken Bowman's work to get it to run against all executions of all reports, we still have the problem that the query hardcodes the parameter name.
The parameters required to execute a report are stored in the Catalog table's Parameter column. Although the column has the datatype ntext, it actually stores an XML string, which means we can use an XPath query to get at the parameter names:
with
CatalogData as (
select ItemID, [Path], [Name], cast(Parameter as xml) 'ParameterXml'
from Catalog
where [Type] = 2),
ReportParameters as (
select ItemID, [Path], [Name], ParameterXml, p.value('Name[1]', 'nvarchar(256)') 'ParameterName'
from CatalogData
cross apply ParameterXml.nodes('/Parameters/Parameter') as Parameters(p))
select *
from ReportParameters;
Executing this query will list all reports on the server and their parameters. Now we just need to combine this with Ken Bowman's query. I've gone with a CTE approach:
with
CatalogData as (
select ItemID, [Path], [Name], cast(Parameter as xml) 'ParameterXml'
from Catalog
where [Type] = 2),
ReportParameters as (
select ItemID, [Path], [Name], p.value('Name[1]', 'nvarchar(256)') 'ParameterName'
from CatalogData
cross apply ParameterXml.nodes('/Parameters/Parameter') as Parameters(p))
select
els.TimeEnd
, c.[Name]
, rp.ParameterName
, iif(
charindex(
'&' + rp.ParameterName + '=', ParametersString) = 0
, rp.ParameterName, substring(ParametersString
, StartIndex, charindex('&', ParametersString, StartIndex) - StartIndex
)) 'ParameterValue'
from (
select
ReportID
, TimeEnd
, rp.ParameterName
, '&' + convert(varchar(max), Parameters) + '&' 'ParametersString'
, charindex(
'&' + rp.ParameterName + '=',
'&' + convert(varchar(max), Parameters) + '&'
) + len('&' + rp.ParameterName + '=') 'StartIndex'
from
ExecutionLogStorage
inner join ReportParameters rp on rp.ItemID = ReportID) AS els
inner join [Catalog] c on c.ItemID = els.ReportID
inner join ReportParameters rp on rp.ItemID = c.ItemID and rp.ParameterName = els.ParameterName;
Note that the parameter values are passed to the report as part of a URL, so you'll still need to get rid of the literal space encoding and so on. Also, this doesn't (yet...) work for multi-value parameters.
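For the space encoding mentioned above, a few nested REPLACE calls are usually enough. A sketch covering only the most common encodings; the @ParameterValue variable is hypothetical and stands in for the ParameterValue column produced by the query above:
-- Sketch: undo the most common URL encodings in a captured parameter value.
-- This is not a full URL decoder; extend the REPLACE chain with further %xx codes as needed.
DECLARE @ParameterValue nvarchar(max) = N'Northwind+Traders%202014';

SELECT REPLACE(
           REPLACE(
               REPLACE(@ParameterValue, N'+', N' '),  -- '+' encodes a space
           N'%20', N' '),                             -- '%20' also encodes a space
       N'%2F', N'/') AS DecodedValue;                 -- '%2F' encodes '/'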

MySQL issue with NULL values

I have a table with the fields country_code, short_name, currency_unit, a2010, a2011, a2012, a2013, a2014, a2015. The a2010-a2015 fields are of type double.
How do I make a query which orders the results by the average of fields a2010-a2015, keeping in mind that these fields might contain NULL values?
I tried the code below and it did not work (it returns an error saying there is something wrong in the ORDER BY part; the error mentioned column names and GROUP BY). The logic is: ORDER BY ((A)/(B)), where A is the sum of the non-NULL fields and B is the count of the non-NULL fields.
Any ideas?
(If important, the code is going to be used in a BigInsights environment.)
SELECT country_code, short_name, currency_unit, a2010, a2011, a2012,
a2013, a2014, a2015
FROM my_schema.my_table
WHERE Indicator_Code = 'SE.PRM.TENR'
ORDER BY
(
(
Coalesce(a2010,0) + Coalesce(a2011,0) + Coalesce(a2012,0)
+Coalesce(a2013,0) + Coalesce(a2014,0) + Coalesce(a2015,0)
)
/
(
COUNT(Coalesce(a2010)) + COUNT(Coalesce(a2011)) + COUNT(Coalesce(a2012))
+ COUNT(Coalesce(a2013)) + COUNT(Coalesce(a2014)) +
COUNT(Coalesce(a2015))
)
) DESC;
Use MySQL's IFNULL:
IFNULL(expression_1, expression_2)
In your query, wrap the divisor so a NULL result falls back to 1:
IFNULL(
    (
        COUNT(Coalesce(a2010)) + COUNT(Coalesce(a2011)) + COUNT(Coalesce(a2012))
      + COUNT(Coalesce(a2013)) + COUNT(Coalesce(a2014)) + COUNT(Coalesce(a2015))
    ),
    1
)
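For what it's worth, the ORDER BY error comes from using the aggregate COUNT() without a GROUP BY. The OP's stated logic can also be written per row without aggregates, because in MySQL a comparison such as (a2010 IS NOT NULL) evaluates to 1 or 0. A sketch along those lines:
-- Sketch following the OP's stated logic: per-row sum of the non-NULL values divided by
-- the per-row count of non-NULL values, with no aggregate functions (and so no GROUP BY).
-- NULLIF avoids division by zero when all six fields are NULL; those rows sort last under DESC.
SELECT country_code, short_name, currency_unit,
       a2010, a2011, a2012, a2013, a2014, a2015
FROM my_schema.my_table
WHERE Indicator_Code = 'SE.PRM.TENR'
ORDER BY
    (COALESCE(a2010,0) + COALESCE(a2011,0) + COALESCE(a2012,0)
   + COALESCE(a2013,0) + COALESCE(a2014,0) + COALESCE(a2015,0))
    /
    NULLIF((a2010 IS NOT NULL) + (a2011 IS NOT NULL) + (a2012 IS NOT NULL)
         + (a2013 IS NOT NULL) + (a2014 IS NOT NULL) + (a2015 IS NOT NULL), 0)
    DESC;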

Computed Column with relationships

I have a table, MapLocation, which has a column of its own plus two relationships to tables whose fields really need to be displayed as a single concatenated value. I was thinking this was a perfect case for a computed column, but I'm not sure how to go about it.
MapLocation               MaoNo                   Section
_____________________     _____________________   _____________________
MapNoId                   MapNoId                 SectionId
SectionId                 MapNumber (int)         Section (int)
Identifier (nvarchar)
LocationName (nvarchar)

LocationName = "MapNumber - SectionNumber - Identifier"
ex: 20 - 03 - SW4
How would I write that? I haven't done much with computed columns or concatenating in SQL.
Edit:
I need an actual computed column that is automatically updated; I'm looking for the formula. Or is this more of a function/trigger? It's possible; I certainly barely know what I'm doing. The idea is that I don't want to have to make two more server calls and concatenate these values client-side.
You would use something like this to get the value:
select cast(n.MapNumber as nvarchar(10)) + ' - ' -- cast the MapNumber
+ cast(s.SectionId as nvarchar(10)) + ' - ' -- cast the SectionId
+ l.Identifier
from MapLocation l
left join MaoNo n
on l.MapNoId = n.MapNoId
left join Section s
on l.SectionId = s.SectionId
Then if you need to perform an UPDATE:
update l
set l.LocationName = (cast(n.MapNumber as nvarchar(10)) + ' - '
+ cast(s.SectionId as nvarchar(10)) + ' - '
+ l.Identifier)
from MapLocation l
left join MaoNo n
on l.MapNoId = n.MapNoId
left join Section s
on l.SectionId = s.SectionId
Edit #1 - you can use a TRIGGER:
CREATE TRIGGER trig_LocationName
ON MapLocation
AFTER INSERT
AS
Begin
update MapLocation
set LocationName = (cast(n.MapNumber as nvarchar(10)) + ' - '
+ cast(s.SectionId as nvarchar(10)) + ' - '
+ i.Identifier)
from Inserted i
left join MaoNo n
on i.MapNoId = n.MapNoId
left join Section s
on i.SectionId = s.SectionId
where MapLocation.MapNoId = i.MapNoId -- fields here to make sure you update the correct record
End
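Since the OP explicitly asked for a computed-column formula: a computed column can only reference columns of its own row, so pulling MapNumber and Section from the related tables would mean wrapping the lookups in a scalar function, and such a column cannot be persisted or indexed. A rough sketch; the function name, column name, and data types are assumptions rather than anything from the answer above:
-- Sketch: a scalar function to do the lookups, plus a computed column that calls it.
-- Names and types are illustrative.
CREATE FUNCTION dbo.fn_LocationName
(
    @MapNoId    int,
    @SectionId  int,
    @Identifier nvarchar(50)
)
RETURNS nvarchar(200)
AS
BEGIN
    DECLARE @result nvarchar(200);

    SELECT @result =
           CAST((SELECT n.MapNumber FROM MaoNo   n WHERE n.MapNoId   = @MapNoId)   AS nvarchar(10)) + ' - '
         + CAST((SELECT s.Section   FROM Section s WHERE s.SectionId = @SectionId) AS nvarchar(10)) + ' - '
         + @Identifier;

    RETURN @result;
END
GO

-- Attach it as a computed column; it cannot be PERSISTED (or indexed) because the function reads other tables.
ALTER TABLE MapLocation
    ADD LocationNameComputed AS dbo.fn_LocationName(MapNoId, SectionId, Identifier);
In practice the UPDATE/trigger approach shown above is often the more practical route, since a data-accessing scalar function is re-evaluated every time the column is read.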

SQL Server Agent - Can Job Ask Information About Itself

I don't suppose anyone knows whether a SQL Server Agent job can ask for information about itself, such as its own ID, or the path it's running from? I'm aware of xp_sqlagent_enum_jobs and sp_help_job, but these don't help, because you have to specify the job ID.
The idea is that we want code that we don't have to manage by being able to call a sproc which will identify the current job. Any ideas?
Yes, but it isn't pretty.
Look at sys.sysprocesses (or dbo.sysprocesses in SQL 2000 and below). The program name will be SQL Agent something, with a binary value at the end. That binary value is the binary value of the GUID of the job. So, substring out that value and do a lookup against the msdb.dbo.sysjobs table to find out which job it is (you'll need to cast sysjobs.job_id to varbinary(100) to get the values to match).
I told you it wasn't pretty, but it will work.
Nasty!!! But I think it might work...
E.g. used within a job: select * from msdb..sysjobs where job_id = dbo.fn_CurrentJobId()
Let me know.
create function dbo.fn_CurrentJobId()
returns uniqueidentifier
as
begin
    declare @jobId uniqueidentifier

    -- The Agent session's program_name embeds the job GUID as a hex string;
    -- reassemble its byte-swapped segments into the canonical GUID layout
    -- and match it against msdb..sysjobs.
    select @jobId = j.job_id
    from master..sysprocesses s (nolock)
    join msdb..sysjobs j (nolock)
      on (j.job_id = SUBSTRING(s.program_name,38,2) + SUBSTRING(s.program_name,36,2) + SUBSTRING(s.program_name,34,2) + SUBSTRING(s.program_name,32,2)
                   + '-' + SUBSTRING(s.program_name,42,2) + SUBSTRING(s.program_name,40,2)
                   + '-' + SUBSTRING(s.program_name,46,2) + SUBSTRING(s.program_name,44,2)
                   + '-' + SUBSTRING(s.program_name,48,4)
                   + '-' + SUBSTRING(s.program_name,52,12))
    where s.spid = @@spid

    return @jobId
end
go
Thanks for the info though.