Alternative To Dynamic Query in SSRS - reporting-services

Need help, guys. I have a stored procedure that displays all of the records:
SELECT
Entity.Name as [ENTITY],
Product.Name AS [Product Name],
convert(date, whseintg.TrnDate) as TrnDate,
DOGR.AppNo,
DOGR.TrnNo,
DOGR.TrnType,
DOGR.StkId,
DOGR_D.ProdId,
DOGR_D.Qty,
DOGR_D.QtyIn,
DOGR_D.UPrice,
Ratio.Ratio
FROM Entity WITH ( NOLOCK ),
Product WITH ( NOLOCK ),
DOGR WITH ( NOLOCK ),
DOGR_D WITH ( NOLOCK ),
Ratio WITH ( NOLOCK ),
whseintg WITH (Nolock)
WHERE ( DOGR_D.ProdId = Product.ProdId ) and
( DOGR.TrnType = DOGR_D.TrnType ) and
( DOGR.AppNo = DOGR_D.AppNo ) and
( DOGR_D.RatioId = Ratio.Ratioid ) and
( DOGR.TrnType = whseintg.TrnType ) and
( DOGR.Appno = whseintg.TrnNo ) and
( DOGR.TrnNo is not null ) and
( ( dbo.DOGR.TrnType = 'SCR' ) ) and
( dbo.DOGR.LocID = dbo.Entity.LocID)
Now, I have certain parameters like @FromProductName and @ToProductName in the design view of the report.
I don't want to use dynamic queries because they will have a performance impact on the application. What I want is that if a value is passed in both parameters, the query would be something like this:
SELECT
Entity.Name as [ENTITY],
Product.Name AS [Product Name],
convert(date, whseintg.TrnDate) as TrnDate,
DOGR.AppNo,
DOGR.TrnNo,
DOGR.TrnType,
DOGR.StkId,
DOGR_D.ProdId,
DOGR_D.Qty,
DOGR_D.QtyIn,
DOGR_D.UPrice,
Ratio.Ratio
FROM Entity WITH ( NOLOCK ),
Product WITH ( NOLOCK ),
DOGR WITH ( NOLOCK ),
DOGR_D WITH ( NOLOCK ),
Ratio WITH ( NOLOCK ),
whseintg WITH (Nolock)
WHERE ( DOGR_D.ProdId = Product.ProdId ) and
( DOGR.TrnType = DOGR_D.TrnType ) and
( DOGR.AppNo = DOGR_D.AppNo ) and
( DOGR_D.RatioId = Ratio.Ratioid ) and
( DOGR.TrnType = whseintg.TrnType ) and
( DOGR.Appno = whseintg.TrnNo ) and
( DOGR.TrnNo is not null ) and
( ( dbo.DOGR.TrnType = 'SCR' ) ) and
( dbo.DOGR.LocID = dbo.Entity.LocID)
and (DOGR_D.ProdId between @FromProdID and @ToProdID)
Otherwise, it should behave like the original query. Is that possible?

You could try rewriting your final condition:
and (DOGR_D.ProdId between @FromProdID and @ToProdID)
as
and DOGR_D.ProdId >= coalesce(@FromProdID, DOGR_D.ProdId)
and DOGR_D.ProdId <= coalesce(@ToProdID, DOGR_D.ProdId)
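As a quick, standalone T-SQL sketch of why this works (the table variable, values, and int type here are made up; only the pattern matches the question):
-- Hypothetical demo data; @FromProdID/@ToProdID mirror the report parameters.
DECLARE @FromProdID int = NULL, @ToProdID int = NULL;   -- no range supplied
DECLARE @demo TABLE (ProdId int);
INSERT INTO @demo VALUES (1), (5), (10);

SELECT ProdId
FROM @demo
WHERE ProdId >= COALESCE(@FromProdID, ProdId)
  AND ProdId <= COALESCE(@ToProdID, ProdId);
-- Both parameters NULL           -> all three rows are returned.
-- @FromProdID = 2, @ToProdID = 7 -> only ProdId 5 is returned.
-- Caveat: rows whose ProdId is itself NULL are filtered out by this pattern.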

You can try using a CASE expression in the WHERE clause so the range is only applied when both parameters are not null:
AND (
    CASE
        WHEN @FromProductID IS NULL OR @ToProductID IS NULL THEN 1
        WHEN DOGR_D.ProdId BETWEEN @FromProductID AND @ToProductID THEN 1
        ELSE 0
    END = 1
)
Use the above condition instead of
and (DOGR_D.ProdId between @FromProdID and @ToProdID)

Related

How to retrieve 500k records DB data faster?

I have two tables: T1 with 1,000 records and T2 with 500,000 records. I have a query that joins them and fetches data while performing some aggregations. My page seems to be loading slowly. Are there any approaches to make this query faster?
I have created indexes on the columns used in the aggregations, although I know that is a fairly generic statement.
$query = Mymodel::selectRaw("supplier_data.name as distributor,supplier_data.name as name, supplier_data.group_id as group_id, supplier_data.pay,supplier_data.group_id as submitted_group_plan,supplier_data.group_id as group_id_string,
(SELECT sum(t.net_claim) AS trans_number
FROM transactions_data_new as t
JOIN `supplier_data` AS d ON `t`.`member_id` = `d`.`group_id`
WHERE
(
(
t.`submit_date`>= '$date_from' and t.`submit_date`<= '$date_to'
AND t.`member_id` = supplier_data.group_id
)
OR
(
(t.claim_status IS NULL)
AND
(t.submit_date is NULL)
)
)
AND d.id = supplier_data.id
) as trans_number,
(SELECT sum(t.claim) AS trans_number
FROM transactions_data_new as t
JOIN `supplier_data` AS d ON `t`.`member_id` = `d`.`group_id`
WHERE
(
(
t.`submit_date`>= '$date_from' and t.`submit_date`<= '$date_to'
AND t.`member_id` = supplier_data.group_id
)
OR
(
(t.claim_status IS NULL)
AND
(t.submit_date is NULL)
)
)
AND d.id = supplier_data.id
) as claim,
(SELECT sum(t.reversed) AS trans_number
FROM transactions_data_new as t
JOIN `supplier_data` AS d ON `t`.`member_id` = `d`.`group_id`
WHERE
(
(
t.`submit_date`>= '$date_from' and t.`submit_date`<= '$date_to'
AND t.`member_id` = supplier_data.group_id
)
OR
(
(t.claim_status IS NULL)
AND
(t.submit_date is NULL)
)
)
AND d.id = supplier_data.id
) as reversed,
(SELECT sum(t.reversal) AS trans_number
FROM transactions_data_new as t
JOIN `supplier_data` AS d ON `t`.`member_id` = `d`.`group_id`
WHERE
(
(
t.`submit_date`>= '$date_from' and t.`submit_date`<= '$date_to'
AND t.`member_id` = supplier_data.group_id
)
OR
(
(t.claim_status IS NULL)
AND
(t.submit_date is NULL)
)
)
AND d.id = supplier_data.id
) as reversal
");
I don't see the need for such a complex query, with the same clauses repeated across multiple sub-selects on the same table; it can be done with a single LEFT JOIN:
SELECT
s.name AS distributor,
s.name AS name,
s.group_id AS group_id,
s.pay,
s.group_id AS submitted_group_plan,
s.group_id AS group_id_string,
SUM(t.net_claim) AS trans_number,
SUM(t.claim) AS claim,
SUM(t.reversed) reversed,
SUM(t.reversal) reversal
FROM
supplier_data s
LEFT JOIN transactions_data_new t
ON `t`.`member_id` = s.`group_id`
AND (
(
t.`submit_date` >= '$date_from'
AND t.`submit_date` <= '$date_to'
)
OR (
t.claim_status IS NULL
AND t.submit_date IS NULL
)
)
GROUP BY s.name,
s.group_id,
s.pay
As I understand it, the chunk() method is for use when you need to work with a large dataset and take an action on that data chunk by chunk.
From your question, it sounds like you're performing a query and then returning the data as JSON, so to me it doesn't sound like you're taking an action on your dataset that requires chunking.
If you want to break up the returned JSON data, you should instead be looking at pagination.
You could apply pagination to your query like so:
$data = Inspector::latest('id')
->select('id', 'firstname', 'status', 'state', 'phone')
->where('firstname', 'LIKE', '%' . $searchtext . '%')
->paginate();
You can specify the size of each set by passing a number to the paginate method:
$data = Inspector::latest('id')
->select('id', 'firstname', 'status', 'state', 'phone')
->where('firstname', 'LIKE', '%' . $searchtext . '%')
->paginate(25);
If I've misunderstood and you did actually want to do the chunking, I believe you could do the following:
$data = Inspector::latest('id')
->select('id', 'firstname', 'status', 'state', 'phone')
->where('firstname', 'LIKE', '%' . $searchtext . '%')
->chunk(50, function($inspectors) {
foreach ($inspectors as $inspector) {
// apply some action to the chunked results here
}
});
Also, if you're returning an Eloquent object, it will be automatically cast to JSON, so you don't need to call json_encode(), as far as I'm aware.

MySQL group by kills the query performance

I have a MySQL query that currently selects from and joins 13 tables and finally groups ~60k rows. The query without grouping takes ~0 ms, but with grouping the query time increases to ~1.7 s. The field used for grouping is the primary key and is indexed. Where could the issue be?
I know GROUP BY without an aggregate is considered an invalid query and bad practice, but I need distinct base-table rows and cannot use the DISTINCT syntax.
The query itself looks like this:
SELECT `table_a`.*
FROM `table_a`
LEFT JOIN `table_b`
ON `table_b`.`invoice` = `table_a`.`id`
LEFT JOIN `table_c` AS `r1`
ON `r1`.`invoice_1` = `table_a`.`id`
LEFT JOIN `table_c` AS `r2`
ON `r2`.`invoice_2` = `table_a`.`id`
LEFT JOIN `table_a` AS `i1`
ON `i1`.`id` = `r1`.`invoice_2`
LEFT JOIN `table_a` AS `i2`
ON `i2`.`id` = `r2`.`invoice_1`
JOIN `table_d` AS `_u0`
ON `_u0`.`id` = 1
LEFT JOIN `table_e` AS `_ug0`
ON `_ug0`.`user` = `_u0`.`id`
JOIN `table_f` AS `_p0`
ON ( `_p0`.`enabled` = 1
AND ( ( `_p0`.`role` < 2
AND `_p0`.`who` IS NULL )
OR ( `_p0`.`role` = 2
AND ( `_p0`.`who` = '0'
OR `_p0`.`who` = `_u0`.`id` ) )
OR ( `_p0`.`role` = 3
AND ( `_p0`.`who` = '0'
OR `_p0`.`who` = `_ug0`.`group` ) ) ) )
AND ( `_p0`.`action` = '*'
OR `_p0`.`action` = 'read' )
AND ( `_p0`.`related_table` = '*'
OR `_p0`.`related_table` = 'table_name' )
JOIN `table_a` AS `_e0`
ON ( ( `_p0`.`related_id` = 0
OR `_p0`.`related_id` = `_e0`.`id`
OR `_p0`.`related_user` = `_e0`.`user`
OR `_p0`.`related_group` = `_e0`.`group` )
OR ( `_p0`.`role` = 0
AND `_e0`.`user` = `_u0`.`id` )
OR ( `_p0`.`role` = 1
AND `_e0`.`group` = `_ug0`.`group` ) )
AND `_e0`.`id` = `table_a`.`id`
JOIN `table_d` AS `_u1`
ON `_u1`.`id` = 1
LEFT JOIN `table_e` AS `_ug1`
ON `_ug1`.`user` = `_u1`.`id`
JOIN `table_f` AS `_p1`
ON ( `_p1`.`enabled` = 1
AND ( ( `_p1`.`role` < 2
AND `_p1`.`who` IS NULL )
OR ( `_p1`.`role` = 2
AND ( `_p1`.`who` = '0'
OR `_p1`.`who` = `_u1`.`id` ) )
OR ( `_p1`.`role` = 3
AND ( `_p1`.`who` = '0'
OR `_p1`.`who` = `_ug1`.`group` ) ) ) )
AND ( `_p1`.`action` = '*'
OR `_p1`.`action` = 'read' )
AND ( `_p1`.`related_table` = '*'
OR `_p1`.`related_table` = 'table_name' )
JOIN `table_g` AS `_e1`
ON ( ( `_p1`.`related_id` = 0
OR `_p1`.`related_id` = `_e1`.`id`
OR `_p1`.`related_user` = `_e1`.`user`
OR `_p1`.`related_group` = `_e1`.`group` )
OR ( `_p1`.`role` = 0
AND `_e1`.`user` = `_u1`.`id` )
OR ( `_p1`.`role` = 1
AND `_e1`.`group` = `_ug1`.`group` ) )
AND `_e1`.`id` = `table_a`.`company`
WHERE `table_a`.`date_deleted` IS NULL
AND `table_a`.`company` = 4
AND `table_a`.`type` = 1
AND `table_a`.`date_composed` >= '2016-05-04 14:43:55'
GROUP BY `table_a`.`id`
The ORs kill performance.
This composite index may help: INDEX(company, type, date_deleted, date_composed).
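Expressed as MySQL DDL, that suggestion would look roughly like this (the index name is arbitrary):
ALTER TABLE table_a
    ADD INDEX idx_company_type_deleted_composed (company, type, date_deleted, date_composed);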
LEFT JOIN table_b ON table_b.invoice = table_a.id seems to do absolutely nothing other than slow down the processing. No fields of table_b are used or SELECTed. Since it is a LEFT join, it does not limit the output. Etc. Get rid of it, or justify it.
Ditto for other joins.
What happens with JOIN and GROUP BY: First, all the joins are performed; this explodes the number of rows in the intermediate 'table'. Then the GROUP BY implodes the set of rows.
One technique for avoiding this explode-implode sluggishness is to do
SELECT ...,
( SELECT ... ) AS ...,
...
instead of a JOIN or LEFT JOIN. However, that works only if there is zero or one row in the subquery. Usually this is beneficial when an aggregate (such as SUM) can be moved into the subquery.
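A minimal sketch of that technique, using hypothetical tables (orders, order_items) rather than the ones from the question:
-- JOIN + GROUP BY: the join first multiplies the rows, then GROUP BY collapses them.
SELECT o.id, SUM(i.amount) AS total
FROM orders AS o
LEFT JOIN order_items AS i ON i.order_id = o.id
GROUP BY o.id;

-- Same result with the aggregate moved into a correlated subquery:
-- each orders row is read once and no GROUP BY is needed.
SELECT o.id,
       ( SELECT SUM(i.amount)
         FROM order_items AS i
         WHERE i.order_id = o.id ) AS total
FROM orders AS o;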
For further discussion, please include SHOW CREATE TABLE.

How to combine dynamic fields and DB::raw with Laravel's fluent query builder?

This is my desired SQL:
SELECT
p.id , p.title , p.msrp , i.files
, p.cost
FROM
`products` p
LEFT JOIN (
SELECT
GROUP_CONCAT( `images`.`filename` SEPARATOR ',' ) AS files , `product_id`
FROM
`images`
GROUP BY
`product_id`
) i ON i.product_id = p.id
Here is my query using Laravel's fluent query builder:
$products = DB::table( 'products as p' )
->where( 'p.active' , '=' , 1 )
->select( $fields )
->get();
The $fields variable is an array of fields based on whether the user is logged in or not:
$fields = array( 'p.id' , 'p.title' , 'p.msrp' , 'i.files' ) ;
if( Auth::check() ) array_push( $fields , 'p.cost' ) ;
I am trying the LEFT JOIN using DB::raw:
->leftJoin(DB::raw( "( SELECT GROUP_CONCAT( images.filename SEPARATOR ',' ) , product_id FROM images GROUP BY product_id ) i" , "i.product_id", "=" , "p.id" )
For some reason this is not working... am I doing something wrong? I need to use DB::raw so that I can GROUP_CONCAT multiple rows in the images table into a delimited string.
On a tangent... I think sub-query selects are not supported, so if anyone has a better option, please let me know.

Query extremely slow

I am running the query below, but it's extremely slow. Does anyone have any advice on how I can optimize it to improve performance?
The main user table only has 2,700 rows.
The query is:
SELECT
(
SELECT
core_org_chart.translation
FROM
core_org_chart
WHERE
(
core_org_chart.id_dir = t1.idOrg
)
) AS region,
(
SELECT
core_org_chart.translation
FROM
core_org_chart
WHERE
(
core_org_chart.id_dir = t2.idOrg
)
) AS level1,
(
SELECT
core_org_chart.translation
FROM
core_org_chart
WHERE
(
core_org_chart.id_dir = t3.idOrg
)
) AS level2,
(
SELECT
core_org_chart.translation
FROM
core_org_chart
WHERE
(
core_org_chart.id_dir = t4.idOrg
)
) AS level3,
core_user.firstname AS firstname,
core_user.lastname AS lastname,
core_user.email AS email,
core_user.register_date AS register_date,
core_user.lastenter AS lastenter,
(
SELECT
core_field_son.translation
FROM
(
core_field_son
JOIN core_field_userentry ON (
(
core_field_userentry.user_entry = core_field_son.idSon
)
)
)
WHERE
(
(
core_field_userentry.id_user = core_user.idst
)
AND (
core_field_userentry.id_common = 4
)
)
) AS Gender,
(
SELECT
core_field_son.translation
FROM
(
core_field_son
JOIN core_field_userentry ON (
(
core_field_userentry.user_entry = core_field_son.idSon
)
)
)
WHERE
(
(
core_field_userentry.id_user = core_user.idst
)
AND (
core_field_userentry.id_common = 6
)
)
) AS Race,
IF (
(core_user.valid = 1),
'Active',
'Suspended'
) AS UserStatus,
(
SELECT
jet_designations.designation
FROM
jet_designations
WHERE
(
jet_designations.id = core_user.designation
)
) AS UserDesignation,
(
SELECT
jet_designations.designation
FROM
jet_designations
WHERE
(
jet_designations.id = core_user.reports_to
)
) AS Manager
FROM
(
(
(
(
(
core_org_chart_tree t1
LEFT JOIN core_org_chart_tree t2 ON (
(
t2.idParent = t1.idOrg
)
)
)
LEFT JOIN core_org_chart_tree t3 ON (
(
t3.idParent = t2.idOrg
)
)
)
LEFT JOIN core_org_chart_tree t4 ON (
(
t4.idParent = t3.idOrg
)
)
)
JOIN core_group_members ON (
(
core_group_members.idst =
IF (
isnull(t2.idOrg),
t1.idst_ocd,
IF (
isnull(t3.idOrg),
t2.idst_ocd,
IF (
isnull(t4.idOrg),
t3.idst_ocd,
t4.idst_ocd
)
)
)
)
)
)
JOIN core_user ON (
(
core_user.idst = core_group_members.idstMember
)
)
)
WHERE
(t1.lev = 1)
You are using multiple queries in one; rewriting those correlated subqueries as joins will improve performance.
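One way that could look, for just the org-chart columns, is something like the following (assuming id_dir is unique in core_org_chart, as the original scalar subqueries imply; the rest of the original query's joins and columns would be added back in the same way):
SELECT
    oc1.translation AS region,
    oc2.translation AS level1,
    oc3.translation AS level2,
    oc4.translation AS level3
FROM core_org_chart_tree t1
LEFT JOIN core_org_chart_tree t2 ON t2.idParent = t1.idOrg
LEFT JOIN core_org_chart_tree t3 ON t3.idParent = t2.idOrg
LEFT JOIN core_org_chart_tree t4 ON t4.idParent = t3.idOrg
LEFT JOIN core_org_chart oc1 ON oc1.id_dir = t1.idOrg
LEFT JOIN core_org_chart oc2 ON oc2.id_dir = t2.idOrg
LEFT JOIN core_org_chart oc3 ON oc3.id_dir = t3.idOrg
LEFT JOIN core_org_chart oc4 ON oc4.id_dir = t4.idOrg
WHERE t1.lev = 1;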

optimizing a union join inside select statement of other joins

I have a query I built in 3-4 parts. It takes over 140 seconds to run once I add the union join alongside the other joins. How can I change the union join so that it executes faster?
SELECT
testing.CLIENTID,
testing.COMPANY,
testing.CONTACT,
testing.CONTACTID,
`orders`.`ORDERNO` AS `ORDERNO`,
`orders`.`BIDNO` AS `BIDNO`,
`projects`.`PROJID` AS `PROJID`,
`projects`.`PROJCODE` AS `PROJCODE`,
`projects`.`StartDate` AS `StartDate`,
`category`.`type` AS `CATEGORY`,
`projects`.`country` AS `COUNTRY`,
`projects`.`VALUE` AS `VALUE`,
`projects`.`PROCESSOR` AS `PROCESSOR`,
`projects`.`NES` AS `NES`,
`projects`.`SPECSALE` AS `SPECSALE`,
`projects`.`OFFICE` AS `OFFICE`,
`projects`.`LORM` AS `LORM`,
`lookupcountry`.`REGION` AS `REGION`
FROM
(
(
(
(
(
(
SELECT
contactmerge.CLIENTID,
contactmerge.CONTACT,
contactmerge.CONTACTID,
accountmerge.COMPANY
FROM
(
SELECT
`hdb`.`contacts`.`CONTACTID` AS `CONTACTID`,
`hdb`.`contacts`.`CLIENTID` AS `CLIENTID`,
concat(
`hdb`.`contacts`.`FIRSTNAME`,
" ",
`hdb`.`contacts`.`LASTNAME`
) AS CONTACT,
_utf8 'paradox' AS `SOURCEDATABASE`
FROM
`hdb`.`contacts`
UNION
SELECT
`sugarcrm`.`contacts`.`id` AS `CONTACTID`,
`sugarcrm`.`accounts_contacts`.`account_id` AS `CLIENTID`,
concat(
`sugarcrm`.`contacts`.`first_name`,
" ",
`sugarcrm`.`contacts`.`last_name`
) AS CONTACT,
_utf8 'sugar' AS `SOURCEDATABASE`
FROM
(
(
(
(
`sugarcrm`.`contacts`
LEFT JOIN `sugarcrm`.`email_addr_bean_rel` ON (
(
(
`sugarcrm`.`contacts`.`id` = `sugarcrm`.`email_addr_bean_rel`.`bean_id`
)
AND (
(
`sugarcrm`.`email_addr_bean_rel`.`primary_address` = 1
)
OR (
(
`sugarcrm`.`email_addr_bean_rel`.`primary_address` IS NOT NULL
)
AND (
`sugarcrm`.`email_addr_bean_rel`.`primary_address` <> 0
)
)
)
)
)
)
LEFT JOIN `sugarcrm`.`accounts_contacts` ON (
(
`sugarcrm`.`contacts`.`id` = `sugarcrm`.`accounts_contacts`.`contact_id`
)
)
)
JOIN `sugarcrm`.`email_addresses` ON (
(
`sugarcrm`.`email_addr_bean_rel`.`email_address_id` = `sugarcrm`.`email_addresses`.`id`
)
)
)
LEFT JOIN `sugarcrm`.`accounts` ON (
(
`sugarcrm`.`accounts`.`id` = `sugarcrm`.`accounts_contacts`.`account_id`
)
)
)
) AS contactmerge
LEFT JOIN (
SELECT
CLIENTID,
`hdb`.`clients`.`COMPANY` AS `COMPANY`
FROM
`hdb`.`clients`
UNION
SELECT
id AS CLIENTID,
`sugarcrm`.`accounts`.`name` AS `COMPANY`
FROM
`sugarcrm`.`accounts`
) AS accountmerge ON contactmerge.CLIENTID = accountmerge.CLIENTID
) AS testing
)
JOIN `orders` ON (
(
`testing`.`CONTACTID` = `orders`.`CONTACTID`
)
)
)
JOIN `projects` ON (
(
`orders`.`ORDERNO` = `projects`.`ORDERNO`
)
)
)
JOIN `category` ON (
(
`category`.`category_id` = `projects`.`category_id`
)
)
)
LEFT JOIN `lookupcountry` ON (
(
CONVERT (
`lookupcountry`.`COUNTRY` USING utf8
) = CONVERT (
`projects`.`country` USING utf8
)
)
)
)
ORDER BY
`testing`.`COMPANY`,
`projects`.`StartDate`
The table alias called testing is the part that takes long to execute. I then need to turn this into a view.
Here is the original query, without the sugarcrm join:
SELECT
`clients`.`CORPORATE` AS `CORPORATE`,
`clients`.`COMPANY` AS `COMPANY`,
`clients`.`CLIENTID` AS `CLIENTID`,
`contacts`.`CONTACTID` AS `CONTACTID`,
concat(
`contacts`.`LASTNAME`,
`contacts`.`FIRSTNAME`,
`contacts`.`INITIALS`
) AS `Contact`,
`orders`.`ORDERNO` AS `ORDERNO`,
`orders`.`BIDNO` AS `BIDNO`,
`projects`.`PROJID` AS `PROJID`,
`projects`.`PROJCODE` AS `PROJCODE`,
`projects`.`StartDate` AS `StartDate`,
`category`.`type` AS `CATEGORY`,
`projects`.`country` AS `COUNTRY`,
`projects`.`VALUE` AS `VALUE`,
`projects`.`PROCESSOR` AS `PROCESSOR`,
`projects`.`NES` AS `NES`,
`projects`.`SPECSALE` AS `SPECSALE`,
`projects`.`OFFICE` AS `OFFICE`,
`projects`.`LORM` AS `LORM`,
`lookupcountry`.`REGION` AS `REGION`
FROM
(
(
(
(
(
`clients`
JOIN `contacts` ON (
(
`clients`.`CLIENTID` = `contacts`.`CLIENTID`
)
)
)
JOIN `orders` ON (
(
`contacts`.`CONTACTID` = `orders`.`CONTACTID`
)
)
)
JOIN `projects` ON (
(
`orders`.`ORDERNO` = `projects`.`ORDERNO`
)
)
)
JOIN `category` ON (
(
`category`.`category_id` = `projects`.`category_id`
)
)
)
LEFT JOIN `lookupcountry` ON (
(
CONVERT (
`lookupcountry`.`COUNTRY` USING utf8
) = CONVERT (
`projects`.`country` USING utf8
)
)
)
)
ORDER BY
`clients`.`CORPORATE`,
`clients`.`COMPANY`,
`contacts`.`LASTNAME`,
`projects`.`StartDate`
Your LEFT JOIN from sugarcrm.contacts to sugarcrm.email_addr_bean_rel
ON id = bean_id is OK, but your test for primary_address = 1
OR ( primary_address IS NOT NULL AND primary_address <> 0 ) is wasteful.
NOT NULL just means it has a value. The first qualifier of 1 is fine, but then
you also accept any value not equal to 0 (so 1 qualifies, but so does 2, 3, 400, 1809, or
any other number). So why not just take the simplified version below?
SELECT
O.ORDERNO,
O.BIDNO,
CASE when c.ContactID IS NULL
then sc.id
ELSE c.contactid END as ContactID,
CASE when c.ContactID IS NULL
then sac.account_id
ELSE c.clientid END as ClientID,
CASE when c.ContactID IS NULL
then concat( sc.first_name, " ", sc.last_name )
ELSE concat( c.FIRSTNAME, " ", c.LASTNAME ) END as Contact,
CASE when c.ContactID IS NULL
then sCli.`name`
ELSE cCli.Company END as Company,
CASE when c.ContactID IS NULL
then _utf8 'sugar'
ELSE _utf8 'paradox' END as SOURCEDATABASE,
P.PROJID,
P.PROJCODE,
P.StartDate,
Cat.`type` AS CATEGORY,
P.`country` AS COUNTRY,
P.`VALUE` AS `VALUE`,
P.PROCESSOR,
P.NES,
P.SPECSALE,
P.OFFICE,
P.LORM,
LC.REGION
FROM
orders O
JOIN projects P
ON O.ORDERNO = P.ORDERNO
JOIN category Cat
ON P.category_id = Cat.category_id
LEFT JOIN lookupcountry LC
ON CONVERT( P.`country` USING utf8 ) = CONVERT( LC.COUNTRY USING utf8 )
LEFT JOIN hdb.contacts c
ON O.ContactID = c.ClientID
LEFT JOIN hdb.clients cCli
ON c.ClientID = cCli.ClientID
LEFT JOIN sugarcrm.contacts sc
ON O.ContactID = sc.id
LEFT JOIN sugarcrm.accounts sCli
ON sc.id = sCli.id
LEFT JOIN sugarcrm.accounts_contacts sac
ON sc.id = sac.contact_id
LEFT JOIN sugarcrm.accounts Acc
ON sac.account_id = Acc.id
LEFT JOIN sugarcrm.email_addr_bean_rel EABR
ON sc.id = EABR.bean_id
AND EABR.primary_address IS NOT NULL
LEFT JOIN sugarcrm.email_addresses EA
ON EABR.email_address_id = EA.id
ORDER BY
CASE when c.ContactID IS NULL
then sCli.`name`
ELSE cCli.Company END,
P.StartDate
I don't mind helping, but from now on you should take a look at what I'm doing... Establish the relationships... Start with the basis of your data (orders) and look at ONE PATH for how to connect to your "contacts" table... Write those joins (as left joins). THEN write your paths to the SUGAR account contacts and write THOSE joins (also left joins). Don't try to pre-query all possible contacts; use the CASE/WHEN to determine which value to take based on whether one route is null or not, just as I have done with the contact, client, company, etc. You will get the data from one path or the other... just keep it consistent.