SSRS Group Summary - Multiple Records in Single Row

I need to reformat an SSRS document to summarize the same Item and Lot Numbers on one line instead of breaking them out on individual lines by the PKG #.
For example:
ITEM1234, LOT1234, PKG #'s 1 - 5, 8, 11, 16
The current format is as such:
ITEM1234 / LOT1234 / PKG1
ITEM1234 / LOT1234 / PKG2
ITEM1234 / LOT1234 / PKG3
ITEM1234 / LOT1234 / PKG4
ITEM1234 / LOT1234 / PKG5
ITEM1234 / LOT1234 / PKG8
ITEM1234 / LOT1234 / PKG11
ITEM1234 / LOT1234 / PKG16
Ideally, we would like to see the item and lot on one line, and a combination of all packages on a single line following, turning this 8-line combo into 2 lines:
ITEM1234 / LOT1234
PKG 1,2,3,4,5,8,11,16
Does anyone have an idea of how we would go about doing this?

You'll want a table that groups by both your ITEM and LOT numbers, with a second row inside the group for the second line of data.
To concatenate the package numbers, you could use the LookupSet function to get the data and the Join function to convert the multiple rows of data to a single string.
="PKG " &
Join(LookupSet(Fields!ITEM.Value & Fields!LOT.Value
, Fields!ITEM.Value & Fields!LOT.Value
, REPLACE(Fields!PACKAGE.Value, "PKG", "")
, "DataSet1"), ", ")
The REPLACE function is used to get rid of the PKG in front of each number.
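For intuition, here is a plain-Python sketch (not SSRS; the rows variable is made-up sample data) of what the LookupSet + Join expression computes: collect every PACKAGE value sharing an ITEM + LOT key, strip the "PKG" prefix, and join the numbers into one string.
from collections import defaultdict

# Made-up rows mirroring the question's data: (ITEM, LOT, PACKAGE).
rows = [("ITEM1234", "LOT1234", "PKG%d" % n) for n in (1, 2, 3, 4, 5, 8, 11, 16)]

packages = defaultdict(list)
for item, lot, pkg in rows:
    packages[(item, lot)].append(pkg.replace("PKG", ""))  # drop the "PKG" prefix

for (item, lot), nums in packages.items():
    print(item + " / " + lot)
    print("PKG " + ", ".join(nums))  # -> PKG 1, 2, 3, 4, 5, 8, 11, 16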

Related

Unable to write the equation correctly: it keeps giving me the answer 0, which should be around 0.4879602393

I am new to C# and I have to implement this equation, but whatever I do, and however I try to break it down, I always end up with 0. I tried using separate double variables for parts of the equation, e.g. (1 / 4) as S1 and the rest as S2, S3, S4, S5, with a Z1 that merges the whole equation together, but the result is still 0, so I guess the equation is just wrong, but I can't see where my mistake is.
double c = 10 * (Math.PI / 180);
double Z2 = (1 / 4) - ((1 / 4) * Math.Sin(((5 / 2) * (Math.PI)) - (8 * c)));
In C#, dividing one integer by another performs integer division, which truncates the result toward zero, so 1 / 4 always returns zero (and 5 / 2 returns 2, not 2.5). Do the division with at least one operand a floating-point value, e.g. 1.0 / 4 and 5.0 / 2:
double Z2 = (1.0 / 4) - ((1.0 / 4) * Math.Sin(((5.0 / 2) * Math.PI) - (8 * c)));

How can I compute (exp(t) - 1)/t in a numerically stable way?

The expression (exp(t) - 1)/t converges to 1 as t tends to 0. However, when computed numerically, we get a different story:
In [19]: (exp(10**(-12)) - 1) * (10**12)
Out[19]: 1.000088900582341
In [20]: (exp(10**(-13)) - 1) * (10**13)
Out[20]: 0.9992007221626409
In [21]: (exp(10**(-14)) - 1) * (10**14)
Out[21]: 0.9992007221626409
In [22]: (exp(10**(-15)) - 1) * (10**15)
Out[22]: 1.1102230246251565
In [23]: (exp(10**(-16)) - 1) * (10**16)
Out[23]: 0.0
Is there some way I can compute this expression without encountering these problems? I've thought of using a power series but I'm wary of implementing this myself as I'm not sure of implementation details like how many terms to use.
If it's relevant, I'm using Python with scipy and numpy.
The discussion in the comments about tiny values is beside the point: if t is so tiny that it causes underflow, the expression has long since become exactly 1. Indeed, the Taylor expansion of (exp(t) - 1)/t is
1 + t/2 + t²/6 + t³/24 + ...
and as soon as t is below about 1 ulp, the floating-point representation of this sum is exactly 1.
Above that, expm1(t)/t will do a good job.
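A minimal Python sketch of that recipe using numpy.expm1 (math.expm1 works too; the helper name expm1_over_t is just illustrative):
import numpy as np

def expm1_over_t(t):
    # Compute (exp(t) - 1) / t stably; the limit at t == 0 is 1.
    t = np.asarray(t, dtype=float)
    safe_t = np.where(t == 0.0, 1.0, t)  # dodge the 0/0 at t == 0
    return np.where(t == 0.0, 1.0, np.expm1(t) / safe_t)

for k in (12, 14, 16):
    t = 10.0 ** -k
    print((np.exp(t) - 1) / t, expm1_over_t(t))  # naive vs. stable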

How to convert using "Table To Structured Grid" in ParaView?

I tried converting a CSV file to a structured grid.
The arrays in the file have sizes X == 16, Y == 16, Z == 24.
Unfortunately, the result was defective: many points were missing, and volume rendering failed.
However, the result of Filters / Alphabetical / Table To Points was fine.
You have to make sure that your data really is a grid (that is, you have the coordinates of each point at the grid nodes), that you select the correct order for X, Y, Z (X should be the one that changes fastest), and that you set the extent correctly (from 0 to dimension - 1).
For example, if your CSV file looks like:
X,Y,Z
0,0,0
1,0,0
0,1,0
1,1,0
0,0,1
1,0,1
0,1,1
1,1,1
you have to set the extent to 0-1, 0-1, 0-1.
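To make the expected row order concrete, here is a small Python sketch (the file name grid.csv and the use of plain indices as coordinates are just for illustration) that writes the question's 16 x 16 x 24 point coordinates in the order Table To Structured Grid expects, with X changing fastest:
import csv
import itertools

nx, ny, nz = 16, 16, 24  # array sizes from the question

with open("grid.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["X", "Y", "Z"])
    # Z is the outermost loop and X the innermost, so X changes fastest.
    for z, y, x in itertools.product(range(nz), range(ny), range(nx)):
        writer.writerow([x, y, z])
The extent in ParaView would then be 0-15, 0-15, 0-23 (0 to dimension - 1 in each direction).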

How can I make it so "The " is skipped while listing by first letter in a Sphinx query?

I need to replicate with Sphinx results previously produced with MySQL.
I have a single index built that holds 3 example fields:
artistname | songname | lyricstext
NOTE: to clarify, the original multimillion-row tables come as MySQL tables (fully emptied and imported anew), so they are imported into MySQL first. The index source is a single query linking these tables together.
Using SphinxQL commands, I need to achieve the following:
match and list by first letter from the fields "artistname" and "songname", ignoring a leading "the " if found.
Following these rules, a listing by the first letter "w" would include (examples):
Whitney Houston
The Who
The results need to be sorted by weight.
A listing by the first letter "B" would yield these results, by weight (see the sketch after this list):
B / Single letter / 100
B-T / Single letter + non-alphabet sign after / 90
B as Blue / Single letter + space after / 80
Baccara / First letter of single word / 70
Bad Religion / First letter of several words / 60
The B / not counting "The " / 50
The B.Y.Z / Single letter + non-alphabet sign after not counting "The " / 40
The B 2 B / Single letter + space after not counting "The " / 30
The Boyzz / First letter of single word not counting "The " / 20
The Blue Boy / First letter of several words not counting "The " / 10
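To make the intended ranking concrete, here is a plain-Python sketch (not Sphinx; the function name and the exact weights are illustrative, taken from the list above):
import re

def listing_weight(title, letter):
    # Ignore a leading "The " and remember whether it was stripped.
    stripped = re.sub(r"(?i)^the\s+", "", title)
    the_penalty = 50 if stripped != title else 0
    if not stripped or stripped[0].upper() != letter.upper():
        return None  # not listed under this letter
    if len(stripped) == 1:
        base = 100   # single letter
    elif not stripped[1].isalnum() and not stripped[1].isspace():
        base = 90    # single letter + non-alphabet sign after
    elif stripped[1].isspace():
        base = 80    # single letter + space after
    elif " " not in stripped:
        base = 70    # first letter of a single word
    else:
        base = 60    # first letter of several words
    return base - the_penalty

titles = ["B", "B-T", "B as Blue", "Baccara", "Bad Religion",
          "The B", "The B.Y.Z", "The B 2 B", "The Boyzz", "The Blue Boy"]
for t in sorted(titles, key=lambda s: -listing_weight(s, "B")):
    print(t, listing_weight(t, "B"))  # prints the list above, weights descending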
===================================================
Full-Text Search can be done on any of the fields
What are our options for controlling the tolerance (the number of possible misplaced or missing letters in a string) in the search?
For example, a search for "beatle" would:
match "The Beatles"
match "beatles"
===================================================
An example search string is "don".
Results by weight:
Don / first position, single word - perfect match / weight 100
Don a tello / first position, several words / weight 90
Mon Don Atel O / non-first position, several words / weight 80
Monade Don / last position, several words / weight 70
I know it looks like a lot, but all of this translates into several queries; I just don't have the level of expertise needed to produce them.
Last but not least: how should I build the index, and with what options, so that I can use the full power of Sphinx with these queries?

How to check a Polynomial Regression result in RapidMiner?

I use RapidMiner and I have a data set which contains 40 lines; each line has 14 columns.
The lines are different kinds of metrics of Android applications, and at the end of each line there is the Google Play ranking (the first line is the header, which contains the names of the metrics).
(So the goal is to predict the Google Play ranking from the metrics.)
The data set: http://pastebin.com/Cw1BR4K6
columns 1-13: different kinds of metrics
column 14: Google Play ranking
lines 2-40: metrics of Android projects
I used PolynomialRegression in RapidMiner and I got this result:
- 6.723 * lloc ^ 1.000
+ 1.187 * nid ^ 2.000
- 47.730 * nle ^ 1.000
- 36.433 * nel ^ 1.000
- 1.466 * nip ^ 2.000
- 97.187 * activites ^ 1.000
- 50.080 * inside-permissions ^ 1.000
- 60.291 * outside-permissions ^ 1.000
- 52.472 * all-permissions ^ 4.000
- 2.309 * jtlloc ^ 1.000
+ 36.058 * jtnm ^ 1.000
+ 9.924 * jtna ^ 1.000
+ 40.504 * jtncl ^ 1.000
+ 9.455
My questions:
How can I check that this result is correct?
How can I check this result against an already available line?
For example, I would like to apply this result to line 25: 25,8,5,10,0,1,0,0,0,239,10,14,4,3.8
My other questions:
What methods can I use to make predictions about this set?
And what are the best methods for doing so? I would ask you to explain, if possible.
Thanks in advance, Peter
The result of the polynomial regression is a trained model. If you want to apply the model to a data set and see the results, use the Apply Model operator. It takes two inputs: the model and the data. The output of this operator is the dataset with one more attribute: the regression result.
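You can also check a single line by hand: plug its attribute values into the printed formula. A Python sketch of that check (an assumption to verify: it takes the 13 metric columns in the same order as the terms above, which you'd have to confirm against your header row):
# (coefficient, attribute index, exponent) - transcribed from the model output.
terms = [(-6.723, 0, 1), (1.187, 1, 2), (-47.730, 2, 1), (-36.433, 3, 1),
         (-1.466, 4, 2), (-97.187, 5, 1), (-50.080, 6, 1), (-60.291, 7, 1),
         (-52.472, 8, 4), (-2.309, 9, 1), (36.058, 10, 1), (9.924, 11, 1),
         (40.504, 12, 1)]
intercept = 9.455

line25 = [25, 8, 5, 10, 0, 1, 0, 0, 0, 239, 10, 14, 4, 3.8]
metrics, actual = line25[:13], line25[13]  # column 14 is the ranking

predicted = intercept + sum(c * metrics[i] ** p for c, i, p in terms)
print(predicted, "vs. actual ranking", actual)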
But evaluating the performance of a model on the same data it was trained on is a very bad idea (overfitting). To evaluate the model correctly, split the data into a training set (used for training the model) and a testing set (used to evaluate performance), or use cross-validation, which is in fact the same thing done multiple times and averaged (in RapidMiner: Edit -> New Building Block -> Numerical X-Validation).
Which regression method to choose is a difficult problem and depends on your specific needs. Is your only criterion the regression error? Do you need human-readable output?
You will surely need to experiment with multiple methods, and I'm not sure you will get conclusive results with such a small dataset.
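The same train-then-test idea outside RapidMiner, as a Python sketch with scikit-learn (an assumption: X and y below are random placeholders standing in for the 39 data lines of 13 metrics and their rankings from the pastebin):
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.random((39, 13))  # placeholder for the 39 data lines x 13 metrics
y = rng.random(39) * 5    # placeholder for the Google Play rankings

# Degree-2 polynomial regression, scored by 5-fold cross-validation
# (never on the training data itself).
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print("MAE per fold:", -scores)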