Simplifying lengthy result after integration with Maple

I want to integrate the following expression in Maple:
f := int(exp(-a*r^2)*exp(-b*r^2)*exp(-(r - R)^2*t^2)*r^2, r = 0 .. infinity):
simplify(f, assume = positive);
The integration produces a very lengthy expression (shown in the image below), which is too awkward and unwieldy for further integrations in Maple.
[image: result of the integration]
How can I further simplify the result, or obtain a simpler closed form (ideally one without the error function)?
Thanks for the help!
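For what it's worth, combining the exponents and completing the square by hand (a standard manipulation, sketched here rather than done in Maple) shows why the error function is hard to avoid. With c = a + b + t^2 and mu = R t^2 / c:

```latex
% Combine the three exponentials into one shifted Gaussian,
% with c = a + b + t^2 and \mu = R t^2 / c:
e^{-a r^2}\, e^{-b r^2}\, e^{-t^2 (r-R)^2}
  = \exp\!\Big(-\tfrac{(a+b)\,R^2 t^2}{c}\Big)\, e^{-c (r-\mu)^2}
% The remaining half-line second moment necessarily carries an erf term:
\int_0^\infty r^2 e^{-c (r-\mu)^2}\,dr
  = \frac{\mu\, e^{-c\mu^2}}{2c}
  + \frac{\sqrt{\pi}\,\big(2c\mu^2 + 1\big)}{4\,c^{3/2}}
    \Big(1 + \operatorname{erf}\big(\sqrt{c}\,\mu\big)\Big)
```

Because the integration starts at r = 0 rather than -infinity, the erf(sqrt(c)*mu) term does not cancel for general symbolic a, b, t, R, so an erf-free closed form is not available; the most one can hope for is to steer Maple toward this compact two-term shape.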

Related

pytest sqla: why are query results the same when calling the API several times with different parameters?

I know my problem will be difficult to explain, but I hope there's an easy explanation for it.
I've written an API endpoint that accepts a filter. I wrote a test for it, but when I launch pytest and call this API several times, the first result is always returned. Let me make it clearer with some code.
If I call the api once on pytest:
http://myapipath
I will get 5 results
http://myapipath?cars=true
I will get 3 results
http://myapipath?cars=false
I will get 2 results
Now, if I call it twice in the same test with different parameters:
http://myapipath?cars=true # 3 results
http://myapipath?cars=false # 3 results (wrong, i.e. the same ones I got with the filter set to true)
Is there anything that could explain such a behavior?
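One common class of bug that produces exactly this symptom is a cache (or a module/session-scoped pytest fixture, or a reused database session) whose key ignores the query parameters, so the second call with a different filter is served the first call's results. The sketch below is hypothetical — the names and data are invented for illustration, not taken from your API — but it reproduces the 3-then-3 behavior and shows the fix:

```python
# Hypothetical reproduction: a cache whose key ignores the filter parameter
# serves the first call's results to every later call.
ROWS = [
    {"name": "car1", "cars": True},
    {"name": "car2", "cars": True},
    {"name": "car3", "cars": True},
    {"name": "bike1", "cars": False},
    {"name": "bike2", "cars": False},
]

_cache = {}

def get_results_buggy(cars=None):
    key = "results"                       # bug: the filter is not part of the key
    if key not in _cache:
        _cache[key] = [r for r in ROWS if cars is None or r["cars"] == cars]
    return _cache[key]

def get_results_fixed(cars=None):
    key = ("results", cars)               # fix: include the parameter in the key
    if key not in _cache:
        _cache[key] = [r for r in ROWS if cars is None or r["cars"] == cars]
    return _cache[key]

# Buggy version: the second call with a different filter returns stale data.
assert len(get_results_buggy(cars=True)) == 3
assert len(get_results_buggy(cars=False)) == 3   # wrong: should be 2

# Fixed version behaves as expected.
assert len(get_results_fixed(cars=True)) == 3
assert len(get_results_fixed(cars=False)) == 2
```

In a pytest + SQLAlchemy setup specifically, check whether your test client or session fixture is declared with `scope="module"` or `scope="session"`, and whether the SQLAlchemy session needs a rollback/expire between requests — both can make the second request see the first request's state.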

Post hoc tukey test for two way mixed model ANOVA

Some of my students and I have searched for a solution to this in numerous places, with no luck, literally for months. I keep being referred to the lme command, which I do NOT want to use: the output it provides is not the one my colleagues or I have used for over 15 years. Moreover, since I am using R as a teaching tool, it does not flow as well after t-tests and one-way ANOVAs for intro stats students. I am conducting a two-way RM ANOVA with one repeated factor. I have succeeded in getting R to replicate what SigmaPlot gives for the main effects; however, the post hoc analysis given by R differs significantly from the same post hoc in SigmaPlot. Here is the code I used, with notes (as I am also using this to teach students).
#IV between: IVB1 - Independent variable - between subject factor
#IV within: IVW1 - Independent variable - within subject factor
#DV: DV - Dependent variable.
aov1= aov(DV ~ IVB1*IVW1 + Error(Subject/IVW1)+(IVB1), data=objectL)
summary(aov1)
# post hoc analysis
ph1=TukeyHSD(aov(DV ~ IVB1*IVW1, data=objectL))
ph1
I hope somebody can help.
Thank you!
I have also had this problem, and I found a convenient alternative: the aov_ez() function from the afex package instead of aov(), followed by post hoc analysis using lsmeans() instead of TukeyHSD():
model <- aov_ez(id = "SubjID",
                dv = "DV",
                data = data,
                within = c("IVW1", "IVW2"),
                between = "IVB1")
# Post hoc
comp <- lsmeans(model, specs = ~ IVB1:IVW1:IVW2, adjust = "tukey")
contrast(comp, method = "pairwise")
You will find a detailed tutorial here:
https://www.psychologie.uni-heidelberg.de/ae/meth/team/mertens/blog/anova_in_r_made_easy.nb.html

How do you check a GUID against a list of known GUIDs in an SSRS expression?

I was writing an expression in SSRS/Visual Studio 2008, trying to compare a GUID to a list of known GUIDs; however, I kept running into errors in Visual Studio when I attempted that. Here is my code:
IIf(Fields!Id.Value = "E1A5AA02-6B0F-4D0D-87B6-E88773314B73" ...
It took a little digging that eventually led me to this question, but I got there using a combination of string conversion and upper-casing:
IIf(UCase(CType(Fields!Id.Value, GUID).ToString) = "E1A5AA02-6B0F-4D0D-87B6-E88773314B73" ...
For completeness, I probably should have wrapped UCase() around both sides of the comparison, just in case.
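For reference, the symmetric version of the comparison would look like this (the GUID literal is the one from the question; the trailing arguments are elided as in the original):

```vb
' UCase on both sides makes the comparison insensitive to the literal's casing
IIf(UCase(CType(Fields!Id.Value, GUID).ToString) = UCase("E1A5AA02-6B0F-4D0D-87B6-E88773314B73") ...
```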

Complex Gremlin queries to output nodes/edges

I am trying to implement a query and graph visualisation framework that allows a user to enter a Gremlin query and returns a D3 graph of the results. The D3 graph is built from JSON, which is created from separate vertex and edge outputs of the Gremlin query. For simple queries such as:
g.V.filter{it.attr_a == "foo"}
this works fine. However, when I try to perform a more complicated query such as the following:
g.E.filter{it.attr_a == 'foo'}.groupBy{it.attr_b}{it.outV.value}.cap.next().findAll{k,e->e.size()<=3}
- Find all instances of *value*
- Grouped by unique *attr_b*
- Where *attr_a* = foo
- And *attr_b* is paired with no more than 2 other instances of *value*
However, the output is of the following form:
attr_b1: {value1, value2, value3}
attr_b2: {value4}
attr_b3: {value6, value7}
I would like to know if there is a way for Gremlin to output the results as a list of nodes and edges so I can display the results as a graph. I am aware that I could edit my D3 code to accept this new output, but there are currently no restrictions on the type/complexity of the query, so the key/value pairs will not necessarily be the same every time.
Thanks.
You've hit what I consider one of the key problems with visualizing Gremlin results. They can be anything. Gremlin results might not just be a list of vertices and edges. There is no way to really control this that I can think of. At the end of the day, you can really only visualize results that match a pattern that D3 expects. I'd start by trying to detect that pattern and visualize only in those cases (simply display non-recognized patterns as JSON perhaps).
Consider your specific example, which yields results like this:
attr_b1: {value1, value2, value3}
attr_b2: {value4}
attr_b3: {value6, value7}
What would you want D3 to visualize there? The vertices/edges that were traversed over to get that result? If so, you might be stuck. Gremlin doesn't give you a way to introspect the pipeline to see what's passing through it. In other words, unless the user explicitly gathers vertices and edges within the pipeline that were touched you won't have access to them. It would be nice to be able to "spy" on a pipeline in that way, but at the moment it doesn't do that. There's been internal discussion within TinkerPop to create a new kind of pipeline implementation that would help with that, but at the moment, it doesn't exist.
So, without the "spying" capability, I think your only workarounds would be to:
- Detect a vertex/edge list on the client side and only render those with D3. This forces users to always write Gremlin that returns data in such a format if they want visualization; it puts it in the users' hands.
- Alternatively, supply server-side bindings for lists of vertices/edges into which users could explicitly side-effect their vertices/edges when their results do not conform to what your visualization engine expects. Again, this forces users to write their Gremlin appropriately for your needs if they want visualization.
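As a rough illustration of the second workaround — assuming TinkerPop 2 Gremlin-Groovy and the property names from the question; this is an untested sketch, not something verified against your graph:

```groovy
// Sketch: explicitly side-effect touched elements into collections,
// so the client always gets a vertex/edge list alongside the result.
vertices = [] as LinkedHashSet
edges    = [] as LinkedHashSet

result = g.E.filter{it.attr_a == 'foo'}
            .sideEffect{edges << it}
            .sideEffect{vertices << it.outV.next(); vertices << it.inV.next()}
            .groupBy{it.attr_b}{it.outV.value}.cap.next()
            .findAll{k, e -> e.size() <= 3}

// 'vertices' and 'edges' now hold everything the filter step passed through,
// regardless of the shape of 'result', and can be serialized for D3.
```

The trade-off is that users must remember to add the side-effect steps themselves; without pipeline introspection there is no way to collect the touched elements automatically.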

SSRS Linear Regression Line for data in SSAS Cube

End Goal: Create a scatter plot with actual data (coming from SSAS Cube) and a best fit line using basic least-squares regression.
At the present, my MDX looks like this:
SELECT NONEMPTY({[Measures].[Invoice Total]}) ON COLUMNS,
NONEMPTY( { [Billed Date].[Date].ALLMEMBERS}) ON ROWS
FROM
(
SELECT NONEMPTY(StrToMember(#StartDate,CONSTRAINED):StrToMember(#EndDate,CONSTRAINED)) ON COLUMNS,
NONEMPTY( STRTOSET(#Requestor)) ON ROWS
FROM [Task Billing]
WHERE STRTOSET(#Project)
)
WHERE STRTOSET(#Division)
As you can see, a large number of parameters are used to filter which data should be included in the regression. I was thinking of using LinRegPoint, but I cannot really figure it out, since I am so new to MDX.
I am TOTALLY open to workarounds.
Any ideas on how to accomplish this? Surely it is a common issue...
You're new to MDX... and I've forgotten all the advanced stuff I once knew! Not a great combination - sorry. All I can offer is the actual MDX I once used to show a trend line alongside real data points.
with
  member [Measures].[X] as
    'Rank([Time], [Time].[Week].members)'
  member [Measures].[Trend] as
    'LinRegPoint([Measures].[X], [Time].[Week].members, [Measures].[Gross], [Measures].[X])'
select
  {[Time].[Week].members} on rows,
  {[Measures].[Gross], [Measures].[Trend]} on columns
from [Sales]
If you can get a static example working on your cube, using the bare bones I give above, you can plug the #parameters in later. I hope that helps in some way. Feel free to comment and I'll try to advise, but I am veeeeery rusty.
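Translated to the cube in the question, a static starting point might look like the sketch below. The measure, dimension, and cube names ([Measures].[Invoice Total], [Billed Date].[Date], [Task Billing]) are taken from the question's MDX; everything else is an untested guess you would need to adjust to your actual level names:

```
WITH
  MEMBER [Measures].[X] AS
    'Rank([Billed Date].[Date].CurrentMember, [Billed Date].[Date].ALLMEMBERS)'
  MEMBER [Measures].[Trend] AS
    'LinRegPoint([Measures].[X], [Billed Date].[Date].ALLMEMBERS,
                 [Measures].[Invoice Total], [Measures].[X])'
SELECT
  {[Measures].[Invoice Total], [Measures].[Trend]} ON COLUMNS,
  NONEMPTY({[Billed Date].[Date].ALLMEMBERS}) ON ROWS
FROM [Task Billing]
```

Once this returns both measures per date, the scatter series binds to [Invoice Total] and the line series to [Trend] in the SSRS chart, and the subselect/WHERE parameter filters can be reintroduced around it.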