I want to check whether a given integer input falls within the range 1 to 100.
How do I write a JUnit test case for this?
Assert.assertTrue(value >= 1 && value <= 100);
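A complete test might look like the following minimal sketch (JUnit 4; the `isInRange` helper is a hypothetical stand-in for the code under test):

```java
import org.junit.Test;
import static org.junit.Assert.*;

public class RangeTest {
    // Hypothetical method under test: accepts integers from 1 to 100 inclusive.
    static boolean isInRange(int value) {
        return value >= 1 && value <= 100;
    }

    @Test
    public void acceptsValuesInsideTheRange() {
        assertTrue(isInRange(1));    // lower boundary
        assertTrue(isInRange(50));   // middle of the range
        assertTrue(isInRange(100));  // upper boundary
    }

    @Test
    public void rejectsValuesOutsideTheRange() {
        assertFalse(isInRange(0));   // just below the range
        assertFalse(isInRange(101)); // just above the range
    }
}
```

Testing both boundaries (1 and 100) as well as the just-outside values (0 and 101) catches the most common off-by-one mistakes.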
How do I convert my expression into an actual field without getting an error? I am trying to turn the expression below into a field so I can count the "Intervenes":
=SUM(IIF(Fields!Actual_Duration.Value >= 10, "Intervene", "Leave"))
Any help would be appreciated!
If your Intervene and Leave are fields in your dataset, you would use the field values like you did with Actual_Duration.
=SUM(IIF(Fields!Actual_Duration.Value >= 10, Fields!Intervene.Value, Fields!Leave.Value))
This will SUM the Intervene values when Actual Duration is >= 10 along with the Leave values when Actual Duration is < 10.
I have to compare two values that change in both directions (rising and falling) within an hour (an item count, for example). What formula can I use to determine whether the amount drops or rises by more than 20%? Does a Zabbix calculated item support if conditions to handle both positive and negative change, or is there a way to work around it?
I'm trying to write something like this:
{api20prod:mysql.get_active_offers.count(0)}/{api20prod:mysql.get_active_offers.count(,,,1h)}*100 > 20
but what if mysql.get_active_offers.count(0) is greater than mysql.get_active_offers.count(,,,1h)?
You cannot use a "Simple Change" preprocessor because:
If the current value is smaller than the previous value, Zabbix
discards that difference (stores nothing) and waits for another value.
If you set the item with a 1h check interval (or with scheduled intervals at a specific minute of every hour), you can do the trick with the last() function.
Let's say that at 12:00 your item equals 25 and at 13:00 it equals 38:
At 13:00 invoking last() without parameters will return 38
At 13:00 invoking last(#2) will return 25
You can calculate the hourly readings delta % with:
100 * ({api20prod:mysql.get_active_offers.last()} - {api20prod:mysql.get_active_offers.last(#2)}) / {api20prod:mysql.get_active_offers.last(#2)}
This syntax should work either in a trigger or in a calculated item; choose whichever suits you better. I suggest a calculated item.
Of course, your trigger will then need a double condition: you have to match "> 20" OR "< -20".
If you don't want to change the item's check interval you can use the avg() function, see the documentation.
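The double condition amounts to checking the absolute value of the percent change. As a plain-arithmetic sketch (ordinary code, not Zabbix syntax; the method names are illustrative only):

```java
public class DeltaCheck {
    // Percent change from the previous reading to the current one.
    static double percentChange(double previous, double current) {
        return 100.0 * (current - previous) / previous;
    }

    // True when the reading moved more than 20% in either direction,
    // mirroring the "> 20 OR < -20" double trigger condition.
    static boolean shouldTrigger(double previous, double current) {
        return Math.abs(percentChange(previous, current)) > 20.0;
    }
}
```

With the example readings above, percentChange(25, 38) is +52%, so the trigger fires; a move from 100 to 110 is only +10% and does not.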
Scenario:
I have a lookup table with a date column. I need to check whether this date column holds today's date; if not, wait five minutes and check again. If the date is current, send an email and exit the loop; if the date is still not current after 6 retries, execute a SQL task.
I have a ForLoop Container with the following settings:
InitExpression : @[User::Counter] = 0
EvalExpression : @[User::Counter] < 6
AssignExpression : @[User::Counter] = @[User::Counter] + 1
How and where do I check the date:
SELECT ControlTimeStamp from LOOKUPTABLE
WHERE ControlTimeStamp = DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE()))
Note:
I'm using Business Intelligence Development Studio (BIDS) 2008 for SSIS package development.
I think you'll want an approach like this: execute your SQL Task to determine whether today is your date, and from there either sleep for N minutes or send an email. The trick is to use an Expression on the Precedence Constraint between the Execute SQL Task and its children.
My implementation differs slightly from yours, but the concept remains the same. I created two variables, @ActualDate and @ReferenceDate. @ActualDate is today, and @ReferenceDate is set from the Execute SQL Task; I then test whether they are equivalent. In your case, getting a result means the send-mail condition has been met, so adjust your Expressions to match that.
What isn't shown is how to terminate the loop early, as I'm not quite certain how to do that.
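The control flow described above — check, wait, retry up to six times — can be sketched outside SSIS as ordinary code (the `DateCheck` interface is a hypothetical stand-in for the Execute SQL Task's date query):

```java
public class RetryLoop {
    // Stand-in for the Execute SQL Task that tests the lookup table's date.
    interface DateCheck {
        boolean isControlDateCurrent();
    }

    // Mirrors the ForLoop container: check the date and, if it is not
    // current, sleep and retry. Returns true (send the email) as soon as a
    // check passes, or false (run the fallback SQL task) after maxRetries
    // failed checks.
    static boolean waitForCurrentDate(DateCheck check, int maxRetries, long waitMillis) {
        for (int counter = 0; counter < maxRetries; counter++) {
            if (check.isControlDateCurrent()) {
                return true;
            }
            try {
                Thread.sleep(waitMillis); // five minutes in the original scenario
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }
}
```
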
I am trying to check whether any of the columns has changed. I tried putting everything into one condition:
Taxes != (ISNULL(LookupTaxes) ? 0 : LookupTaxes) || Checksum != (ISNULL(LookupChecksum) ? 0 : LookupChecksum) || FeeIncome != (ISNULL(LookupFeeIncome) ? 0 : LookupFeeIncome) || CommissionReceived != (ISNULL(LookupCommissionReceived) ? 0 : LookupCommissionReceived) || CommissionPaid != (ISNULL(LookupCommissionPaid) ? 0 : LookupCommissionPaid) || Premium != (ISNULL(LookupPremium) ? 0 : LookupPremium)
but this always returns FALSE even though I have manually changed Taxes. If I put each condition separately, then it works; for example:
Taxes != (ISNULL(LookupTaxes) ? 0 : LookupTaxes)
returns TRUE. If I use six conditions (one per column, instead of a single combined one) and feed the outputs into a Union All, does this method give me what I need? My biggest concern is whether rows will be duplicated. I have checked and it looks like they are not, but I wonder why one condition picks X records and another picks Y when I have changed both of the related columns. For example, all Taxes and Premium values are changed, yet in the Conditional Split output the "Taxes have changed" condition picks 1,000,000 rows while "Premium has changed" picks 100 rows. I know this makes no difference in my case, because what matters is that these rows are picked up for update, but I am confused about how this works.
I believe your logic is sound but I would suggest you split all those chained conditionals out into separate derived columns, especially if you'll have 1M rows flowing through it.
The first reason is performance. By splitting operations into smaller pieces, the data flow engine can better take advantage of parallelism. See Investigation: Can different combinations of components affect Dataflow performance?. The money quote from SQL CAT on the subject:
Our testing has shown that if you want to change more than one column
in a data flow by using a Derived Column task, there is a performance
benefit to splitting that transformation into multiple Derived Column
tasks. In this note we will show that doing many smaller pieces of
work in multiple Derived Column tasks is significantly faster than
doing all the work in a single, more complex Derived Column task.
The second reason is maintainability. Not only is the expression editor unfriendly and unforgiving, it makes it incredibly challenging to inspect intermediate values.
Demo
I put together a reproduction package that uses a script task to send N rows down a data flow, with every column holding the same value as the row number. In the first data flow, I modify the values of Checksum and Premium as I load them into a cache connection manager (to simulate lookup values differing). Even-numbered rows should have the Checksum nulled out, and every third row should have Premium nulled.
In this data flow, I used both your original expression (All in one check) as well as split it out into a check per condition.
As you can see from the data viewer attached to the "bit bucket" task, the Changed-postfixed columns only evaluate to True when there is a difference between the source and lookup values. (The row corresponding to 0 is accurate, as (ISNULL(LookupTaxes) ? 0 : LookupTaxes) forces null values to zero.)
Were I you, at this point I'd replace the "bit bucket" transformation with a Conditional Split:
Output Name = UpdateRequired
Condition = [TaxesChanged] || [ChecksumChanged] || [FeeIncomeChanged] || [CommissionReceivedChanged] || [CommissionPaidChanged] || [PremiumChanged]
If you continue to have issues, then you can put a data viewer on the pipeline to find the conditions that are not evaluating as expected.
An alternative is to use two Derived Column data flow transformations (DFTs) prior to a Conditional Split DFT.
Derived Column DFT 1: Check each attribute, setting a value of 1 if the data has changed and 0 if it has not. For example, compare the inbound Date of Birth column to the database Date of Birth column.
DerivedColumn1 = ((!ISNULL(InDOB) && !ISNULL(DbDOB) && InDOB != DbDOB) || (ISNULL(DbDOB) && !ISNULL(InDOB))) ? 1 : 0
The DerivedColumn1 result is a signed integer value, either 1 or 0.
Derived Column DFT 2: Sum the flag values from DFT 1.
IdentifiedChange = DerivedColumn1 + DerivedColumn2 + ....
Conditional Split DFT: Identifies whether there is a change in the data, as determined by the result of DFT 2.
Output Name = YesChange
Condition = IdentifiedChange > 0
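The flag-and-sum approach above can be illustrated in ordinary code (a sketch with hypothetical column values; Integer is used so that a missing value can be null):

```java
public class ChangeFlags {
    // Per-column flag, mirroring the DerivedColumn1 expression: 1 when both
    // values are present and differ, or when the database value is missing
    // but the inbound value is present; 0 otherwise.
    static int changed(Integer inbound, Integer db) {
        boolean bothPresentAndDiffer = inbound != null && db != null && !inbound.equals(db);
        boolean dbMissingInboundPresent = db == null && inbound != null;
        return (bothPresentAndDiffer || dbMissingInboundPresent) ? 1 : 0;
    }

    // Mirrors IdentifiedChange: any per-column flag set to 1 means the row
    // should be routed to the "YesChange" output.
    static boolean identifiedChange(int... flags) {
        int sum = 0;
        for (int flag : flags) {
            sum += flag;
        }
        return sum > 0;
    }
}
```
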
Hope this helps.
Actually, the answer would be as follows:
(Taxes != (ISNULL(LookupTaxes) ? 0 : LookupTaxes)) ||
(Checksum != (ISNULL(LookupChecksum) ? 0 : LookupChecksum)) ||
(FeeIncome != (ISNULL(LookupFeeIncome) ? 0 : LookupFeeIncome)) ||
(CommissionReceived != (ISNULL(LookupCommissionReceived) ? 0 : LookupCommissionReceived)) ||
(CommissionPaid != (ISNULL(LookupCommissionPaid) ? 0 : LookupCommissionPaid)) ||
(Premium != (ISNULL(LookupPremium) ? 0 : LookupPremium))
The extra "(" and ")" at the beginning and end of each comparison are needed; without them, the || (OR) operator can make the expression read as Taxes != (the whole OR'd condition), as in the first part of your condition.
This will work.
I have a list of criteria in a database table, entered by users. The criteria are in the form X > 5 for Segment A, X > 7 for Segment B, and so on.
The data is collected using OLE DB Source where I specified a stored procedure to retrieve data. The record set has three columns: IdNumber, SegmentId and Total.
My conditional split should look like this:
SegmentId == 1 && Total > 5 (I would like to replace X with the actual value stored in the Total column.)
SegmentId == 1 && !(Total > 5)
... and so on.
So my question is: how can I use a condition that is string-based and stored in the database in the Conditional Split Transformation Editor?
Regards,
Huske
You can't do this from the editor; you will need to add code (for example, in a Script Task) to query the database, get the condition, build a complete expression, and then set it on the Conditional Split programmatically.
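As a sketch of the string-building part only (the criterion format "X > 5" and the column names come from the question; attaching the result to the Conditional Split still requires the SSIS object model):

```java
public class ExpressionBuilder {
    // Builds a Conditional Split-style expression from a stored criterion
    // such as "X > 5", substituting the Total column for the placeholder X.
    static String buildCondition(int segmentId, String criterion) {
        String predicate = criterion.replace("X", "Total");
        return "SegmentId == " + segmentId + " && " + predicate;
    }
}
```
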