The progress ring widget that appears at the top right-hand corner of the page when running web-component-tester seems to show the total number of tests as always 3x the number of test suites, rather than the actual total of tests.
Is this a known issue, or is there something that I can do to get the correct total to show?
As an example, in this screenshot I have only one test in my suite, yet when the test is finished, the progress ring shows only 33% completion:
I can see in the code for the MultiReporter constructor that ESTIMATED_TESTS_PER_SUITE is set to 3 and is multiplied by the number of test suites in order to compute the total used by the Mocha HTML Reporter to render the progress widget. It appears that the onRunnerStart handler in MultiReporter is supposed to replace the estimate for the current suite with the actual total, but in my testing the runner argument passed to this handler is itself a MultiReporter object with an estimated total, so the updated total is still an estimate rather than the actual total.
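For reference, the arithmetic I'm describing boils down to something like this (a simplified sketch, not the actual web-component-tester source; only ESTIMATED_TESTS_PER_SUITE = 3 and the multiplication by the suite count come from the real code):

// Sketch of the estimate as I understand it; the function itself is illustrative.
const ESTIMATED_TESTS_PER_SUITE = 3;

function estimatedTotal(numSuites: number): number {
  return numSuites * ESTIMATED_TESTS_PER_SUITE;
}

// With a single suite containing one test: estimatedTotal(1) === 3,
// so finishing that one test shows 1/3 ≈ 33% on the progress ring.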
Unfortunately, I haven't been able to figure out why the MultiReporter never computes the correct total, nor have I been able to find any hooks for explicitly specifying the total number of tests.
I'm trying to learn policy gradient methods for reinforcement learning, but I am stuck at the score function part.
While searching for maximum or minimum points of a function, we take the derivative and set it to zero, then look for the points that satisfy this equation.
In policy gradient methods, we do it by taking the gradient of the expectation of trajectories and we get:
Objective function image
What I cannot see is how this gradient of the log policy shifts the distribution (through its parameters θ) to increase the scores of its samples, mathematically. Don't we look for something that makes this objective function's gradient zero, as I explained above?
What you want to maximize is
J(theta) = int( p(tau;theta)*R(tau) )
The integral is over tau (the trajectory) and p(tau;theta) is its probability (i.e., of seeing the sequence state, action, next state, next action, ...), which depends on both the dynamics of the environment and the policy (parameterized by theta). Formally
p(tau;theta) = p(s_0)*pi(a_0|s_0;theta)*P(s_1|s_0,a_0)*pi(a_1|s_1;theta)*P(s_2|s_1,a_1)*...
where P(s'|s,a) is the transition probability given by the dynamics.
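Taking the log of that product turns it into a sum, which is what makes the trick mentioned below work (same notation as above):

log p(tau;theta) = log p(s_0) + sum_t [ log pi(a_t|s_t;theta) + log P(s_(t+1)|s_t,a_t) ]

Only the pi terms depend on theta, so the p(s_0) and P(s'|s,a) terms vanish as soon as you differentiate with respect to theta.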
Since we cannot control the dynamics, only the policy, we optimize w.r.t. its parameters, and we do it by gradient ascent, meaning that we take the direction given by the gradient. The equation in your image comes from the log-trick df(x)/dx = f(x)*d(logf(x))/dx.
In our case f(x) is p(tau;theta) and we get your equation. Then, since we have access only to a finite amount of data (our samples), we approximate the expectation with a sample average.
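Written out, the whole chain is (with tau_1, ..., tau_N your sampled trajectories):

grad_theta J(theta) = int( grad_theta p(tau;theta) * R(tau) )
                    = int( p(tau;theta) * grad_theta log p(tau;theta) * R(tau) )
                    ≈ (1/N) * sum_i grad_theta log p(tau_i;theta) * R(tau_i)
                    = (1/N) * sum_i [ sum_t grad_theta log pi(a_t|s_t;theta) ] * R(tau_i)

So each sampled trajectory pushes theta in the direction that makes that trajectory more likely, weighted by its return R(tau_i): high-return trajectories get a large push, low-return ones a small one. That is how the update shifts probability mass toward samples with higher scores.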
Step after step, you will (ideally) reach a point where the gradient is 0, meaning that you reached a (local) optimum.
You can find a more detailed explanation here.
EDIT
Informally, you can think of learning the policy which increases the probability of seeing high return R(tau). Usually, R(tau) is the cumulative sum of the rewards. For each state-action pair (s,a) you therefore maximize the sum of the rewards you get from executing a in state s and following pi afterwards. Check this great summary for more details (Fig 1).
I have created 2 method invocation measures in Dynatrace for 2 method calls in the backend.
I want to create an Incident in Dynatrace if method 1 is called less than 80% of the times method 2 is called.
Is there a way to do this in Dynatrace?
When I open the dialog in Dynatrace to create an incident, I see we can add multiple measures in the condition. But I couldn't find a way to set the threshold for method invocation measure 1 to be 80% of the total number of calls for method invocation measure 2 for a given timeframe.
I think the "rate measure" can do what you are looking for. I.e. with this you define one base measure and a fraction measure. The counts of these two compared to each other are the rate computed by that new measure.
See e.g. https://answers.dynatrace.com/storage/attachments/4622-capture.jpg and https://answers.dynatrace.com/spaces/148/uem-open-q-a_2/questions/186446/how-to-setup-a-rate-measure-correctly.html for sample configurations.
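Just to illustrate the arithmetic that the rate measure and the incident threshold would be doing for you (this is not Dynatrace syntax, only the logic):

// Illustration of the threshold logic only; not Dynatrace configuration.
function shouldRaiseIncident(method1Calls: number, method2Calls: number): boolean {
  const rate = method1Calls / method2Calls; // e.g. 750 / 1000 = 0.75
  return rate < 0.8;                        // incident when method 1 runs less than 80% as often as method 2
}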
The documentation for both of these methods is very generic wherever I look. I would like to know what exactly I'm looking at with the returned arrays I'm getting from each method.
For getByteTimeDomainData, what time period is covered with each pass? I believe most oscilloscopes cover a 32 millisecond span for each pass. Is that what is covered here as well? For the actual element values themselves, the range seems to be 0 - 255. Is this equivalent to -1 to +1 volts?
For getByteFrequencyData, the frequencies covered are based on the sampling rate, so each index is an actual frequency, but what about the actual element values themselves? Is there a dB range that corresponds to the values in the returned array?
getByteTimeDomainData (and the newer getFloatTimeDomainData) return an array of the size you requested - its frequencyBinCount, which is calculated as half of the requested fftSize. That array is, of course, at the current sampleRate exposed on the AudioContext, so if it's the default 2048 fftSize, frequencyBinCount will be 1024, and if your device is running at 44.1kHz, that will equate to around 23ms of data.
The byte values do range between 0-255, and yes, that maps to -1 to +1, so 128 is zero. (It's not volts, but full-range unitless values.)
If you use getFloatFrequencyData, the values returned are in dB; if you use the Byte version, the values are mapped based on minDecibels/maxDecibels (see the minDecibels/maxDecibels description).
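As a quick sketch of those two mappings (the helper names are mine, just to illustrate; the min/max values below are examples, read the real ones from the AnalyserNode):

// Map a time-domain byte (0..255) to the unitless -1..+1 range; 128 is the zero line.
function byteToSample(timeByte: number): number {
  return timeByte / 128 - 1;
}

// Map a frequency-domain byte (0..255) back onto the minDecibels..maxDecibels range.
function byteToDecibels(freqByte: number, minDecibels: number, maxDecibels: number): number {
  return minDecibels + (freqByte / 255) * (maxDecibels - minDecibels);
}

// byteToSample(128) === 0; byteToDecibels(255, -100, -30) === -30 with example min/max of -100/-30 dB.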
Mozilla's documentation describes the difference between getFloatTimeDomainData and getFloatFrequencyData, which I summarize below. The Mozilla docs reference a Web Audio experiment, the voice-change-o-matic. The voice-change-o-matic illustrates the conceptual difference to me (it only works in my Firefox browser; it does not work in my Chrome browser).
TimeDomain/getFloatTimeDomainData
TimeDomain functions are over some span of time.
We often visualize TimeDomain data using oscilloscopes.
In other words:
we visualize TimeDomain data with a line chart,
where the x-axis (aka the "original domain") is time
and the y-axis is a measure of the signal (aka the "amplitude").
Change the voice-change-o-matic "visualizer setting" to Sinewave to see getFloatTimeDomainData(...)
Frequency/getFloatFrequencyData
Frequency functions (getByteFrequencyData) are at a point in time, i.e. right now: "the current frequency data"
We sometimes see these in mp3 players/ "winamp bargraph style" music players (aka "equalizer" visualizations).
In other words:
we visualize Frequency data with a bar graph
where the x-axis (aka "domain") are frequencies or frequency bands
and the y-axis is the strength of each frequency band
Change the voice-change-o-matic "visualizer setting" to Frequency bars to see getFloatFrequencyData(...)
Fourier Transform (aka Fast Fourier Transform/FFT)
Another way to think about "time domain vs frequency" is shown in the diagram below, from the Fast Fourier Transform Wikipedia article
getFloatTimeDomainData gives you the chart on the top (x-axis is Time)
getFloatFrequencyData gives you the chart on the bottom (x-axis is Frequency)
a Fast Fourier Transform (FFT) converts the Time Domain data into Frequency data; in other words, the FFT converts the first chart into the second chart.
cwilso has it backwards: the time data array is the longer one (fftSize), and the frequency data array is the shorter one (half that, frequencyBinCount).
An fftSize of 2048 at the usual sample rate of 44.1kHz means each sample has a duration of 1/44100 seconds; you have 2048 samples at hand, and thus are covering a duration of 2048/44100 seconds, which is about 46 milliseconds, not 23 milliseconds. The frequencyBinCount is indeed 1024, but that refers to the frequency domain (as the name suggests), not the time domain, and the computation 1024/44100 is, in this context, about as meaningful as adding your birth date to the fftSize.
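To make the sizes concrete (the setup is illustrative; the point is the size and duration arithmetic):

// Illustrative AnalyserNode setup showing which array belongs to which size.
const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;

const timeData = new Uint8Array(analyser.fftSize);           // 2048 time-domain samples
analyser.getByteTimeDomainData(timeData);

const freqData = new Uint8Array(analyser.frequencyBinCount); // 1024 frequency bins
analyser.getByteFrequencyData(freqData);

const seconds = analyser.fftSize / audioCtx.sampleRate;      // 2048 / 44100 ≈ 0.046 s at 44.1 kHz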
A little math illustrating what's happening: the Fourier transform is a 'vector space isomorphism', that is, a bijective (i.e., reversible) mapping between two vector spaces of the same dimension: the 'time domain' and the 'frequency domain'. The vector space dimension we have here (in both cases) is fftSize.
So where does the 'half' come from? The frequency domain coefficients 'count double'. Either because they 'actually are' complex numbers, or because you have the 'sin' and the 'cos' flavor. Or because you have a 'magnitude' and a 'phase', which you'll understand if you know how complex numbers work. (Those are three ways of saying the same thing in different jargon, so to speak.)
I don't know why the API only gives us half of the relevant numbers when it comes to frequency - I can only guess. And my guess is that those are the 'magnitude' numbers, and the 'phase' numbers are thrown out. The reason that this is my guess is that in applications, magnitude is far more important than phase. Still, I'm quite surprised that the API throws out information, and I'd be glad if some expert who actually knows (and isn't guessing) can confirm that it's indeed the magnitude. Or - even better (I love to learn) - correct me.
First, this is not a question about temperature iteration counts or automatically optimized scheduling. It's about how the magnitude of the data relates to the scaling of the exponentiation.
I'm using the classic formula:
if(delta < 0 || exp(-delta/tK) > random()) { // new state }
The input to the exp function is negative because delta/tK is positive, so the exp result is always less than 1. The random function also returns a value in the 0 to 1 range.
My test data is in the range 1 to 20, and the delta values are below 20. I pick a start temperature equal to the initial computed temperature of the system and linearly ramp down to 1.
In order to get SA to work, I have to scale tK. The working version uses:
exp(-delta/(tK * .001)) > random()
So how does the magnitude of tK relate to the magnitude of delta? I found the scaling factor by trial and error, and I don't understand why it's needed. To my understanding, as long as delta > tK and the step size and number of iterations are reasonable, it should work. In my test case, if I leave out the extra scale the temperature of the system does not decrease.
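For concreteness, here is what the acceptance test gives for a delta of 10 at a few temperatures:

exp(-10/1000) ≈ 0.990    // worsening move accepted almost every time
exp(-10/10)   ≈ 0.368    // accepted roughly a third of the time
exp(-10/1)    ≈ 0.00005  // essentially never accepted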
The various online sources I've looked at say nothing about working with real data. Sometimes they include the Boltzmann constant as a scale, but since I'm not simulating a physical particle system that doesn't help. Examples (typically with pseudocode) use values like 100 or 1000000.
So what am I missing? Is scaling another value that I must set by trial and error? It's bugging me because I don't just want to get this test case running, I want to understand the algorithm, and magic constants mean I don't know what's going on.
Classical SA has 2 parameters: startingTemperature and cooldownSchedule (= what you call scaling).
Configuring 2+ parameters is annoying, so in OptaPlanner's implementation, I automatically calculate the cooldownSchedule based on the timeGradient (which is a double going from 0.0 to 1.0 during the solver time). This works well. As a guideline for the startingTemperature, I use the maximum score diff of a single move. For more information, see the docs.
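For reference, tying a temperature to a 0.0 → 1.0 time gradient can look roughly like this (a sketch of the general idea, not OptaPlanner's actual code):

// Classic SA acceptance with a linearly cooling temperature; illustrative only.
function accepts(delta: number, startingTemperature: number, timeGradient: number): boolean {
  const temperature = startingTemperature * (1 - timeGradient); // cools from startingTemperature to 0
  if (delta < 0) return true;                                    // improving moves are always taken
  if (temperature <= 0) return false;
  return Math.exp(-delta / temperature) > Math.random();
}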
I have this trigger that fires upon a match of the rule below:
{monitoring:test.item.change(0)}<-100
When my graph goes down by over 100 units, an event gets created. The event should switch to OK status when the graph goes back up. The graph has different average values at different times of day and, besides, the item is a trapper value, which does not support flexible intervals. My problem is this: when the graph falls by over 100 units, say from 300 to 10, a PROBLEM situation is created. At the next interval, if the value is still low (e.g. 13), Zabbix creates an OK event, because although the value is still low, the expression does not return true since the graph hasn't gone down by a further 100 units. Any ideas on how I could fix this? I have been trying to use
{{monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>100}
but Zabbix wouldn't take that expression. This is supposed to compare the last value of test.item to the average value of the past 30 minutes and raise an alert when the difference exceeds 100.
This, I believe, would sort out my problem situation of a false OK status when the graph remains at a low value.
EDIT: I think I have cracked it. Zabbix has accepted the below expression:
{monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>100
I think you'll soon realize that this expression won't give you the behavior you're targeting and will keep flapping between PROBLEM and OK.
You have just shifted the 'did a -100 change occur?' check from 'the last value vs. the previous value' to 'the last value vs. the average of the last half hour'. Checking whether there was an abrupt change OR the value is still too low will probably mimic your expected scenario better:
{monitoring:test.item.last(0)}>100 | {monitoring:test.item.max(#2)}<20
max(#2)<20 checks if the maximum of the last 2 values is below 20.
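With your example numbers (a drop from 300 to 10, then 13 at the next interval), that expression evaluates as:

{monitoring:test.item.last(0)}>100  ->  13 > 100              ->  false
{monitoring:test.item.max(#2)}<20   ->  max(10, 13) = 13 < 20 ->  true

so the OR keeps the trigger in PROBLEM for as long as the value stays low, instead of flipping back to OK at the next interval.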
EDIT: After reading your comment, maybe this approach (after some tweaking for your expected values) will serve you better:
({monitoring:test.item.avg(1800)}<10 & {monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>20) | ({monitoring:test.item.avg(1800)}>100 & {monitoring:test.item.avg(1800)}-{monitoring:test.item.last(0)}>100)
This way, you'll better fit your trigger for the different volumes during the day.