How to make an XL Release fail gracefully - exception

I have an XL Release to accomplish. It has multiple phases, and each phase contains multiple tasks. There is a template (orchestrator) responsible for deploying multiple applications. I want to achieve a scenario where, even if one application's release fails, the remaining applications still get deployed. This whole process is driven by a Groovy script.
So basically I want a graceful way to handle task failure in Groovy for the XL Release.
The code goes like this:
// Throwing an exception here fails the task, which stops the rest of the release.
if (condition) {
    throw new Exception("Build Failed as the TAF sanity or TAF consumer failed")
}

You can introduce a boolean variable to the release, for example tafBuildSucceeded. Set the variable to false by default and true when the build succeeds.
After the task completes, you can use the release variable in the preconditions of subsequent tasks, so the remaining applications are still deployed when one build fails.

Related

SqlDelight, MySql, and Flow: Flow's collect lambda not invoked on database changes

I'm playing around with SqlDelight using MySQL with a Hikari datasource, in a plain JVM project using Kotlin and Flow to consume query results.
The first execution of a query returns the data as expected; however, Flow#emit() is not triggered when the data in the database changes.
This is the repository function I'm invoking:
fun getAllUser(): Flow<List<User>> = database.userQueries.getAll()
.asFlow()
.mapToList()
This is the main function where the repository function is invoked:
fun main() = runBlocking<Unit> { repository.getAllUser().collect {
println("Collecting users: $it")
}
}
I've tried debugging the underlying code: the listener on the Query object is registered, and the main function is correctly blocked on the execution of the repository function.
Has anyone else experienced something similar?
So stupid, my bad! I was updating the database manually and expecting to see the Flow magically update. Anyway, updating the database by means of SqlDelight functions causes the Flow to emit the updated recordset as expected.

How to stop a Flink streaming job from a program

I am trying to create a JUnit test for a Flink streaming job which writes data to a Kafka topic and reads data from the same Kafka topic, using FlinkKafkaProducer09 and FlinkKafkaConsumer09 respectively. I am passing test data in the producer:
DataStream<String> stream = env.fromElements("tom", "jerry", "bill");
And checking whether the same data is coming from the consumer as:
List<String> expected = Arrays.asList("tom", "jerry", "bill");
List<String> result = resultSink.getResult();
assertEquals(expected, result);
using TestListResultSink.
I am able to see the data coming from the consumer as expected by printing the stream. But I could not get the JUnit test result, because the consumer keeps running even after the messages are finished, so execution never reaches the assertion part.
Is there any way in Flink or FlinkKafkaConsumer09 to stop the process or to run it for a specific time?
The underlying problem is that streaming programs are usually not finite and run indefinitely.
The best way, at least for the moment, is to insert a special control message into your stream which lets the source properly terminate (simply stop reading more data by leaving the reading loop). That way Flink will tell all down-stream operators that they can stop after they have consumed all data.
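As a rough illustration of that approach (not from the original answer), here is a sketch of a custom SourceFunction that leaves its read loop when it sees a sentinel record; the sentinel value and the readNextRecord helper are assumptions:
import org.apache.flink.streaming.api.functions.source.SourceFunction;

// Stops reading once a sentinel record arrives, so the job can finish on its own.
public class ControlMessageSource implements SourceFunction<String> {

    private static final String END_OF_STREAM = "POISON"; // assumed control message
    private volatile boolean running = true;

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        while (running) {
            String record = readNextRecord(); // hypothetical helper wrapping the real read
            if (END_OF_STREAM.equals(record)) {
                running = false; // leave the loop; downstream operators can then finish
            } else {
                ctx.collect(record);
            }
        }
    }

    @Override
    public void cancel() {
        running = false;
    }

    private String readNextRecord() {
        return END_OF_STREAM; // placeholder; would read from Kafka or an in-memory queue
    }
}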
Alternatively, you can throw a special exception in your source (e.g. after some time) such that you can distinguish a "proper" termination from a failure case (by checking the error cause). Throwing an exception in the source will fail the program.
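A minimal sketch of that alternative, assuming a dedicated marker exception and reusing the test data from the question (class names are made up):
import org.apache.flink.streaming.api.functions.source.SourceFunction;

// Marker exception so a deliberate stop can be told apart from a real failure.
class SuccessException extends RuntimeException { }

// A test source that emits a few elements and then fails the job on purpose.
public class SelfTerminatingSource implements SourceFunction<String> {

    private volatile boolean running = true;

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        String[] testData = {"tom", "jerry", "bill"};
        for (String element : testData) {
            if (!running) {
                return;
            }
            ctx.collect(element);
        }
        throw new SuccessException(); // terminates the job with a recognizable cause
    }

    @Override
    public void cancel() {
        running = false;
    }
}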
In your test you can start the job execution in a separate thread, wait some time to allow for data processing, cancel the thread (which will interrupt the job), and then make the assertions.
CompletableFuture<Void> handle = CompletableFuture.runAsync(() -> {
    try {
        environment.execute(jobName);
    } catch (Exception e) {
        e.printStackTrace();
    }
});

try {
    handle.get(seconds, TimeUnit.SECONDS);
} catch (TimeoutException e) {
    handle.cancel(true); // this will interrupt the job execution thread, cancel and close the job
}

// Make assertions here
Can you not use the isEndOfStream override within the deserializer to stop fetching from Kafka? If I read correctly, Flink's Kafka09Fetcher has the following code in its run method, which breaks the event loop:
if (deserializer.isEndOfStream(value)) {
    // end of stream signaled
    running = false;
    break;
}
My thought was to use Till Rohrmann's idea of a control message in conjunction with this isEndOfStream method to tell the KafkaConsumer to stop reading.
Any reason that will not work? Or maybe some corner cases I'm overlooking?
https://github.com/apache/flink/blob/07de86559d64f375d4a2df46d320fc0f5791b562/flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka09Fetcher.java#L146
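For reference, a minimal sketch of that idea (the class name and sentinel value are made up, not from the original post): a deserialization schema whose isEndOfStream returns true for a dedicated control record, so the fetcher leaves its loop.
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

// Returns true once the sentinel record is seen, which ends the Kafka fetch loop.
public class StoppableStringSchema extends SimpleStringSchema {

    private static final String END_MARKER = "END_OF_TEST"; // assumed control message

    @Override
    public boolean isEndOfStream(String nextElement) {
        return END_MARKER.equals(nextElement);
    }
}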
Following @TillRohrmann's suggestion:
You can combine the special exception method and handle it in your unit test if you use an EmbeddedKafka instance, and then read off the EmbeddedKafka topic and assert the consumer values.
I found https://github.com/asmaier/mini-kafka/blob/master/src/test/java/de/am/KafkaProducerIT.java to be extremely useful in this regard.
The only problem is that you will lose the element that triggers the exception but you can always adjust your test data to account for that.
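A sketch of what the test side could look like, assuming the source throws a dedicated SuccessException as in the earlier sketch; readAllRecordsFromTopic is a hypothetical helper that consumes the EmbeddedKafka topic:
// Run the job and swallow only the intentional stop.
try {
    env.execute("kafka-round-trip-test");
} catch (Exception e) {
    // Walk the cause chain; rethrow unless our marker exception is found.
    boolean intentionalStop = false;
    for (Throwable cause = e; cause != null; cause = cause.getCause()) {
        if (cause instanceof SuccessException) {
            intentionalStop = true;
            break;
        }
    }
    if (!intentionalStop) {
        throw e;
    }
}

// Read the records back from the embedded Kafka topic and assert on them.
List<String> expected = Arrays.asList("tom", "jerry", "bill");
assertEquals(expected, readAllRecordsFromTopic("test-topic")); // hypothetical helper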

For Flex remote object calls (BlazeDS), is there a size limitation on objects returned?

I have a production mobile Flex app that uses RemoteObject calls for all data access, and it's working well, except for a new remote call I just added that only fails when running with a release build. The same call works fine when running on the device (iPhone) using a debug build. When running with a release build, the result handler is never called (nor is the fault handler called). Viewing the BlazeDS logs in debug mode, the call is received and sent back with data. I've narrowed it down to what seems to be a data size issue.
I have targeted one specific data call that returns, in the String value, a string 44 KB long, which fails (release build). When I do not populate the String value on the object in the server-side Java code (I just set it to an empty string), the result handler is called and the object is returned, again using the release build. This works in a debug build.
The custom object being returned in the call is a very simple object, with getters/setters for the simple types boolean, int, and String, and one org.w3c.dom.Document type. This same object type is used on other RemoteObject calls (different data) and works fine (release and debug builds). I originally was returning it as a Document but, just to make sure this wasn't the problem, changed the value to be returned to a String, to rule out XML/DOM issues in serialization.
I don't understand 1) why the release build vs. debug build behavior is different for a RemoteObject call, and 2) why the calls work in a debug build when sending a somewhat large (but not unreasonable) amount of data in a String object, but not in a release build.
I haven't tried to find out exactly where the failure point in size is, but I'm not sure that's even relevant, since 44 KB isn't an unreasonable size to expect.
By turning on debug mode in BlazeDS, I can see the object and its attributes being serialized, and everything looks good there. The calls are received and processed appropriately in BlazeDS for both debug and release build testing.
Anyone have an idea on other things to try to debug/resolve this?
Platform testing is BlazeDS 4, Flash Builder 4.7, WebSphere 8 server, iPhone (iOS 7.1.2). I tried multiple Flex SDKs, from 4.12 to the latest 4.13, with no change in behavior.
Thanks!
After a week's worth of debugging, I found the issue.
The Java type returned from the call was defined as ArrayList. Changing it to List resolved the problem.
I'm not sure why ArrayList isn't a valid return type; I've been looking at the Adobe docs and still can't see why it isn't valid. And why it works in a debug build and not in a release build is even stranger. Maybe someone can shed some light on the logic here.
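For illustration only, the fix amounted to declaring the remote method's return type as the List interface instead of the concrete ArrayList; the service and method names below are made up, and String stands in for the real custom object:
import java.util.ArrayList;
import java.util.List;

public class ReportService {

    // Before: public ArrayList<String> loadReportXml(String id) { ... }
    // Declaring the interface type is what resolved the missing result handler.
    public List<String> loadReportXml(String id) {
        List<String> rows = new ArrayList<>();
        rows.add("<report id=\"" + id + "\"/>"); // placeholder payload
        return rows;
    }
}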

Breakpoint in SSIS script task which is inside a ForEach Loop

I have an SSIS package which has a Foreach Loop. Inside the Foreach Loop I have a Script Task. I have put a breakpoint in that Script Task, which gets hit, but the problem is that it only gets hit on the first iteration. So if I press F10 or F5, it does not break again on the second iteration.
How can I make it break at the same point on each iteration?
It seems to be an expected behaviour of SSIS, as stated in Books Online:
"If a Script task is part of a Foreach Loop or For Loop container, the debugger ignores breakpoints in the Script task after the first iteration of the loop."
http://technet.microsoft.com/en-us/library/ms137625.aspx
You can try to work around it with the following alternatives:
Interrupt execution and display a modal message by using the MessageBox.Show method in the System.Windows.Forms namespace. (Remove this code after you complete the debugging process.)
Raise events for informational messages, warnings, and errors. The FireInformation, FireWarning, and FireError methods display the event description in the Visual Studio Output window. However, the FireProgress method, the Console.Write method, and Console.WriteLine method do not display any information in the Output window. Messages from the FireProgress event appear on the Progress tab of SSIS Designer. For more information, see Raising Events in the Script Component.
Log events or user-defined messages to enabled logging providers. For more information, see Logging in the Script Component.
http://technet.microsoft.com/en-us/library/ms136033.aspx
I know this is an old question, but I have an idea I'd like to share.
As answered by Guilherme, I can add something that might be useful: if your Foreach loop is based on a SQL query, you can add ROW_NUMBER() to it and assign it to a variable. Inside the Script Task you can compare the value of this variable and break the task on any row you want:
if (Dts.Variables["Your_Variable"].Value.ToString() == "4")
{
    Console.WriteLine("Break"); // set a breakpoint on this line to pause on the chosen iteration
}
At least this way you can stop at any point in the loop, rather than only on the first iteration.

Observable Command and Unit tests with Rx RTM

I have ported my code to the RTM versions of both WinRT and Rx. I use ReactiveUI in my ViewModels. Before porting the code my unit tests ran without problems, but now I get strange behavior.
Here is the test:
var sut = new MyViewModel();
sut.MyCommand.Execute(null); // ReactiveAsyncCommand
Assert.AreEqual(0, sut.Collection.Count);
If I debug the test step by step, the assertion does not fail, but when using the test runner it fails...
The Collection asserted is modified by a method subscribing to the command:
MyCommand.RegisterAsyncTask(_ => DoWork())
.ObserveOn(SynchronizationContext.Current)
.Subscribe(MethodModifyingCollection);
The code was working before moving it to the RTM. I also tried removing the ObserveOn and adding an await Task.Delay() before the Assert, without success.
Steven's got the rightish answer, but there are a few RxUI-specific things missing. This is definitely related to scheduling in a test runner, but the reason is that the WinRT version of ReactiveUI can't properly detect whether it's running in a test runner at the moment.
The dumb workaround for now is to set this at the top of all your tests:
RxApp.DeferredScheduler = Scheduler.CurrentThread;
Do not use the TestScheduler for every test; it's overkill and actually isn't compatible with certain kinds of testing. TestScheduler is good for tests where you're simulating the passage of time.
Your problem is that MSTest unit tests have a default SynchronizationContext. So ObserveOn and ReactiveAsyncCommand will marshal to the thread pool instead of to the WPF context. This causes a race condition.
Your first and best option is the Rx TestScheduler.
Another option is to await some completion signal (and ensure your test method is async Task, not async void).
Otherwise, if you just need a SynchronizationContext, you can use AsyncContext from my AsyncEx library to execute the tests within your own SynchronizationContext.
Finally, if you have any code that directly uses Dispatcher instead of SynchronizationContext, you can use WpfContext from the Async CTP download.