When processing an SSAS cube, how can I gain visibility into progress?

When I issue the ProcessFull command, I would like to know the following:
What is the current dimension being processed?
How many more dimensions will need to be processed before the ProcessFull command completes?
What APIs can I use to build my own progress bar?

The way that SSMS and BIDS do this is by listening to trace events (the ones you can see using SQL Server Profiler).
So you could use AMO to get a list of all the dimensions, cubes, partitions, etc. in your database, and then capture the trace events once processing starts.
You can actually do all of this via AMO: if you use the SessionTrace object and attach event handlers, you can listen to the events raised for the work done within that AMO session.
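As a minimal sketch of that idea in C# (assuming the AMO `Server.SessionTrace` API; the connection string, database name, and the exact `TraceEventArgs` members should be checked against your AMO version):

```csharp
using Microsoft.AnalysisServices;

class ProcessMonitor
{
    static void Main()
    {
        var server = new Server();
        server.Connect("Data Source=localhost");   // hypothetical server

        // Listen to trace events raised for work done in this AMO session.
        server.SessionTrace.OnEvent += (sender, e) =>
        {
            if (e.EventClass == TraceEventClass.ProgressReportBegin ||
                e.EventClass == TraceEventClass.ProgressReportEnd)
            {
                // e.ObjectName tells you which dimension/partition is being
                // processed; counting ProgressReportEnd events against the
                // object count from AMO metadata can drive a progress bar.
                System.Console.WriteLine("{0}: {1}", e.EventClass, e.ObjectName);
            }
        };
        server.SessionTrace.Start();

        var db = server.Databases["MyOlapDb"];   // hypothetical database name
        db.Process(ProcessType.ProcessFull);     // events fire while this runs

        server.SessionTrace.Stop();
        server.Disconnect();
    }
}
```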

How to exclude methods from code profiling

I am profiling code with dotTrace, and I would like to be able to exclude specific methods from the profiling - namely the ones that call external services and whose performance I do not control.
Is there a way to do this? I am trying to filter my results using dotTrace subsystems, but I have not been able to "hide" these method calls from the profiling results.
Thanks in advance.
To exclude a method from the call tree, you can just press Del or Shift+Del on it.
See https://www.jetbrains.com/help/profiler/Studying_Profiling_Results__Performance_Forecasting.html for Sampling/Tracing/Line-by-line and https://www.jetbrains.com/help/profiler/Forecasting_Performance_Timeline.html for Timeline profiling mode.
If you are using Line-by-line profiling mode, you can profile only particular methods using filters: https://www.jetbrains.com/help/profiler/Profiler_Options.html#filters

When will SetUnhandledExceptionFilter not work? E.g., stack corruption?

I would like to have my code create a dump for unhandled exceptions.
I'd thought of using SetUnhandledExceptionFilter, but in what cases might SetUnhandledExceptionFilter not work as expected? For example, what about stack corruption issues, such as when a buffer overrun occurs on the stack?
What will happen in that case? Are there any additional solutions that will always work?
I've been using SetUnhandledExceptionFilter for quite a while and have not noticed any crashes/problems that are not trapped correctly. And, if an exception is not handled somewhere in the code, it should get handled by the filter. From MSDN regarding the filter...
After calling this function, if an exception occurs in a process that is not being debugged, and the exception makes it to the unhandled exception filter, that filter will call the exception filter function specified by the lpTopLevelExceptionFilter parameter.
There is no mention that the above applies to only certain types of exceptions.
I don't use the filter to create a dump file because the application uses the Microsoft WER system to report crashes. Rather, the filter is used to provide an opportunity to collect two additional files to attach to the crash report (and dump file) that Microsoft will collect.
Here's an example of Microsoft's crash report dashboard for the application with module names redacted.
You'll see that there's a wide range of crash types collected, including stack buffer overruns.
Also, make sure no other code calls SetUnhandledExceptionFilter() after you have set it to your handler.
I had a similar issue and in my case it was caused by another linked library (ImageMagick) which called SetUnhandledExceptionFilter() from its Magick::InitializeMagick() which was called just in some situations in our application. Then it replaced our handler with ImageMagick's handler.
I found it by setting a breakpoint on SetUnhandledExceptionFilter() in gdb and checking the backtrace.

ClojureScript, Om and Core.async: How to handle events properly

I have had a look at using Om for rich-client website design. This is also my first time using core.async. Reading the tutorial https://github.com/swannodette/om/wiki/Basic-Tutorial I have seen a core.async channel being used to handle the delete operation (as opposed to doing all the work in the handler). I was under the impression that using a channel for deletion was merely done because the delete callback was declared in a scope with a cursor at the item level, whereas you actually want to manipulate the list containing that item.
To get more insight into channels I have watched Rich Hickey's talk http://www.infoq.com/presentations/clojure-core-async where he explains how it's a good idea to use channels to get application logic out of event callbacks. This made me wonder whether the actual purpose of the delete channel in the tutorial was to demonstrate that way of structuring an application. If so:
what are best practices associated with that pattern?
Should one create individual channels for all kinds of events? That is, if I add a control that creates a new item, would I also create a new channel for object creation, which is then used at another place in the application to get the objects to be added to the global state?
Let's say I have a list of items, and one item has a detailed/concise state flag. If detailed? is true it will display more information; if detailed? is false it will display less. I have attached an on-click handler that uses om/transact! on the cursor (a view onto the list item within the global state object).
(let [toggle-detail-handler
      (fn [e]
        (om/transact! (get-in myitem [:state])
                      #(conj % {:detailed? (not (:detailed? %))})))]
  (html [:li {:on-click toggle-detail-handler}
         "..."]))
I realize that this is a very succinct snippet where the benefit of decoupling the callback event from the actual logic does not at first seem worth the effort, and that with more complex examples the benefits outweigh the cost. But on the other hand, introducing an extra channel for such detailed/not-detailed toggling seems to add a fair amount of noise to the source code as well.
It would be great if you could give some hints/tips or other thoughts on the whole design issue and put them into perspective. I feel a little lost there.
I use channels to communicate between components that cannot communicate through cursors.
For example, I use channels when:
the communicating components do not share app state (e.g., their cursors are pointing down different branches of a hierarchical data structure)
the changes being communicated live outside of the app state (for example, component A wants to change component B's local state and component B is not a child of A; otherwise this could be done by passing :state to om/build)
I want to communicate with something outside of the Om component tree
Note that I like to keep the "domain state" in the app state atom and the GUI state in component local state. That is, the app state is what is being rendered and the local state is how (where "how" also covers which part). For example, if you are writing a text editor, the app state is the document being edited, and the local state is which page is being edited, whether bold is selected, and so forth.
In general, I use a single communication channel onto which I place [topic value] pairs. I then use pub and sub to route the messages. E.g., (def p (async/pub ch first)) uses the topic to dispatch events, and (async/sub p :foo my-ch) routes messages with topic :foo to my-ch. I generally store this single communication channel in Om's shared state.
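Concretely, that shared-channel setup might look like this sketch (namespace, topic, and handler names are made up):

```clojure
(ns my.app
  (:require [cljs.core.async :as async :refer [chan put! <!]])
  (:require-macros [cljs.core.async.macros :refer [go-loop]]))

;; One shared channel; every message is a [topic value] pair.
(def events (chan))

;; Dispatch on the first element of each message, i.e. the topic.
(def events-pub (async/pub events first))

;; Publisher side: any component can fire an event.
(defn delete-item! [id]
  (put! events [:delete id]))

;; Subscriber side: route :delete messages to a dedicated channel.
(def delete-ch (chan))
(async/sub events-pub :delete delete-ch)

(go-loop []
  (let [[_ id] (<! delete-ch)]
    ;; react to the deletion here, e.g. with om/transact! on an app cursor
    (recur)))
```

The events channel itself would typically be put into Om's shared state (via :shared in om/root) so every component can reach it.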
Sometimes I will use multiple channels, but I would do this to set up specific pipelines or workflows, rather than for general purpose messaging. For example, if I have a pipeline of processing components doing stuff to a stream of data, then I might set this up as a chain of channels with the endpoints connected into my Om application. For general UI development, this is rare.
I'm also playing around with a Qt-esque signals/slots system for my Om components and I'm still experimenting with using a shared signals channels vs having each signal be its own channel. I'm as yet undecided which approach is better.

What is an efficient way of logging in an existing system?

I have the following in my system:
4 File folders
5 Applications that do some processing on files in the folders and then move files to the next folder (processing: read files, update db..)
The process is defined by Stages: 1,2,3,4,5.
As the files are moved along, the Stage field within them is updated to the next Stage.
Sometimes there are exceptions in the system, not necessarily exceptions in the code but exceptions in the process.
For instance, there is an error in transmitting a file to the next folder. In that case the stage is not updated, and a record is written to the DB for this file.
Given what I want to do, what is the best approach?
I want to plug a utility of some sort or add code to the applications that will capture any exceptions in the process. Like if a file was not moved, I want to know what stage and why. This will help in figuring out the break down in the process.
I need something that will provide the overall health of the process.
Not sure how to go about doing this from an architectural point of view.
The scheduler? Well that might knock the idea out anyway.
Exit codes are still up and running from the DOS days.
The exit code is a property of the Application class (0, the default, means success).
So from your app you'd detect an error and set the application exit code to some meaningful number like 1703 (boo hoo):
Application.Shutdown(1703); // the .NET 4 way
However, seeing as the scheduler is presumably just running the app, you'd have to script it all up. You might as well just write a common logging DLL and add it to each app rather than mess about with that, especially if you want the same behaviour when the app is run from outside the scheduler.
Another option would be delegating, i.e. you write an app that runs the target app (passed in as a command-line parameter) and logs the result (via the exit code, for instance), and then change the scheduler items to call that wrapper with the requisite parameter.
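As a sketch of that delegating approach (shown in Python for illustration; the answer doesn't prescribe a language, and the log file name is made up), the wrapper just runs the real application and logs its exit code:

```python
import logging
import subprocess
import sys

# Central log for every scheduled job (hypothetical file name).
logging.basicConfig(filename="scheduler_runs.log",
                    format="%(asctime)s %(message)s",
                    level=logging.INFO)

def run_and_log(argv):
    """Run the wrapped application and log its exit code.

    The scheduler calls this wrapper instead of the app itself, so every
    scheduled job gets uniform logging without modifying the apps.
    """
    result = subprocess.run(argv)
    if result.returncode == 0:
        logging.info("OK      %s", " ".join(argv))
    else:
        logging.error("FAILED  %s (exit code %d)",
                      " ".join(argv), result.returncode)
    return result.returncode
```

A scheduler item would then run something like `python wrapper.py C:\jobs\stage3.exe` instead of the app directly.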

AS3 Error #1502

Error: Error #1502: A script has executed for longer than the default timeout period of 15 seconds.
Is there a way to temporarily suppress this on a specific block of code?
I am creating a HUGE dynamic 3D array of objects, 1000x1000x1000, and need the build to actually finish initializing.
Your best bet would be to try to refactor your code. Perhaps you can make use of this tutorial, which deals with the exact problem you are having:
http://www.senocular.com/flash/tutorials/asyncoperations/
Increasing the timeout is one option; however, I would also suggest considering an approach that builds your arrays over multiple frames, that is, splitting the work up into separate jobs. As long as you give control back to the Flash Player every once in a while, you will not get this exception.
I'm not certain of the specifics of your problem, but you will need to find a way to parallelize, or simply segment, your calculations. If your algorithm centers around one major loop, consider creating a function that takes all of the arguments necessary to record the context of a single iteration. Then create a simple control loop that calls this function and decides when to wait until the next frame and when not to. Leveraging AS3 closures can also help with this.
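A minimal sketch of that frame-sliced pattern in AS3 (inside a DisplayObject subclass; the member names and the CHUNK size are made up and should be tuned to your frame budget):

```actionscript
import flash.events.Event;

private const TOTAL:int = 1000 * 1000 * 1000;  // total iterations needed
private const CHUNK:int = 50000;               // iterations per frame (tune)
private var i:int = 0;                         // loop context kept between frames

private function startBuild():void {
    addEventListener(Event.ENTER_FRAME, buildStep);
}

private function buildStep(e:Event):void {
    // Do only a slice of the work, then yield back to the Flash Player.
    var end:int = int(Math.min(i + CHUNK, TOTAL));
    for (; i < end; i++) {
        // initialize element i here
    }
    if (i >= TOTAL) {
        removeEventListener(Event.ENTER_FRAME, buildStep);
        // done: continue with the rest of the program
    }
}
```

Because each ENTER_FRAME handler call returns quickly, the 15-second script timeout never trips, and the UI stays responsive while the structure is built.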
Look for the script execution time limit in the "Publish Settings" (Flash). If you're using Flex, this one may be useful: http://livedocs.adobe.com/flex/3/html/help.html?content=compilers_14.html (check default-script-limits, max-recursion-depth, max-execution-time). Unfortunately, there is apparently no way to change this behaviour for a specific piece of code; it is a global setting.
I do not recommend the increase-the-timeout option, because for all that time your application just hangs the whole Flash Player, and the user will normally think it is down and force it to quit.
Check this one out: How to show the current progressBar value of process within a loop in flex-as3?
Then you can even show the progress, which would be much more reassuring for both you and the user.