Check for hung Office process when using Office Automation - language-agnostic

Is there a way to check whether a Microsoft Office process (i.e. Word, Excel) has hung when using Office Automation? Additionally, if the process is hung, is there a way to terminate it?

Let me start off saying that I don't recommend doing this in a service on a server, but I'll do my best to answer the questions.
Running as a service makes cleanup difficult. For example, will whatever you have running as a service survive killing a hung Word or Excel? You may be forced to kill the service itself. And will your service even stop while Word or Excel is in this state?
One problem with trying to test whether it is hung is that your test could cause a new instance of Word to start up and work fine, while the instance the service is using remains hung.
The best way to determine whether it's hung is to ask it to do what it is supposed to be doing and check for the results. I would need to know more about what it is actually doing to be more specific.
Here are some commands to use in a batch file for cleaning up (both should be in the path):
sc stop servicename - stops service named servicename
sc start servicename - starts service named servicename
sc query servicename - Queries the status of servicename
taskkill /F /IM excel.exe - terminates all instances of excel.exe

I remember doing this a few years ago - so I'm talking Office XP or 2003 days, not 2007.
Obviously a better solution for automation these days is to use the new XML format that describes docx etc using the System.IO.Packaging namespace.
Back then, I used to notice that whenever MSWord had kicked the bucket and had had enough, a process called "Dr. Watson" was running on the machine. This was my first clue that Word had tripped and fallen over. Sometimes I might see more than one WINWORD.EXE, but my code just used to scan for the good Doctor. Once I saw that (in code), I killed all WINWORD.EXE processes and the good Doctor himself, and restarted the process of torturing Word :-)
Hope that gives you some clues as to what to look for.
All the best,
Rob G
P.S. I might even be able to dig out the code in my archives if you don't come right!

I can answer the latter half; if you have a reference to the application object in your code, you can simply call "Quit" on it:
private Microsoft.Office.Interop.Excel.Application _excel;
// ... do some stuff ...
_excel.Quit();
For checking for a hung process, I'd guess you'd want to try to get some data from the application and see if you get results in a reasonable time frame (check in a timer or other thread or something). There's probably a better way though.
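That timer/thread idea can be sketched as follows. This is a hypothetical Python sketch (not from the original answers): run a probe call on a worker thread with a timeout, and if it doesn't come back in time, assume the application is hung and force-kill it, mirroring the taskkill command mentioned earlier. The probe is any cheap call you trust, e.g. reading a property off the automation object.

```python
import subprocess
import threading

def responds_within(probe, timeout_seconds):
    """Run probe() on a worker thread; return True if it finished in time."""
    done = threading.Event()

    def worker():
        probe()  # e.g. read a property from the automation object
        done.set()

    # Daemon thread: if the probe is truly stuck inside a COM call,
    # it will never return, and we don't want it to block process exit.
    threading.Thread(target=worker, daemon=True).start()
    return done.wait(timeout_seconds)

def kill_if_hung(probe, image_name="EXCEL.EXE", timeout_seconds=30):
    """If the probe stalls, force-kill every instance of the process."""
    if not responds_within(probe, timeout_seconds):
        # Same effect as the batch command: taskkill /F /IM EXCEL.EXE
        subprocess.run(["taskkill", "/F", "/IM", image_name], check=False)
        return True
    return False
```

Note the caveat from the answers above still applies: killing the process may leave the service that owned the automation object in a bad state, so the watchdog should also restart or recycle that service.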

Related

How to get started with a project

I have to write a program that tests products fully automatically.
I don't have the finished prototype board yet, but I do have a development board for the i.MX6UL processor;
see the kit below:
http://www.nxp.com/products/sensors/gyroscopes/i.mx6ultralite-evaluation-kit:MCIMX6UL-EVK?
My first task is to put a U-Boot image and a Linux file system on the board through Tcl code (as requested by the customer).
This all has to be done through a USB connection to the development kit board.
NXP provides a tool called MFGTOOL2; with it I can install a fully working Linux, but of course I need to do this with scripting and not via the tool, because it's for production testing.
All of this has to be installed on NAND flash.
Your question is very vague and likely to not be useful to others as so much of it looks specific to your development environment. This means that the first step is going to be for you to do some research so that you can learn how to split the project up into smaller tasks that are more easily achievable. One of the key things is likely to be identifying a mechanism — any mechanism — for achieving what you want in terms of communication (and enumerating exactly what are the things that will be communicating!) Once you know what program to run, API to call or message to send, you can then think “How do I do that in Tcl?” but until you are clear what you are trying to do, you won't get good answers from anyone else. Indeed, right now your question is so vague that we could answer “write some programs” and have that be a precisely correct (but unhelpful) reply to you.
You might want to start by identifying in your head whether the program will be running on the board, or on the system that is connected to the board, or on a system that is remote from both of those and that will be doing more complex management from afar.

NullPointerExceptions while executing LoadTest on WSO2BPS

While performing load tests on WSO2 BPS 3.2.0 we've run into a problem.
Let me tell you more about our project and our actions.
Our BPS process is designed to manage interactions with three systems. Basically it is spread over two parts: the first one to CREATE INSTANCE in one of the systems, then, after waiting a bit, to SELECT OFFER in the instance context.
In real life it looks like this: a user wants to get a product, the application asks a system for offers, and the user selects one of the available offers.
In BPS the first part is a straightforward process; the second part is spread over two flows: one to refresh the information with new offers, and another to wait for the user to choose one of them.
Our aim is to sustain about 1000-1500 simultaneous threads in the load test. The external systems are simulated by mock-ups driven by LoadUI.
We can achieve our goal if we disable "Process-Level Monitoring Events" in the deployment descriptor of our process (set it to "none"). Everything goes well and smoothly for hours.
But if we enable this feature (and we need to), everything fails with an error very soon (at around run 100-200):
[2015-07-28 17:47:02,573] ERROR {org.wso2.carbon.bpel.core.ode.integration.BPELProcessProxy} - Error processing response for MEX null
java.lang.NullPointerException
at org.wso2.carbon.bpel.core.ode.integration.BPELProcessProxy.onResponse(BPELProcessProxy.java:402)
at org.wso2.carbon.bpel.core.ode.integration.BPELProcessProxy.onAxisServiceInvoke(BPELProcessProxy.java:187)
at
[....Et cetera....]
After the first appearance of this error, another kind appears: other threads simply fail after a timeout.
The database seems to be fine (it is MySQL 5.6.25, by the way). The dashboard shows no extreme levels of input or output.
So I think BPS itself is the bottleneck. We have given it an 8 GB heap, and its configuration options are set for extreme numbers of threads (negative values where possible, and otherwise just ridiculously big ones like 100000).
Has anyone ever faced this problem? Any help is much appreciated.
Solved in BPS 3.5.0; refer to the release notes.

Can't delete disks on Google Cloud: not in ready state

I have a "standard persistent disk" of size 10GB on Google Cloud, using Ubuntu 12.04. Whenever I try to remove it, I encounter the following error:
The resource 'projects/XXX/zones/us-central1-f/disks/tahir-run-master-340fbaced6a5-d2' is not ready
Does anybody know about what's going on? How can I get rid of this disk?
This happened to me recently as well. I deleted an instance but the disk didn't get deleted (despite the auto-delete option being active). Any attempt to manually delete the disk resource via the dev console resulted in the mentioned error.
Additionally, the progress of the associated "Delete disk 'disk-name'" operation was stuck on 0%. (You can review the list of operations for your project by selecting compute -> compute engine -> operations from the navigation console).
I figured the disk-resource was "not ready" because it was locked by the stuck-operation, so I tried deleting the operation itself via the Google Compute Engine API (the dev console doesn't currently let you invoke the delete method on operation-resources). It goes without saying, trying to delete the operation proved to be impossible as well.
At the end of the day, I just waited for the problem to fix itself. The following morning I tried deleting the disk again and the operation was successful; it looks like the lock had been lifted in the meantime.
As for the cause of the problem, I'm still left clueless. It looks like the delete-operation was stuck for whatever reason (probably related to some issue or race-condition going on with the data-center's hardware/software infrastructure).
I think this probably isn't considered a valid answer by SO's standards, but I felt like sharing my experience anyway, as I had a really hard time finding any info about this kind of Google Compute Engine problem.
If you happen to ever hit the same or similar issue, you can try waiting it out, as any stuck operation will (most likely) eventually be canceled after it has been in PENDING state for too long, releasing any locked resources in the process.
Alternatively, if you need to solve the issue ASAP (which is often the case when it affects a resource that is critical to your production environment), you can try:
Contacting Google Support directly (only available to paid support customers)
Posting in the Google Compute Engine discussion group
Sending an email to gc-team(at)google.com to report a production issue
I believe your issue is the same as the one that was solved a few days ago.
If performing those steps doesn't resolve your issue, you can follow Andrea's suggestion or create a new issue.
Regards,
Adrián.

Mercurial MSSCCAPI Provider?

Does anyone know of an MSSCCAPI provider for Mercurial? I'd like to try out Kiln/Mercurial with PowerBuilder, but the PowerBuilder IDE only recognizes MSSCCAPI providers (which is not the same as MS SCC Package API) and the only one I can find is the original version of HgSccPackage.
I've contacted the developer and he has stated that he will not switch back from the Package to the regular API so that option leaves me with no upgrade path. This question was asked in July of 2010 with only the response that I have already mentioned. I'm hoping there's been something new since then. Thanks!
I don't have enough rep to post this as a comment to the question.
Maybe your PowerBuilder source control methodology is holding you back. Do you control the PB objects inside of the PBLs? Or do you control *.sr? (PB Export) files, maybe using PowerGen to synchronize them into the PBLs?
At work, a set of PBLs is published to programmers by the Build Manager at every build. This avoids having to pay one PowerGen license per programmer for them to sync up (which can take 5-40 minutes depending on the number of changes and the app size). You need just one license for the build server.
One caveat is that programmers need to understand the flow correctly so as not to lose code or cause regressions, but a good inspection of the TortoiseHg diff window should allow you to catch most problems before committing.

Change config values on a specific time

I just got a mail saying that I have to change a config value on 2009-09-01 (new taxes). Our normal approach for this would be to be awake at 23:59 on 2009-08-31 and then just change the value manually. This is not a big problem, since it doesn't happen too often. But it makes me wonder how other people handle issues like this.
So! How do you handle date specific config changes?
(We are working in asp.net but I don't think this has to be language specific)
Br
Carl Bergquist
I'd normally store this kind of data in a database table like this
Key, Value, EffectiveFrom, EffectiveTo
-----------------------------------------
VAT, 15.0, 20081201, 20091231
VAT, 17.5, 20100101, NULL
I'd then use the EffectiveFrom and EffectiveTo dates to choose the value that is effective at the given time. If the rate is open-ended then the EffectiveTo could be either NULL or 99991231.
This also allows you to go back without having to change the config. E.g. if someone asks you to recalculate the tax for the previous month before the rate change.
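As a sketch of how the lookup works, here is a hypothetical in-memory version in Python (in SQL this would simply be a WHERE clause filtering on EffectiveFrom/EffectiveTo; the names below are made up to match the table above):

```python
from datetime import date

# Rows mirror the table: (key, value, effective_from, effective_to).
# effective_to = None means the rate is open-ended.
RATES = [
    ("VAT", 15.0, date(2008, 12, 1), date(2009, 12, 31)),
    ("VAT", 17.5, date(2010, 1, 1), None),
]

def effective_value(key, on_date, rows=RATES):
    """Return the value whose effective date range contains on_date."""
    for k, value, start, end in rows:
        if k == key and start <= on_date and (end is None or on_date <= end):
            return value
    raise LookupError(f"no {key} rate effective on {on_date}")
```

Recalculating last month's tax is then just a matter of passing last month's date, with no config change needed.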
In linux, there is a command "at" for batch execution.
See "man at" for details.
To be honest, waking up near the time and changing it seems to be the simplest and cheapest approach. All of the technical solutions are fine, but it depends where you work.
In our environment it would be cheaper and simpler to get someone to wake up and make the change than to redevelop the functionality of a piece of software that already works. It certainly involves less testing, development overhead and costs which means we would tend to solve the problem as you do, manually.
That depends totally on the situation and the technology.
pjp's idea is good, if you get your config from a database, or as metadata to define the valid time for whole config sets/files.
Another might be: just prepare a new config file with the new entries and swap the files at midnight (probably with a restart of the service/program or whatever is needed).
Swapping them would be possible with at (as suggested by Neeraj)...
If timing is a problem, you should handle the change, or at least the timing of the change, on the running server (to avoid time-out-of-sync problems).
We ran into the same kind of problem some time ago and handled it using the following approach.
This is suitable if you know the source that originates the configuration changes well.
In our case, the source exposed a web service (actually a third party) which returns the modified config details. A Windows service running on our server keeps polling the web service and updates the configuration file whenever there is a change.
This works perfectly in our case.
You can make use of this approach by changing the polling part to point at your own source of config changes (say, reading changes from some disk path). But I am not sure how you would read config changes from an email.
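A rough sketch of such a polling updater, in Python rather than as a Windows service (all names, including fetch_remote_config, are hypothetical stand-ins for the web-service call):

```python
import json
import time
from pathlib import Path

def sync_config(fetch_remote_config, config_path, poll_seconds=60, max_polls=None):
    """Poll a config source; rewrite the local file only when it changes."""
    path = Path(config_path)
    last = path.read_text() if path.exists() else None
    polls = 0
    while max_polls is None or polls < max_polls:
        polls += 1
        # Serialize deterministically so unchanged configs compare equal.
        current = json.dumps(fetch_remote_config(), sort_keys=True)
        if current != last:
            path.write_text(current)
            last = current
        if max_polls is None or polls < max_polls:
            time.sleep(poll_seconds)
    return last
```

A real service would add error handling around the fetch and write the file atomically (write to a temp file, then rename), but the polling loop itself is this simple.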
Why not just make a shell script to swap out the files? Run it from cron a minute before the deadline, and have it send a text alert if it is NOT successful and an email if it is.
This example is for a Linux box, but I think you get the point and can do the same on a Windows box.
Script:
#!/bin/sh
# Back up the live config, then copy the new one over it.
cp /path/to/old/config /path/to/backup/dir/config.$(date +%Y%m%d%H%M%S)
if cp /path/to/new/config /path/to/old/config; then
    sendSuccessEmail
else
    sendPanicTextAlert
fi
cron:
59 23 31 8 * /path/to/script.sh
You could test this beforehand as well; just point it at some dummy directories and files.
I've seen the hybrid approach. Instead of actually changing the data model to include EffectiveDate/EndDate or manually changing the values yourself, schedule a script to change the values automatically. Also, be sure to have a solid test plan that will validate all changes.
However, this type of manual change can have a dramatic impact on reporting. If previous transactions join directly to the tables being changed, numbers in historical reports could change in a very bad way. There really is no "right" answer.
If I'm not able to do something like pjp's solution, I'd use either a scheduled task or a server job to update it automatically at the right time.
But...I'd probably still be awake checking it had worked.
Look, the best solution would be to parameterise your config file and add things like the date from which a certain entry should be used. This would negate the need for any copying or swapping of files, and your application would simply deal with it. (That goes for a config-file approach or a database.)
If you cannot change the current systems and you have to go with swapping the config files, then you also have two options:
Use a scheduled task to kick off a batch job, or even a VBScript or PowerShell script (whichever you feel comfortable with). Make sure you set up the correct credentials to be able to do this in the middle of the night, and you could also add some checking and mitigation to this approach.
Write a Windows service that does this for you. Here you have all the flexibility you need. Code it to do whatever it needs to do, do all the checks you need (so that you can keep sleeping rather than making sure it actually worked), etc. Your service would then even take care of the scheduling aspect and all will be good. Here you could use an XML DOM object and XPath and, rather than replacing the file, simply update the specific entries as required.
Remember that any change to the config file will cause your site to restart, so make sure you take care of all the other housekeeping this could cause. (Although this would be exactly the same if you were sitting there in the middle of the night copying files around.)
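The "update specific entries via XML DOM and XPath" idea from the second option could be sketched like this, using Python's standard library instead of a Windows service (the appSettings layout is the usual .NET config shape, but treat the element and key names as assumptions):

```python
import xml.etree.ElementTree as ET

def set_app_setting(config_path, key, new_value):
    """Update one <add key="..." value="..."/> entry in a .config file."""
    tree = ET.parse(config_path)
    # ElementTree supports a limited XPath subset, including
    # attribute predicates like [@key='...'].
    node = tree.getroot().find(f".//appSettings/add[@key='{key}']")
    if node is None:
        raise KeyError(f"no appSettings entry named {key!r}")
    node.set("value", new_value)
    tree.write(config_path, xml_declaration=True, encoding="utf-8")
```

Updating in place rather than swapping whole files means only the one entry changes, which keeps the rest of the config (and any comments handled by a fuller XML library) out of harm's way.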