I would like to create an array job where the tasks don't all execute at the same time. The tasks will be enabled by some future command. The reason that I need this feature is that I want to aggregate what would otherwise be many related jobs but the data needed for all the jobs isn't available at the same time.
I thought I could use qalter, but it doesn't allow changing options per task; it seems I can't even adjust the number of tasks in a job. The only remaining option I have is to let all the tasks issue and sleep until the data is available, but I don't like that solution as it results in wasted slots.
How can I get the behavior that I want, whereby an array job is created for a set of related tasks but the tasks issue in a controllable way?
Use the qalter command with the -hard option and a queue name.
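If your cluster runs a Grid Engine variant, another mechanism worth checking (different from -hard, and an assumption about your scheduler) is user holds: submit the whole array in a held state with qsub -h, then release individual tasks with qrls as their data becomes available. A minimal Python sketch; the job-script name, task range, and qsub output format are placeholders:

import subprocess

# Submit a 100-task array job with every task held (-h).
out = subprocess.run(
    ["qsub", "-h", "-t", "1-100", "process_task.sh"],
    capture_output=True, text=True, check=True).stdout
# Assumed SGE-style output: 'Your job-array 12345.1-100:1 (...) has been submitted'
job_id = out.split()[2].split(".")[0]

# Later, when the data for task 7 becomes available, release just that task.
subprocess.run(["qrls", "%s.7" % job_id], check=True)

Held tasks are not dispatched, so nothing sits in a slot sleeping while it waits for data.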
Is it possible to materialize the edits made as part of a scenario into a dataset in Foundry?
For each scenario, I want to write out the primary keys of the objects edited as part of that scenario.
The motivation is that I need to run multiple processes to compute metrics over the changed values for each scenario, at a scale and runtime that is not possible with Functions.
Edit with details:
The thing is that I am not making actual edits to the objects of the object type; I don't want to apply the scenario.
I tested the "Action Log" and it does not seem to pick up "uncommitted" actions, meaning actions that are just run as part of a scenario. Also, there does not seem to be a link to the scenario an action was part of, even if the changes were committed.
The workflow is that I have Object Type A, and I define multiple scenarios S on a subset of the objects in A.
Each scenario might make something like 50k edits to a subset of A, through multiple Actions backed by a Function.
I save some of the scenarios. Now I am able to load these scenarios and "apply" them on A again in Workshop.
However, I need to be able to get all the primary keys and the edited values of A materialized into a dataset (for each scenario), as I need to run some transformation logic to compute a metric for the change in each scenario (at a scale and execution time not possible in Functions).
The Action Log did not seem to help a lot for this. How do I get the "edits" as part of a saved scenario into a dataset?
The only logic you can run BEFORE applying a scenario is Functions.
Not sure about your exact logic, but Functions' Custom Aggregations can be very powerful: Docs here
This might not directly let you calculate the diff, but you could use the scenario compare widgets in Workshop to compare your aggregation across multiple scenarios.
e.g. you have a function that sums(total profit)
Your Workshop could show:
Current Data:
$10k
Scenario A:
$5k
Scenario B:
$13k
instead of something like:
Scenario A:
-$5k
Scenario B:
+$3k
AFAIK there's no first-class way of doing this (yet).
"Applying" a scenario basically means you're submitting the actions queued on the scenario to the ontology. So neither the actions nor the ontology are aware that they came from a scenario.
What I've been doing to achieve what you're working on is using the "Action Log". It's still in Beta, so you might need to ask for it to be enabled. It allows you to define, on each action, a "log" object to be created that can track the pks of your edited objects per Action.
How I do the action log is:
My Action log has the "timestamp" of the action when they were run.
My Scenarios have the "timestamp" of when it was applied.
Since "Applying a Scenario" means -> actually running all actions on Ontology (underlying data) this gets me this structure if I sort everything by timestamp:
Action 1
Action 2
Scenario A applied
Action 3
Action 4
Scenario B applied
This allows you to do a mapping later on: Actions 1 and 2 must come from Scenario A, and Actions 3 and 4 from Scenario B (see the sketch below).
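A minimal PySpark sketch of that timestamp mapping, assuming two input datasets action_log(action_rid, action_ts) and scenario_log(scenario_rid, applied_ts); the dataset and column names are placeholders, and in Foundry you would wrap this in a Transform:

from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.getOrCreate()
actions = spark.table("action_log")      # action_rid, action_ts
scenarios = spark.table("scenario_log")  # scenario_rid, applied_ts

# Pair every action with every scenario applied at or after it ...
candidates = actions.join(scenarios, F.col("action_ts") <= F.col("applied_ts"))

# ... then keep only the earliest such scenario per action.
w = Window.partitionBy("action_rid").orderBy("applied_ts")
mapped = (candidates
          .withColumn("rn", F.row_number().over(w))
          .filter(F.col("rn") == 1)
          .select("action_rid", "scenario_rid"))

Each action is attributed to the first scenario applied after it ran, which matches the sorted-by-timestamp structure above.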
EDIT: Apparently you might be able to use the Scenario RID directly in the Action Logs too (a recent addition I haven't adopted yet).
This still won't allow you to compute anything (in Transforms) BEFORE applying a scenario, though.
I have a work order in Maximo that has tasks.
I want to configure Maximo so that the work order cannot be changed to complete if any of the tasks are not complete.
Reason: I want to ensure that none of the tasks are accidentally missed when the work order is changed to complete.
How can I do this?
My consultant has suggested that this can only be done with Java customization of Maximo. I would like to verify if this is the only option.
Version: 7.6.1.1
From the screenshot, it looks like you're on Maximo 7.6.1. So, Java is certainly not the only option.
One of the ways to do this without any "coding" (other than a Conditional Expression, which doesn't count) is to put a Conditional Expression on synonyms of Complete in the WOSTATUS Synonym Domain. This solution will prevent synonyms of Complete from showing up in the list of statuses you can choose from unless the condition evaluates to true.
If you want to use Automation Scripts, you could call one from an Object Launch Point or from an Attribute Launch Point and have it throw an error if a situation like the screenshot would result.
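For illustration, a hedged Jython sketch of such a script, assuming an object launch point on WORKORDER that fires on save; the dynamic-relationship name and the message group/key are placeholders you would need to create in your system:

# mbo and service are implicit variables provided by Maximo's
# automation-scripting framework.
if mbo.getString("STATUS") in ("COMP", "CLOSE"):
    # Hypothetical dynamic relationship: incomplete tasks under this WO.
    openTasks = mbo.getMboSet("$OPENTASKS", "WORKORDER",
                              "parent = :wonum and istask = 1 and "
                              "status not in ('COMP','CLOSE','CAN')")
    if not openTasks.isEmpty():
        # "tasksnotcomplete" is a placeholder message-catalog key.
        service.error("workorder", "tasksnotcomplete")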
Alternatively to the above, you could choose to have Tasks inherit status changes from the parent automatically, in which case the Tasks in your screenshot would have changed to COMP when the WO they are under changed to COMP. You can configure "Inherit status changes" to be true by default and configure whether users can override that default.
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/IBM+Maximo+Asset+Management/page/Restricting+closure+of+parent+work+orders
Restricting closure of parent work orders when child or task work orders are in process: You can restrict users from closing a parent work order if any child or task work order is not closed, completed, or canceled. You create a conditional expression and apply it to the WOSTATUS domain closed, canceled, and complete values.
If all tasks must be completed and you don't care about parent/child work orders, then you can use the following condition instead:
not exists (select 1 from workorder where parent = :wonum
and istask = 1 and status not in ('COMP','CLOSE','CAN'))
A word of warning: test properly. If you require all tasks to be completed, it may affect escalations, and technicians may dislike having to mark all tasks complete on routine job plans.
In my current project I need to check if a process instance already exists. I should be able to check this against a value which is stored in the pipeline.
In WmMonitor I tried a few services, but with none of them was I able to find the existing process.
Do I need to mark the pipeline variable as a logged field to be able to find the process instance?
Maybe someone can share an example?
Regards, Dirk
Yes, you have to mark the field as a 'logged field' in order to be able to search for process instances with a certain value in this field. Use services from the package WmMonitor.
This is too long, so I used answer instead of comment.
As I understand it, you have some data; for simplicity, assume a single string represents the data. Let's say this is in a DB table. You should have a flag in the same table - processed, with values true/false (in a DB typically 0/1) - and there should be a scheduler creating processes for unprocessed data only.
So the pseudo code would be:
retrieve unprocessed data
for each record, try to create a process
if creation was successful, mark the record as processed and continue
In this scenario you do not need to check whether a process was already started...
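A minimal runnable sketch of that loop, assuming a table data(id, payload, processed) and a start_process() stub standing in for the actual process-engine call (neither is a real webMethods API):

import sqlite3

def start_process(payload):
    # Placeholder: invoke the process engine here; return True on success.
    print("starting process for", payload)
    return True

conn = sqlite3.connect("work.db")
rows = conn.execute("SELECT id, payload FROM data WHERE processed = 0").fetchall()
for row_id, payload in rows:
    if start_process(payload):
        # Mark as processed only after the process was created successfully.
        conn.execute("UPDATE data SET processed = 1 WHERE id = ?", (row_id,))
        conn.commit()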
Feel free to comment if something is unclear ;-)
I'm looking for some pointers on creating an SSIS-based workflow that reads a list of tables at run time from a database, uses each of these as ADO inputs, selects specific columns from each table, and then adds these to a staging area. I've had a quick play with the union task but was looking for some pointers in terms of direction to take.
I can't seem to find anything on the net that does what I need, and am not sure if SSIS can bend to suit my needs.
Many thanks in advance.
You can do this but the only method I can think of is a little convoluted.
You would need to use a "for each loop container" to loop through your list of tables & read each table name into an SSIS variable.
Within the "foreach":
add a script task to build your actual query into another SSIS variable (see the sketch after this list).
add a data flow
within the Data Flow use a source of "SQL Command from variable".
do data flow "stuff"
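The Script Task itself would be written in C# or VB.NET; this Python sketch only illustrates the query-building logic, and the column map plus the SSIS variable names (User::TableName, User::SourceQuery) are assumptions:

# Map each table to the specific columns you want staged (placeholder data).
TABLE_COLUMNS = {
    "dbo.Orders": ["OrderId", "CustomerId", "Amount"],
    "dbo.Customers": ["CustomerId", "Name"],
}

def build_query(table_name):
    # Build the SELECT that the data flow's "SQL Command from variable"
    # source will execute.
    return "SELECT %s FROM %s" % (", ".join(TABLE_COLUMNS[table_name]),
                                  table_name)

# In the real Script Task you would assign this string to
# Dts.Variables["User::SourceQuery"].Value.
print(build_query("dbo.Orders"))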
I hope this makes some kind of sense? :-)
I'd like to make a table that will keep track of a separate, regularly updated table on a day-to-day basis. For example, I currently have a table that keeps track of inventory, and once a day I'd like to run a report that gives me information like how many new items were added, how many items were sold, etc., and have each of those queries stored as separate columns in the table. Is this possible? I've done some research trying to find a solution but haven't had any luck yet.
Way 1: use a database trigger, which can fire an event when you insert/update/delete a row.
Way 2: in your code (e.g. Java), keep track of insert/remove events in an in-memory counter (you can use Spring AOP to detect the events and memory or memcache to keep the numbers), and use a scheduled program to write the data to a table and reset the counters every day (in Java, the JDK provides timer classes, or you can use the Quartz framework).
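To make way 2 concrete, a minimal sketch of the once-a-day summary job; the inventory schema (added_on / sold_on date columns) is an assumption, and sqlite3 is used only to keep the sketch self-contained:

import sqlite3
from datetime import date

conn = sqlite3.connect("inventory.db")
conn.execute("""CREATE TABLE IF NOT EXISTS daily_summary (
    day TEXT PRIMARY KEY, items_added INTEGER, items_sold INTEGER)""")

today = date.today().isoformat()
# Count today's additions and sales from the tracked inventory table.
added = conn.execute("SELECT COUNT(*) FROM inventory WHERE added_on = ?",
                     (today,)).fetchone()[0]
sold = conn.execute("SELECT COUNT(*) FROM inventory WHERE sold_on = ?",
                    (today,)).fetchone()[0]

# One row per day, one column per tracked metric.
conn.execute("INSERT OR REPLACE INTO daily_summary VALUES (?, ?, ?)",
             (today, added, sold))
conn.commit()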