In my current project I need to check whether a process instance already exists. I should be able to check this against a value that is stored in the pipeline.
In WmMonitor I tried a few services, but with none of them was I able to find the existing process.
Do I need to mark the pipeline variable as a logged field to be able to find the process instance?
Maybe someone can share an example?
Regards, Dirk
Yes, you have to mark the field as a 'logged field' in order to be able to search for process instances with a certain value in this field. Use services from the package WmMonitor.
This is too long for a comment, so I am posting it as an answer.
As I understand it, you have some data; for simplicity, assume that a single string represents the data. Let's say it is in a DB table. You should have a flag in the same table - processed, with values true/false (in a DB typically 0/1) - and there should be a scheduler creating processes for unprocessed data only.
So the pseudo code would be:
retrieve unprocessed data
for each record try to create process
if creation was successful, mark the data as processed and continue
in this scenario you do not need to check whether the process was already started...
Feel free to comment if something is unclear ;-)
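The pseudocode above can be sketched out concretely. This is a minimal illustration using an in-memory SQLite table; the table name, columns, and the `start_process()` function are assumptions for illustration only (in practice `start_process` would be whatever actually creates the process instance, e.g. an Integration Server invoke):

```python
import sqlite3

def start_process(payload):
    # Placeholder for whatever actually creates the process instance.
    # Return True on success, False on failure.
    return True

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE work (id INTEGER PRIMARY KEY, data TEXT, processed INTEGER DEFAULT 0)"
)
conn.executemany("INSERT INTO work (data) VALUES (?)", [("a",), ("b",), ("c",)])

# retrieve unprocessed data
for row_id, data in conn.execute(
    "SELECT id, data FROM work WHERE processed = 0"
).fetchall():
    # for each record, try to create a process
    if start_process(data):
        # if creation was successful, mark the data as processed and continue
        conn.execute("UPDATE work SET processed = 1 WHERE id = ?", (row_id,))
conn.commit()
```

Because only rows with `processed = 0` are ever picked up, a record that was already turned into a process is never submitted twice - which is why no duplicate-instance check is needed.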
Related
Is it possible to materialize the edits made as part of a scenario into a dataset in Foundry?
I want for each scenario to write out the primary keys of the objects edited as part of the scenario.
The motivation is that I need to run multiple processes to compute metrics as part of the changed values for each scenario, at a scale and runtime that is not possible to do with Functions.
Edit with details:
The thing is that I am not making actual edits to the objects of the object type; I don't want to apply the scenario.
I tested out the "Action Log" and it does not seem to pick up "uncommitted" actions, meaning actions that are just run as part of a scenario. Also, there does not seem to be a link to the scenario an action was part of, even if the changes were committed.
The workflow is that I have Object Type A, and I define multiple scenarios S on a subset of the objects in A.
Each scenario might make something like 50k edits to a subset of A, through multiple Actions backed by a Function.
I save some of the scenarios. Now I am able to load these scenarios and "apply" them on A again in Workshop.
However, I need to be able to get all the primary keys and the edited values of A materialized into a dataset (for each scenario), as I need to run some transformation logic to compute a metric for the change in each scenario (at a scale and execution time not possible in Functions).
The Action Log did not seem to help a lot for this. How do I get the "edits" as part of a saved scenario into a dataset?
The only logic you can run BEFORE applying will be functions.
Not sure about your exact logic, but Functions' Custom Aggregations can be very powerful: Docs here
This might not directly let you calculate the diff, but you could use the scenario-compare widgets in Workshop to compare your aggregation between multiple scenarios.
e.g. you have a function that returns sum(total profit).
Your Workshop could show:
Current Data:
$10k
Scenario A:
$5k
Scenario B:
$13k
instead of like:
Scenario A:
-$5k
Scenario B:
+$3k
Afaik there's no first class way of doing this (yet).
"Applying" a scenario basically means you're submitting the actions queued on the scenario to the ontology. So neither the actions nor the ontology are aware that they came from a scenario.
What I've been doing to achieve what you're working on is using the "Action Log". It's still in Beta so you might need to ask for it to be enabled. It will allow you on each action to define a "log" object to be created that can track the pks of your edited objects per Action.
How I do the action log is:
My Action log has the "timestamp" of the action when they were run.
My Scenarios have the "timestamp" of when it was applied.
Since "Applying a Scenario" means -> actually running all actions on Ontology (underlying data) this gets me this structure if I sort everything by timestamp:
Action 1
Action 2
Scenario A applied
Action 3
Action 4
Scenario B applied
This allows you to do a mapping later on: Actions 1 and 2 must come from Scenario A, and Actions 3 and 4 from Scenario B.
EDIT: Apparently you might be able to use the Scenario RID directly in the Action Logs too (which is a recent addition I haven't adopted yet)
This won't, though, allow you to compute anything (in Transforms...) BEFORE applying a scenario.
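The timestamp-sorting trick above amounts to a simple interval assignment: each "scenario applied" event claims every action logged since the previous scenario was applied. A small Python sketch of that mapping, with made-up event data:

```python
# (timestamp, action_id) pairs from the Action Log, and
# (timestamp, scenario_name) pairs for when each scenario was applied.
actions = [
    (1, "Action 1"), (2, "Action 2"), (5, "Action 3"), (6, "Action 4"),
]
scenarios = [
    (3, "Scenario A"), (7, "Scenario B"),
]

mapping = {}
last_cutoff = float("-inf")
for applied_at, scenario in sorted(scenarios):
    # Every action logged after the previous scenario was applied,
    # up to and including this scenario's apply time, belongs to it.
    mapping[scenario] = [a for ts, a in actions if last_cutoff < ts <= applied_at]
    last_cutoff = applied_at

# mapping == {"Scenario A": ["Action 1", "Action 2"],
#             "Scenario B": ["Action 3", "Action 4"]}
```

If the Scenario RID really is available directly on the Action Log entries (as mentioned above), a plain group-by on that RID would of course replace this timestamp heuristic.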
When I try to execute transaction code DP80 (creating a quotation from a PM order), I get error message number V1320 (in French, since I use the French version of SAP): it is asking for an item category.
So far I have found that the solution is to define the item category via transaction VOV4. Here is the image of the view I got by executing that transaction code. I am supposed to enter the same value (SEIN) in the selected row, but I don't know how to do it because I can't type in it.
You can't edit a field that is part of the key. In this case you should select a row, copy it, modify the entry, press ENTER and then save the table.
Hope it helps.
This error will come up if any value is set in the CMIR. As you already found the relevant transaction VOV4, you have to copy the existing data; upon saving, a transport request will be generated, provided you have the authorization to do so. Otherwise, ask one of your team members who has this authorization to do it.
By the way, do you maintain a Customer Material Info Record, where this field (Usage) comes into the picture?
I am working on a small application in Access Services on SharePoint to log colleagues' leave requests, and I need to work out a data macro to calculate how many days of leave they have remaining from their allowance.
I have a table [Colleagues] with all of the user data; for simplicity I'll reduce it to [Email] and [Allowance] in days. I have another table, [Requests], which stores the requests, including the number of days to deduct in each approved leave request, [Days Requested].
I have set up a query that returns all approved requests for the colleague, and I would like to use a data macro that is triggered when the colleague logs in. As you cannot use aggregate functions in web applications, I am currently using ForEachRecord over the query to total the number of deductible days; however, I cannot work out how to return that total to a field in the [Colleagues] record.
According to the Access help, I should be able to set the value to a LocalVar and use it in expressions simply by referencing [Deductible Days], but this is not working.
Any help?
I finally worked this out after much tinkering.
In my query I included the [Colleague Email] field as well as the [Days Requested] field. When my application loads, it navigates to a form created from the [Colleagues] table, and I have modified the Data Source of the form to link the [Email] field in the query results to the [Email] field in the [Colleagues] form.
Following this I was able to create an unbound textbox with the data source =Sum([Days Requested]) referring to the relevant field in the query. Voila! I now have the value to play around with in my application.
Hope that helps; it took a lot of fiddling around. No data macros were needed after all, but it's a method I shall remember in future - it opens up a lot of possibilities.
If I understand your situation correctly, I was faced with a very similar problem.
I believe the solution used here will work for you. It involves using a query to sum up the values (we would use Sum where he used Count), using a data macro to run the query, and then having an On Insert/On Update event trigger the data macro:
http://devspoint.wordpress.com/2014/03/26/validating-data-with-data-macros-in-access-services-2013/
Let me know if this works for you. It worked for me!
I am writing an SSIS package to import data from *.csv files into a SQL Server 2008 DB. The problem is that one of the files contains duplicate records, and I want to extract only the distinct values from that source. Please see the image below.
Unfortunately, the generated files are not under my control; they are owned by a third party, and I cannot change the way they are generated.
I did use the Lookup component, but it only checks the existing data against the incoming data. It does not catch duplicate records within the incoming data.
I believe the Sort component gives an option to remove duplicate rows.
It depends on how serious you want to get about the duplicates. Do you need a record of what was duplicated, or is it enough to just get rid of them? The Sort component will get rid of dups on the sort field. However, the dups may have different data in the other fields, and then you want a different strategy. Usually I load everything into staging tables and clean up from there. I send the removed dups to an exception table (we have to answer a lot of questions from our customers about why things don't match what they sent), and I often use a set of business rules (enforced with either an Execute SQL task or data flow tasks) to determine which one to pick if there are duplicates in one area but not another (say, two business addresses when we can only store one). I also make sure the client is aware of how we determine which of the two to pick.
Use the Sort component from the Toolbox, then double-click it. You will see all the available input columns.
Check the column, set the sort direction, and then check "Remove rows with duplicate sort values".
Bring in the data from the csv file the way it is, then dedup it after it's loaded.
It'll be easier to debug, too.
I used the Aggregate component and grouped by both QualificationID and UnitID. If you want, you can use the Sort component instead. Perhaps this information will help others.
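Whether done with Sort (remove duplicates) or Aggregate (Group By), the dedup approaches above boil down to keeping one row per key. Here is an in-memory Python sketch of that keep-first-occurrence idea; the column names mirror the ones mentioned in this thread, but the data itself is made up:

```python
import csv
import io

# Made-up CSV content standing in for the third-party file.
raw = """QualificationID,UnitID,Score
Q1,U1,10
Q1,U1,10
Q2,U1,7
"""

seen = set()
distinct = []
for row in csv.DictReader(io.StringIO(raw)):
    key = (row["QualificationID"], row["UnitID"])
    if key not in seen:  # first occurrence of this key wins
        seen.add(key)
        distinct.append(row)

# distinct now holds 2 rows: (Q1, U1) and (Q2, U1)
```

Note that, just as the staging-table answer points out, "keep the first" is itself a business rule: if duplicates differ in the non-key columns, you need to decide explicitly which copy survives.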
While working with databases, I find it useful to have some tools that help me solve DB problems.
Some of them are:
1) An INSERT generator
2) A tool that can execute a script against a list of DBs
3) A tool for finding text in stored procedures and functions
4) DB backup scripts
My question is, what are most useful tools, scripts(anything else), that help you to work with SQL Server?
Thanks in advance.
UPDATE
I assume there are no other tools for SQL Server 2008 or any other version?
Redgate has a collection of quite powerful tools for SQL Server.
Check out the SSMS Tools Pack.
I have stored procedures that do the following:
system utilities:
find and list every occurrence and info about a given column name or partial column name
find and list every occurrence and info about a given object name or partial object name
list out all the information for a given table: all columns, computed columns, column data types, nullability, defaults, identity, check constraints, indexes, PKs, FKs, triggers, and column comments
find every trigger, view, stored procedure, or function that contains a given string
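The first utility above - finding every occurrence of a (partial) column name - can be built on SQL Server's INFORMATION_SCHEMA views. A small sketch that just builds the metadata query as a string (actually running it would need a DB connection, e.g. via pyodbc, which is omitted here; the function name is mine):

```python
def find_columns_query(partial_name: str) -> str:
    # Escape single quotes to keep the literal safe; in real code, prefer
    # a parameterized query over string building.
    safe = partial_name.replace("'", "''")
    return (
        "SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE\n"
        "FROM INFORMATION_SCHEMA.COLUMNS\n"
        f"WHERE COLUMN_NAME LIKE '%{safe}%'\n"
        "ORDER BY TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME"
    )

print(find_columns_query("Doctor"))
```

The object-name and find-string utilities follow the same pattern, querying `INFORMATION_SCHEMA.TABLES`/`INFORMATION_SCHEMA.ROUTINES` (or `sys.sql_modules` for full module text) instead.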
business utilities:
I also make stored procedures that work with business info. When working on an area of our application, I'll make a procedure that displays all the related info for a given thing. I'll usually display all the info using multiple PRINTs and SELECTs for everything that can join to the given PK (not if there are thousands of rows, though). For example, one utility would take a DoctorID as a parameter and list out all the doctor's info, the offices they work at, the insurance they accept, etc. I like to include the table names in the output so I can remember where the data comes from without looking at the code. I also join in all the code tables in these displays, so I'm not looking at "A" but at "Active (A)". After working on a system for a while, I have loads of these utilities, which helps greatly when a support call comes in or you need to debug a problem. I usually build these as I develop; it is difficult to find time to go back and make them afterwards.