Table for complaint storage statuses? - sap-erp

I am trying to download the SAP QPRS table to get a list of [Physical sample, Material number, Batch & Physical sample status (sample was stored & sample exists)]. I am using the SAP .NET Connector to call the function module RFC_READ_TABLE to read the table QPRS.
But I cannot find the screen fields RQPRS-PRNVX and RQPRS-AUFBX that are related to the physical sample status. Are they part of a structure, and if so, how can I use the RFC_READ_TABLE function module to read this data?
I know I can use function modules to get the physical sample status, but that works material by material, which is very slow. I want to get the whole table content at once to make it faster.
NB: The data needed is shown in SAP transaction QPR3, on the "Sample Storage" tab:

The flags are stored as statuses in the table JEST:
Sample Was Stored: status I0363 (short code in Status History: "STRD")
Sample Consumed/Destroyed: status I0362 (short code in Status History: "USED")
The table JEST has 3 important columns:
OBJNR: "QP" followed with the sample number (12 digits, right-aligned, padded with zeroes on the left)
STAT: status
INACT: " " means that the status is set, "X" means that the status is not set

JMeter : How to read particular row data in csv file based on a column value?

I am new to JMeter and doing a POC for a load test on a web application.
What I am trying to do:
I have a total of 4 user logins (surgeons). Each login is associated with 'n' patients.
I've created 2 CSV files:
one with the user login and password for each surgeon
another that contains the PatientName, PatientId and the surgeon (loginName) associated with that patient, like below.
PatientName,PatientId,loginName
Pa1,PID1,user1
Pa2,PID2,user1
Pa3,PID3,user1
Pa4,PID4,user1
Pa5,PID5,user2
Pa6,PID6,user2
Pa7,PID7,user3
Pa8,PID8,user4
My Scenario:
Login as User.
Navigate to Each Patient Dashboard as per their associations.
log out of the application.
My Testplan
Thread Group (4 users, ramp-up time of 1 sec, 1 loop)
-CSV1 (with username, password)
-Login Page and Navigate to the Main page
-Runtime Controller (to sustain the load for a set amount of time)
--While Loop (to loop through the patient dashboards of the surgeon/user logged in)
---CSV2 (the data as shown above)
----Navigate to Dashboard
----Navigate to Main
-Log out of the Application
What I want to achieve:
I want to use a single thread group and run it concurrently for all 4 users. In this process, once a user logs in, that user should only read those patients from the CSV that are associated with them.
For example: when Thread1 is running with the User1 login, it should only be able to loop through Pa1, Pa2, Pa3 and Pa4; when Thread2 is running with the User2 login, it should only read the Pa5 and Pa6 data.
Like this, each user login should only pick the patients as per the associations mentioned above.
Is there any way I can use this single CSV2 file and achieve this task, so that I don't have to create n thread groups for n logins, with n CSV files each containing the data specific to one user login?
I did try to use the __CSVRead function, but that would make me create multiple files (I currently have 500 CSV files), which is not a great idea. I am hoping to find a solution that keeps all the data in one CSV and reads it based on a column value.
Reading data from a CSV file based on a particular column value is not supported in JMeter out of the box; you can consider the following options:
Create separate CSV files for each surgeon and pick up the relevant file based on the currently logged-in surgeon's id/name/whatever using the __CSVRead() function (see the sketch below).
Use an If Controller to choose this or that execution branch based on the surgeon name (also sketched below).
Use a Switch Controller to choose this or that execution branch based on the surgeon name.
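A minimal sketch of the first two options; the variable names username and loginName are assumptions based on your CSV1 and CSV2 headers. With __CSVRead() the file name can be built from the logged-in user, so each thread picks up its own surgeon's file:

${__CSVRead(patients_${username}.csv,0)}

With an If Controller wrapping the dashboard navigation, a condition such as:

${__jexl3("${loginName}" == "${username}",)}

lets every thread read the shared CSV2 but skip the rows that belong to other surgeons.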

SSIS consolidate and concatenate multiple rows into single rows without using SQL

I am trying to accomplish something that is pretty easy to do in SQL, but seemingly very challenging to do in SSIS without using SQL. Basically, I need to consolidate and concatenate a field of a many-to-one relationship.
Given entities: [Contract Item] (many) to (one) [Account]
There is a field [ari_productsummary] that contains the product listed on the Contract Item entity. We want to write that value to the Account as [ari_activecontractitems]. However, an Account may have more than one Contract Item record associated with it, in which case we want to concatenate those values. We also only want the distinct values to be concatenated (distinct rows are already solved within my data flow).
This can be accomplished by writing to a temporary table, and then using a query or view to obtain the summarized results, as follows. I created a SQL table called TESTTABLE that contains the [ari_productsummary] from the Contract Item entity along with the referring [accountid] to map it back to Account. I then wrote the following query as a view:
SELECT DISTINCT accountid,
       (SELECT TT2.ari_productsummary + '; '
        FROM TESTTABLE TT2
        WHERE TT2.accountid = TT.accountid
        FOR XML PATH ('')
       ) AS 'ari_activecontractitems'
FROM TESTTABLE TT
Executing that query provides the results that I want, which I can then use for importing into the Account entity as shown below:
But how do I do this in an SSIS data flow without writing to a SQL table as a temporary placeholder for the data? I want to do the entire process inside one data flow container, without using a temporary SQL table/view. The whole summarization process needs to be done on the fly:
Does anyone have a solution that doesn't require a temporary SQL table/view/query, but is contained entirely within the data flow?
I am using VS 2017 and the KingswaySoft Dynamics CRM 365 ETL toolset to develop my solution/package.
Spitballing here, as I don't know Dynamics, nor do I have the custom components.
Data Flow 1 - Contract aggregation
The purpose of this data flow is to replicate your logic in the elegant query you provided and shove that into a Cache Connection Manager (see Notes for 2008+ at the end)
KingswaySoft Dynamics Source -> Script Component -> Cache Transform
If you want to keep the sort in there, do it before the Script Component. The approach I'll take with the Script Component is that it's fully blocking - that is, all the rows must arrive before it can send any on. Transformations like the Merge Join are only partially blocking, because the requirement of sorted data means that once you no longer have a match for the current item, you can send it on down the pipeline.
The Script Component is going to be an asynchronous transformation. You'll have two output columns: your key, accountid, and your new derived column, ari_activecontractitems. That column might need to be big - you'll know your data best, but if it's a blob type in Dynamics (> 4k unicode or > 8k ascii characters) then you'll have to define the data type as DT_TEXT/DT_NTEXT.
As inputs, you'll select accountid and ari_productsummary from your source.
The code should be pretty easy. We're going to accumulate the inbound data into a Dictionary.
// member variable
Dictionary<string, List<string>> accumulator;
In the PreExecute method, we'll tack this in there to initialize our variable:
// initialize in PreExecute
accumulator = new Dictionary<string, List<string>>();
In Input0_ProcessInputRow (the exact name depends on your input's name):
// simulate the inbound queue
// row_id would be something like Row.row_id (your accountid column),
// invoice something like Row.ari_productsummary
if (!accumulator.ContainsKey(row_id))
{
    // create an empty list for this key
    accumulator.Add(row_id, new List<string>());
}

// add the value if we don't already have it
if (!accumulator[row_id].Contains(invoice))
{
    accumulator[row_id].Add(invoice);
}
Once you get the signal that no more data is available, that's when you start sending the buffered output data. The auto-generated code will have placeholders for all of this.
// This is how we shove data out the pipe
foreach (var kvp in accumulator)
{
    // approximately thus; Output0Buffer is the designer-generated name
    Output0Buffer.AddRow();
    Output0Buffer.accountid = kvp.Key;
    Output0Buffer.ari_activecontractitems = string.Join("; ", kvp.Value);
}
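To show where that flush lives: in the designer-generated code for an asynchronous Script Component, the no-more-data signal is the EndOfRowset flag in ProcessInput. A minimal sketch, assuming the default generated names (Input0Buffer, Output0Buffer); FlushAccumulator is a hypothetical wrapper around the foreach above:

public override void Input0_ProcessInput(Input0Buffer Buffer)
{
    // feed every inbound row through Input0_ProcessInputRow as usual
    while (Buffer.NextRow())
    {
        Input0_ProcessInputRow(Buffer);
    }

    // all rows have arrived - emit the accumulated output
    if (Buffer.EndOfRowset())
    {
        FlushAccumulator(); // the foreach loop shown above
    }
}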
We have an upcoming release that comes with a component that does exactly what you are trying to achieve without the need to write custom code. The feature is currently in preview; please reach out to us for private access to it. You can find our contact information on our website.
UPDATE - June 5, 2020: we have made the components available for public access at https://www.kingswaysoft.com/products/ssis-productivity-pack/ as part of our 2020 Release Wave 1. We have two components that serve this kind of purpose. The Composition component will take input values and transform them into a composite value in an SSIS column. The Decomposition component does the opposite: it takes an input value and splits it into multiple rows, using either delimiter-based text splitting or XML/JSON array splitting.

How to implement logging at the end of each job in Talend?

I am new to Talend OS.
However, I received a task:
1. Create file delimited .csv metadata (one for Lead & Opportunity).
2. Move the files to your repository on the AWS server (the etl_process1 login).
3. Create two tables, sfdc_leads_reporting_raw and sfdc_opp_reporting_raw.
4. Load the data from the files into the tables. Ensure the data types are correctly used when creating the metadata schemas & tables.
I am done up to step 4.
Now the problem is:
How do I implement logging at the end of each job to report the number of leads (count of distinct IDs in the leads table) and the number of opportunities created (count of opportunity IDs) by stage (how many converted, qualified, closed won, and dead)?
Help would be appreciated.
You can get this data using global variables, in a subjob at the end of your job. Most components provide a global variable called tComponent_NB_LINE (or _NB_LINE_INSERTED for database components) that gives you the number of lines output by the component.
For instance tFileOutputDelimited_1_NB_LINE or tOracleOutput_1_NB_LINE_INSERTED.
Using these variables, you can log to the console or to a file.
Here is a simple example. If you have a tOracleOutput_1 in your job you can do:
tPostJob -- OnComponentOk -- tFixedFlowInput -- Main -- tLogRow
Inside tFixedFlowInput you retrieve the variable:
(Integer)globalMap.get("tOracleOutput_1_NB_LINE_INSERTED")
If you need to log aggregated info, you can append a tAggregateRow to your output components and use tSetGlobalVar to store counts matching certain criteria.

Need to identify dynamic text value in selenium python

I am new to Selenium WebDriver with Python. I need to automate the back-end process of an e-commerce application (order processing).
Here the orders are classified into three types:
say A-type, B-type and C-type.
After successful order completion, the orders will be listed in an interface.
For all listed orders, the order type will be mentioned, like
"ORDER TYPE: A-type"
According to these three types, certain scenarios will be executed. I need to identify this type. On a page there may be more than one A-type/B-type/C-type order.
Here the orders are listed one by one with all user details, along with the order type. Please help with this.
Its HTML tag details:
order Type: A-type

Retrieving All Payments From Sage Line 50 Database

I'm trying to retrieve a list of all payments received/entered into a Sage Line 50 database. The company I work for is a member of the Sage developer program, so I have access to the SDK, help files and the like, but I have been unable to find any specific information regarding payments.
Some of the .dta files contain references to payments (SPLITS.DTA & HEADER.DTA) alongside Invoice rows.
Does anyone know whether or not there is a separate file which contains only payment information, and if so what is it? Alternatively, will I have to pull the full list of rows from the SPLITS/HEADER files and filter them by type?
Many thanks in advance
I pulled data from the Header and Split files for a test customer this afternoon, and they contain (as near as I can tell) all customer activity - invoices, invoice payments and credits are all reflected in both data files (the split data file containing the more in-depth data) and can be filtered by bank_code and transaction type.
To get the data, first create a reference to a customer object and from there link to all of the headers (assuming you have an existing connection and workspace).
dynamic workspace = this._workspaces[workspaceName];
dynamic customer = workspace.CreateObject("SalesRecord");
bool added = customer.AddNew();
customer.MoveFirst(); //find first customer
dynamic headerObject = customer.Link;
bool headerFound = headerObject.MoveFirst(); //use .MoveNext() to cycle headers
You can then pull data from the header object using :
string AccountRef = headerObject.Fields.Item("ACCOUNT_REF").Value;
Where ACCOUNT_REF is a field in the HeaderData object.
Use the following code to get the split data:
dynamic splitObject = headerObject.Link;
bool splitFound = splitObject.MoveFirst(); // and so on
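From there, filtering for payments is a matter of checking the transaction type on each header as you cycle through them. A minimal sketch, assuming the type is exposed as a field named "TYPE" and that "SR"/"SA" are the sales receipt / payment-on-account codes in your Sage version - both assumptions worth verifying against the SDK help files:

// cycle all headers, keeping only payment-type transactions (codes assumed)
do
{
    string txType = headerObject.Fields.Item("TYPE").Value;
    if (txType == "SR" || txType == "SA") // hypothetical payment type codes
    {
        // read the header fields you need, then walk the splits as above
    }
} while (headerObject.MoveNext());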