I have a couple of programs of the form:
InputQueue --->
EventLoop(InputPort& in, OutputPort& out)
{
Attr a;
while (true)
{
a = in.Get();
...
do something with a
...
out.Send(a);
}
}
---> Output Queue
As depicted, each has an input queue and an output queue.
The queues are outside the program, i.e. they are not built into the program's event loop.
The task of the second program is to read from the first program's output queue and write its processed output to its own output queue.
A C++ implementation on Solaris is what I am looking for.
Related
I have a scenario where we have two MQ listeners in our application. One of them does some added processing (database table updates), say queue A, and the other one does not, say queue B. The issue is that we have just one thread (the main thread) for both of these, and the message is sent to A first. So by the time the code reaches the point where it is going to process/update the message received on A, the message on B arrives, and hence the update never goes through. How can I make sure that the processing occurs for messages on A even while B receives messages?
Thanks
If you must use one thread to process all messages, you should use synchronous API calls like this:
long timeoutForA, timeoutForB ...
MessageConsumer consumerA, consumerB ....
while (true) {
Message msgFromA = consumerA.receive(timeoutForA);
if (msgFromA == null)
break;
... do something with message from A ...
}
while (true) {
Message msgFromB = consumerB.receive(timeoutForB);
if (msgFromB == null)
break;
... do something with message from B ...
}
However, I would not recommend this approach for business logic in general. A properly designed messaging system should be able to process unrelated messages asynchronously.
It depends on the language that you use.
If you use C, you can try to use callbacks.
https://www.ibm.com/support/knowledgecenter/SSFKSJ_9.0.0/com.ibm.mq.dev.doc/q023050_.htm
I would like to use libpcap to capture on multiple specific interfaces (not 'any') to the same file
I have the following code (error handling and some args removed):
static gpointer pkt_tracing_thread(gpointer data)
{
while (1)
{
pcap_dispatch(g_capture_device1, .., dump_file1);
pcap_dispatch(g_capture_device2, .., dump_file2);
}
}
fp1 = calloc(1, sizeof(struct bpf_program));
fp2 = calloc(1, sizeof(struct bpf_program));
cap_dev1 = pcap_open_live(interface1,...
cap_dev2 = pcap_open_live(interface2,...
pcap_compile(cap_dev1, fp1, ...
pcap_compile(cap_dev2, fp2, ...
pcap_setfilter(cap_dev1, fp1);
pcap_setfilter(cap_dev2, fp2);
dump_file1 = pcap_dump_open(g_capture_device1, filename);
dump_file2 = pcap_dump_open(g_capture_device2, filename);
g_thread_create_full(pkt_tracing_thread, (gpointer)fp1, ...
g_thread_create_full(pkt_tracing_thread, (gpointer)fp2, ...
This does not work. What I see in the file is just packets from one of the interfaces. I'm guessing there could be threading issues in the above code.
I've read https://seclists.org/tcpdump/2012/q2/18 but I'm still not clear.
I've read that libpcap does not support writing in pcapng format, which would be required for the above to work, although I'm not clear about why.
Is there any way to capture multiple interfaces and write them to the same file?
Is there any way to capture multiple interfaces and write them to the same file?
Yes, but 1) you have to open the output file only once, with one call to pcap_dump_open() (otherwise, as with your program, you may have two threads writing to the same file independently and stepping on each other) and 2) you would need to have some form of mutex to prevent both threads from writing to the file at the same time.
Also, you should have one thread reading only from one capture device and the other thread reading from the other capture device, rather than having both threads reading from both devices.
As user9065877 said, you have to open the output file only once and write to it from only one thread at a time.
However, since you'd be serializing everything anyway, you may prefer to ask libpcap for pollable file descriptors for the interfaces and poll in a round-robin fashion for packets, using a single thread and no mutexes.
If I log in SAP R/3 and execute the transaction code MM60 then it will show some UI screen for Material list and ask for material number. If I specify a material number and execute then it will show me the output i.e. material list.
Here the story ends if I am a SAP R/3 user.
But what if I want to do the same above steps using java program and get the result in java itself instead of going to SAP R/3? I want to do this basically because I want to use that output data for BI tool.
Suppose I am using JCO3 for connection with R/3.
EDIT
Based on the info in the link, I tried to do something like the code below, but it does not schedule any job in the background, nor does it download any spool file, etc.
I've manually sent a doc to spool and tried giving its ID in the code. This is for MM60.
JCoContext.begin(destination);
function = mRepository.getFunction("BAPI_XBP_JOB_OPEN");
JCoParameterList input = function.getImportParameterList();
input.setValue("JOBNAME", "jb1");
input.setValue("EXTERNAL_USER_NAME", "sap*");
function.execute(destination);
JCoFunction function2 = mRepository.getFunction("BAPI_XBP_JOB_ADD_ABAP_STEP");
function2.getImportParameterList().setValue("JOBNAME", "jb1");
function2.getImportParameterList().setValue("EXTERNAL_USER_NAME", "sap*");
function2.getImportParameterList().setValue("ABAP_PROGRAM_NAME", "RMMVRZ00");
function2.getImportParameterList().setValue("ABAP_VARIANT_NAME", "KRUGMANN");
function2.getImportParameterList().setValue("SAP_USER_NAME", "sap*");
function2.getImportParameterList().setValue("LANGUAGE", destination.getLanguage());
function2.execute(destination);
JCoFunction function3 = mRepository.getFunction("BAPI_XBP_JOB_ADD_EXT_STEP");
function3.getImportParameterList().setValue("JOBNAME", "jb1");
function3.getImportParameterList().setValue("EXTERNAL_USER_NAME", "sap*");
function3.getImportParameterList().setValue("EXT_PROGRAM_NAME", "RMMVRZ00");
function3.getImportParameterList().setValue("SAP_USER_NAME", "sap*");
function3.execute(destination);
JCoFunction function4 = mRepository.getFunction("BAPI_XBP_JOB_CLOSE");
function4.getImportParameterList().setValue("JOBNAME", "jb1");
function4.getImportParameterList().setValue("EXTERNAL_USER_NAME", "sap*");
function4.execute(destination);
JCoFunction function5 = mRepository.getFunction("BAPI_XBP_JOB_START_ASAP");
function5.getImportParameterList().setValue("JOBNAME", "jb1");
function5.getImportParameterList().setValue("EXTERNAL_USER_NAME", "sap*");
function5.execute(destination);
JCoFunction function6 = mRepository.getFunction("RSPO_DOWNLOAD_SPOOLJOB");
function6.getImportParameterList().setValue("ID", "31801");
function6.getImportParameterList().setValue("FNAME", "abc");
function6.execute(destination);
You cannot execute an SAP transaction through JCo. What you can do, is run remote-enabled function modules. So you need to either write a function module of your own, providing exactly the functionality you require, or find an SAP function module, that does what you need (or close enough to be useful).
Your code has the following issues:
XBP BAPIs can only be used if you declare their usage via BAPI_XMI_LOGON and BAPI_XMI_LOGOFF. Pass the parameters interface = 'XBP', version = '3.0', extcompany = 'any name you want'.
You start the program RMMVRZ00 (which corresponds to the program directly behind the transaction code MM60) with the program variant KRUGMANN which is defined at SAP side with a given material number, but your goal is probably to pass a varying material number, so you should first change the material number in the program variant via BAPI_XBP_VARIANT_CHANGE.
After calling BAPI_XBP_JOB_OPEN, you should read the returned value of the JOBCOUNT parameter, and pass it to all subsequent BAPI_XBP_JOB_* calls, along with JOBNAME (I mean, two jobs may be named identically, JOBCOUNT is there to identify the job uniquely).
After calling BAPI_XBP_JOB_START_ASAP, you should wait for the job to be finished, by repeatedly calling BAPI_XBP_JOB_STATUS_GET until the job status is A (aborted) or F (finished successfully).
You hardcode the spool number generated by the program. To retrieve the spool number, you may call BAPI_XBP_JOB_SPOOLLIST_READ which returns all spool data of the job.
Moreover I'm not sure whether you may call the function module RSPO_DOWNLOAD_SPOOLJOB to download the spool data to a file on your java computer. If it doesn't work, you may use the spool data returned by BAPI_XBP_JOB_SPOOLLIST_READ and do whatever you want.
In short, I think that the sequence should be:
BAPI_XMI_LOGON
BAPI_XBP_VARIANT_CHANGE
BAPI_XBP_JOB_OPEN
BAPI_XBP_JOB_ADD_ABAP_STEP
BAPI_XBP_JOB_CLOSE
BAPI_XBP_JOB_START_ASAP
Repeatedly calling BAPI_XBP_JOB_STATUS_GET until the status is A or F
Note that it may take some time if there are many jobs waiting in the SAP queue
BAPI_XBP_JOB_SPOOLLIST_READ
Possibly RSPO_DOWNLOAD_SPOOLJOB, if it works
BAPI_XMI_LOGOFF
Possibly BAPI_TRANSACTION_COMMIT, because XMI writes an XMI log.
Currently I've got a bunch of Tcl files. In one of them I found the proc below.
proc ahb_write {addr data {str s}} {
set ahbm top.cpu_subsys
...
if {$::verbose > 0} {
}
silent {
...........
...........
delay 1
So I want to invoke and run this ahb_write proc when I run the simulation.
Is there any way to run the proc from within a Verilog simulation?
You would need the SystemVerilog DPI to do this in any simulator. In Modelsim, you would call the function mti_fli::mti_com("command"). An alternative that would probably work in any simulator is to have a command executed upon hitting a breakpoint.
I have done this before, where I wanted to use a Verilog task to inject bit errors into a memory. In NCSim, I had to first deposit the values for the task's parameters individually and then call the task itself.
deposit tinst.u_buffer.u_fifo.u_sram_0.injectSA.addr 1
deposit tinst.u_buffer.u_fifo.u_sram_0.injectSA.bitn 2
deposit tinst.u_buffer.u_fifo.u_sram_0.injectSA.typen 1
task tinst.u_buffer.u_fifo.u_sram_0.injectSA
run 0.1
I don't know for sure whether the 'run 0.1' was necessary, but I know this at least worked in my example.
The verilog task was defined in the RAM model like this:
task injectSA;
input [numWordAddr-1:0] addr;
input integer bitn;
input typen;
...
I have created a simple thread to continuously display a message box until the user no longer wants to perform some operation. Following is the code:
thread::create { while [tk_messageBox -message "Do you want to Exit?!!" -type yesno] {
doSomething
}}
But no message box is displayed, although the thread is created.
How can I actually see these message boxes?
You need to make Tk be present in the thread as well; only the Thread package is present by default in subordinate threads:
thread::create {
package require Tk
while [tk_messageBox -message "Do you want to Exit?!!" -type yesno] {
doSomething
}
}
Also, you need to fix a bunch of other problems in your code.
Always put the condition of a while in {braces}. Without that, the dynamic parts of the expression will only be evaluated once, which really isn't what you ever want with a while.
Make sure your thread does thread::wait, as that enables improved process and thread management. Your message box loop needs to be rewritten entirely.
This might lead to this code:
thread::create {
package require Tk
proc periodicallyMaybeDoSomething {} {
if {[tk_messageBox -message "Do you want to Exit?!!" -type yesno]} {
thread::exit
}
doSomething
# pick a better delay maybe?
after 1 periodicallyMaybeDoSomething
}
after 1 periodicallyMaybeDoSomething
thread::wait
}
If you're using 8.6, you may be able to use coroutines to make the code more elegant.