Use sprint number in release name in VSTS - azure-pipelines-release-pipeline

I want to add the sprint number to the name of the release. Currently I'm using the date and an incremental release number, as you can see in the following line:
$(date:yyyyMMdd)_$(rev:_r)
I want to add the sprint number to this variable to have something like
sprintNumber_$(date:yyyyMMdd)_$(rev:_r)
How can I get the sprintNumber for this purpose?

I've already resolved this using the VSTS REST API: I get the iterations from the Get Team Iterations query, invoke that query from a PowerShell script, and then assign the selected sprint to the release name variable.
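For reference, the REST call itself can be sketched as follows (in Java here, only to show the endpoint and authentication; in the release pipeline the same call is made from the PowerShell script). The account, project, team, personal access token, and api-version values are placeholders, and the exact URL can differ between TFS/VSTS versions.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class CurrentIterationFetcher {
    public static void main(String[] args) throws Exception {
        // Placeholders: replace account, project, team and api-version with your own values.
        String url = "https://myaccount.visualstudio.com/MyProject/MyTeam/_apis/work/teamsettings/iterations"
                + "?$timeframe=current&api-version=4.1";
        String pat = "<personal-access-token>";

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        // VSTS accepts a personal access token via HTTP basic auth with an empty user name
        String token = Base64.getEncoder().encodeToString((":" + pat).getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + token);

        StringBuilder json = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            for (String line; (line = in.readLine()) != null; ) {
                json.append(line);
            }
        }
        // The response lists the current iteration; its "name" field is the sprint
        // that gets prepended to the release name.
        System.out.println(json);
    }
}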


How to run SAP R/3 transactions through JCo3? Or execute reports through JCo?

If I log in to SAP R/3 and execute the transaction code MM60, it shows a UI screen for the material list and asks for a material number. If I specify a material number and execute, it shows me the output, i.e. the material list.
Here the story ends if I am an SAP R/3 user.
But what if I want to perform the same steps from a Java program and get the result in Java itself, instead of going into SAP R/3? I want to do this because I want to use that output data in a BI tool.
Suppose I am using JCo3 for the connection to R/3.
EDIT
Based on the info in the link, I tried something like the code below, but it does not schedule any job in the background, nor does it download any spool file.
I've manually sent a document to the spool and tried giving its ID in the code. This is for MM60.
JCoContext.begin(destination);
function = mRepository.getFunction("BAPI_XBP_JOB_OPEN");
JCoParameterList input = function.getImportParameterList();
input.setValue("JOBNAME", "jb1");
input.setValue("EXTERNAL_USER_NAME", "sap*");
function.execute(destination);
JCoFunction function2 = mRepository.getFunction("BAPI_XBP_JOB_ADD_ABAP_STEP");
function2.getImportParameterList().setValue("JOBNAME", "jb1");
function2.getImportParameterList().setValue("EXTERNAL_USER_NAME", "sap*");
function2.getImportParameterList().setValue("ABAP_PROGRAM_NAME", "RMMVRZ00");
function2.getImportParameterList().setValue("ABAP_VARIANT_NAME", "KRUGMANN");
function2.getImportParameterList().setValue("SAP_USER_NAME", "sap*");
function2.getImportParameterList().setValue("LANGUAGE", destination.getLanguage());
function2.execute(destination);
JCoFunction function3 = mRepository.getFunction("BAPI_XBP_JOB_ADD_EXT_STEP"); // declaration missing from the snippet; presumably this BAPI, given the parameters below
function3.getImportParameterList().setValue("JOBNAME", "jb1");
function3.getImportParameterList().setValue("EXTERNAL_USER_NAME", "sap*");
function3.getImportParameterList().setValue("EXT_PROGRAM_NAME", "RMMVRZ00");
function3.getImportParameterList().setValue("SAP_USER_NAME", "sap*");
function3.execute(destination);
JCoFunction function4 = mRepository.getFunction("BAPI_XBP_JOB_CLOSE");
function4.getImportParameterList().setValue("JOBNAME", "jb1");
function4.getImportParameterList().setValue("EXTERNAL_USER_NAME", "sap*");
function4.execute(destination);
JCoFunction function5 = mRepository.getFunction("BAPI_XBP_JOB_START_ASAP");
function5.getImportParameterList().setValue("JOBNAME", "jb1");
function5.getImportParameterList().setValue("EXTERNAL_USER_NAME", "sap*");
function5.execute(destination);
JCoFunction function6 = mRepository.getFunction("RSPO_DOWNLOAD_SPOOLJOB");
function6.getImportParameterList().setValue("ID", "31801");
function6.getImportParameterList().setValue("FNAME", "abc");
function6.execute(destination);
You cannot execute an SAP transaction through JCo. What you can do is run remote-enabled function modules. So you need to either write a function module of your own that provides exactly the functionality you require, or find an SAP function module that does what you need (or comes close enough to be useful).
Your code has the following issues:
XBP BAPIs can only be used if you declare their usage via BAPI_XMI_LOGON and BAPI_XMI_LOGOFF. Pass the parameters interface = 'XBP', version = '3.0', extcompany = 'any name you want'.
You start the program RMMVRZ00 (which corresponds to the program directly behind the transaction code MM60) with the program variant KRUGMANN which is defined at SAP side with a given material number, but your goal is probably to pass a varying material number, so you should first change the material number in the program variant via BAPI_XBP_VARIANT_CHANGE.
After calling BAPI_XBP_JOB_OPEN, you should read the returned value of the JOBCOUNT parameter, and pass it to all subsequent BAPI_XBP_JOB_* calls, along with JOBNAME (I mean, two jobs may be named identically, JOBCOUNT is there to identify the job uniquely).
After calling BAPI_XBP_JOB_START_ASAP, you should wait for the job to be finished, by repeatedly calling BAPI_XBP_JOB_STATUS_GET until the job status is A (aborted) or F (finished successfully).
You hardcode the spool number generated by the program. To retrieve the spool number, you may call BAPI_XBP_JOB_SPOOLLIST_READ which returns all spool data of the job.
Moreover, I'm not sure whether you may call the function module RSPO_DOWNLOAD_SPOOLJOB to download the spool data to a file on the machine running your Java program. If it doesn't work, you may use the spool data returned by BAPI_XBP_JOB_SPOOLLIST_READ and do whatever you want with it.
In short, I think that the sequence should be:
BAPI_XMI_LOGON
BAPI_XBP_VARIANT_CHANGE
BAPI_XBP_JOB_OPEN
BAPI_XBP_JOB_ADD_ABAP_STEP
BAPI_XBP_JOB_CLOSE
BAPI_XBP_JOB_START_ASAP
Calling BAPI_XBP_JOB_STATUS_GET repeatedly until the status is A or F
Note that it may take some time if there are many jobs waiting in the SAP queue
BAPI_XBP_JOB_SPOOLLIST_READ
Possibly RSPO_DOWNLOAD_SPOOLJOB, if it works
BAPI_XMI_LOGOFF
Possibly BAPI_TRANSACTION_COMMIT, because XMI writes an XMI log.
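To make that sequence a little more concrete, here is a rough JCo3 sketch of the flow. Treat it as an outline rather than a working program: the parameter and table names marked as assumed should be checked in your system (transaction SE37), and mRepository/destination are the already initialized JCoRepository and JCoDestination from the question.

// Assumes: import com.sap.conn.jco.*; the enclosing method declares
// "throws JCoException, InterruptedException".
JCoContext.begin(destination);            // keep all calls in one stateful session
try {
    String extUser = "MYUSER";            // external user name reported to XMI

    // 1. Declare XMI usage before any XBP call
    JCoFunction xmiLogon = mRepository.getFunction("BAPI_XMI_LOGON");
    xmiLogon.getImportParameterList().setValue("EXTCOMPANY", "MYCOMPANY");
    xmiLogon.getImportParameterList().setValue("EXTPRODUCT", "MYPRODUCT");
    xmiLogon.getImportParameterList().setValue("INTERFACE", "XBP");
    xmiLogon.getImportParameterList().setValue("VERSION", "3.0");
    xmiLogon.execute(destination);

    // 2. (If needed) change the material number in the variant with
    //    BAPI_XBP_VARIANT_CHANGE before scheduling the job; its parameters
    //    are not shown here because they depend on your release.

    // 3. Open the job and remember JOBCOUNT, which identifies it uniquely
    JCoFunction jobOpen = mRepository.getFunction("BAPI_XBP_JOB_OPEN");
    jobOpen.getImportParameterList().setValue("JOBNAME", "jb1");
    jobOpen.getImportParameterList().setValue("EXTERNAL_USER_NAME", extUser);
    jobOpen.execute(destination);
    String jobCount = jobOpen.getExportParameterList().getString("JOBCOUNT");

    // 4. Add the ABAP step (the program behind MM60) to the job
    JCoFunction addStep = mRepository.getFunction("BAPI_XBP_JOB_ADD_ABAP_STEP");
    addStep.getImportParameterList().setValue("JOBNAME", "jb1");
    addStep.getImportParameterList().setValue("JOBCOUNT", jobCount);
    addStep.getImportParameterList().setValue("EXTERNAL_USER_NAME", extUser);
    addStep.getImportParameterList().setValue("ABAP_PROGRAM_NAME", "RMMVRZ00");
    addStep.getImportParameterList().setValue("ABAP_VARIANT_NAME", "KRUGMANN");
    addStep.execute(destination);

    // 5. Close the job definition, then start it as soon as possible
    for (String bapi : new String[] {"BAPI_XBP_JOB_CLOSE", "BAPI_XBP_JOB_START_ASAP"}) {
        JCoFunction f = mRepository.getFunction(bapi);
        f.getImportParameterList().setValue("JOBNAME", "jb1");
        f.getImportParameterList().setValue("JOBCOUNT", jobCount);
        f.getImportParameterList().setValue("EXTERNAL_USER_NAME", extUser);
        f.execute(destination);
    }

    // 6. Poll until the job is finished (F) or aborted (A)
    String status;
    do {
        Thread.sleep(2000);               // the job may wait in the queue for a while
        JCoFunction statusGet = mRepository.getFunction("BAPI_XBP_JOB_STATUS_GET");
        statusGet.getImportParameterList().setValue("JOBNAME", "jb1");
        statusGet.getImportParameterList().setValue("JOBCOUNT", jobCount);
        statusGet.getImportParameterList().setValue("EXTERNAL_USER_NAME", extUser);
        statusGet.execute(destination);
        status = statusGet.getExportParameterList().getString("STATUS");   // export name assumed
    } while (!"F".equals(status) && !"A".equals(status));

    // 7. Read the spool output of the job instead of hardcoding a spool ID
    JCoFunction spoolRead = mRepository.getFunction("BAPI_XBP_JOB_SPOOLLIST_READ");
    spoolRead.getImportParameterList().setValue("JOBNAME", "jb1");
    spoolRead.getImportParameterList().setValue("JOBCOUNT", jobCount);
    spoolRead.getImportParameterList().setValue("EXTERNAL_USER_NAME", extUser);
    spoolRead.execute(destination);
    JCoTable spool = spoolRead.getTableParameterList().getTable("SPOOL_LIST");  // table name assumed

    // 8. Log off from XMI
    JCoFunction xmiLogoff = mRepository.getFunction("BAPI_XMI_LOGOFF");
    xmiLogoff.getImportParameterList().setValue("INTERFACE", "XBP");
    xmiLogoff.execute(destination);
} finally {
    JCoContext.end(destination);
}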

Google Dataflow (Apache Beam) JdbcIO bulk insert into MySQL database

I'm using the Dataflow SDK 2.x Java API (Apache Beam SDK) to write data into MySQL. I've created pipelines based on the Apache Beam SDK documentation to write data into MySQL using Dataflow. It inserts a single row at a time, whereas I need to implement bulk insert. I do not find any option in the official documentation to enable a bulk insert mode.
I'm wondering if it's possible to set a bulk insert mode in a Dataflow pipeline. If yes, please let me know what I need to change in the code below.
.apply(JdbcIO.<KV<Integer, String>>write()
    .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
        "com.mysql.jdbc.Driver", "jdbc:mysql://hostname:3306/mydb")
        .withUsername("username")
        .withPassword("password"))
    .withStatement("insert into Person values(?, ?)")
    .withPreparedStatementSetter(new JdbcIO.PreparedStatementSetter<KV<Integer, String>>() {
        public void setParameters(KV<Integer, String> kv, PreparedStatement query) throws Exception {
            query.setInt(1, kv.getKey());
            query.setString(2, kv.getValue());
        }
    }));
EDIT 2018-01-27:
It turns out that this issue is related to the DirectRunner. If you run the same pipeline using the DataflowRunner, you should get batches that are actually up to 1,000 records. The DirectRunner always creates bundles of size 1 after a grouping operation.
Original answer:
I've run into the same problem when writing to cloud databases using Apache Beam's JdbcIO. The problem is that while JdbcIO does support writing up to 1,000 records in one batch, I have never actually seen it write more than 1 row at a time (I have to admit: this was always using the DirectRunner in a development environment).
I have therefore added a feature to JdbcIO where you can control the size of the batches yourself by grouping your data together and writing each group as one batch. Below is an example of how to use this feature based on the original WordCount example of Apache Beam.
p.apply("ReadLines", TextIO.read().from(options.getInputFile()))
// Count words in input file(s)
.apply(new CountWords())
// Format as text
.apply(MapElements.via(new FormatAsTextFn()))
// Make key-value pairs with the first letter as the key
.apply(ParDo.of(new FirstLetterAsKey()))
// Group the words by first letter
.apply(GroupByKey.<String, String> create())
// Get a PCollection of only the values, discarding the keys
.apply(ParDo.of(new GetValues()))
// Write the words to the database
.apply(JdbcIO.<String> writeIterable()
.withDataSourceConfiguration(
JdbcIO.DataSourceConfiguration.create(options.getJdbcDriver(), options.getURL()))
.withStatement(INSERT_OR_UPDATE_SQL)
.withPreparedStatementSetter(new WordCountPreparedStatementSetter()));
The difference from the normal write method of JdbcIO is the new method writeIterable(), which takes a PCollection<Iterable<RowT>> as input instead of a PCollection<RowT>. Each Iterable is written as one batch to the database.
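For completeness: the FirstLetterAsKey and GetValues transforms used in the pipeline above are not part of Beam. A minimal sketch of what they might look like follows; the class names come from the pipeline, but the bodies are assumptions, written as static nested classes inside the pipeline class like the helpers in the WordCount example.

import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;

// Keys each formatted word line by its first letter so GroupByKey can form groups.
static class FirstLetterAsKey extends DoFn<String, KV<String, String>> {
    @ProcessElement
    public void processElement(ProcessContext c) {
        String line = c.element();
        String key = line.isEmpty() ? "" : line.substring(0, 1);
        c.output(KV.of(key, line));
    }
}

// Drops the keys after grouping, leaving one Iterable<String> per group,
// which is exactly what writeIterable() expects as input.
static class GetValues extends DoFn<KV<String, Iterable<String>>, Iterable<String>> {
    @ProcessElement
    public void processElement(ProcessContext c) {
        c.output(c.element().getValue());
    }
}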
The version of JdbcIO with this addition can be found here: https://github.com/olavloite/beam/blob/JdbcIOIterableWrite/sdks/java/io/jdbc/src/main/java/org/apache/beam/sdk/io/jdbc/JdbcIO.java
The entire example project containing the example above can be found here: https://github.com/olavloite/spanner-beam-example
(There is also a pull request pending on Apache Beam to include this in the project)

Working on migration of SPL 3.0 to 4.2 (TEDA)

I am working on migrating 3.0 code into the new 4.2 framework. I am facing a few difficulties:
How to do CDR level deduplication in new 4.2 framework? (Note: Table deduplication is already done).
Where to implement PostDedupProcessor - context or chainsink custom? In either case, do I need to remove duplicate hashcodes from the list or just reject the tuples? Here I am also doing column updating for a few tuples.
My file is not moving into the archive. The temporary output file is generated, but it is empty and outside the load directory. What could be the possible reasons? I have thoroughly checked the config parameters, and after adding logs it seems the correct output is being sent from the transformer custom, so I don't know where it is stuck. I had printed the TableRowGenerator stream to the logs (at the end of DataProcessor).
1. and 2.:
You need to select the type of deduplication. There is not a big difference between choosing "table-" or "cdr-level" deduplication.
The ite.businessLogic.transformation.outputType setting affects this. There is only one dedup; you cannot have both.
Select recordStream for cdr-level deduplication and do the transformation to table row format (e.g. if you want to use the TableFileWriter) in xxx.chainsink.custom::PostContextDataProcessor.
In xxx.chainsink.custom::PostContextDataProcessor you need to add custom code for duplicate handling: reject (discard) the tuples, set special column values, or write them to different target tables.
3.:
Possible reasons could be:
Missing forwarding of window punctuations or statistic tuples
An error in the BloomFilter configuration; you would see this easily because the PE is down and the error log gives hints about wrong sha2 functions being used
To troubleshoot your ITE application, I recommend enabling the following debug sinks if checking the StreamsStudio live graph is not sufficient:
ite.businessLogic.transformation.debug=on
ite.businessLogic.group.debug=on
ite.businessLogic.sink.debug=on
Run a test with a single input file only and check the flow of your record and statistic tuples. The debug sinks also write punctuation markers to the debug files.

Create n agents and calculate average number

I want to create a system of n agents. Each agent generates a random integer value. My goal is to calculate the average of these n numbers.
My simple idea for an algorithm:
Every Agent sends message with its number to other agents
Every Agent calculates average number
Problems:
I just can't understand how I can create a variable number of agents
How I can collect the output result
Maybe somebody knows how I can do this?
The examples online tend to focus on using the Boot class:
java -cp jade.jar jade.Boot -agents agentName:org.agents.MyAgentClass
You could spawn more agents simply by adding more entries to the -agents command-line option (separated by semicolons):
java -cp jade.jar jade.Boot -agents \
agent1:org.agents.MyAgentClass;agent2:org.agents.MyAgentClass
If you need a variable number of agents, you could move this to a bash script that appends more agents depending on a parameter.
If you really want to go crazy, you can create your own container and add agents to it from your own code and bypass the Boot class. Since your use case is so simple, I don't know that this would be a good way to go yet.
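If you do go the programmatic route, a minimal sketch might look like the following. It also shows one simple way to collect the result: a dedicated collector agent that receives each number and prints the average. The class names, the package-free layout, and the collector scheme are assumptions for illustration; only the JADE API calls themselves are standard.

import jade.core.AID;
import jade.core.Agent;
import jade.core.ProfileImpl;
import jade.core.Runtime;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;
import jade.wrapper.ContainerController;

public class AverageDemo {

    // Each worker agent draws a random int and sends it to the "averager" agent.
    public static class NumberAgent extends Agent {
        @Override
        protected void setup() {
            int value = new java.util.Random().nextInt(100);
            ACLMessage msg = new ACLMessage(ACLMessage.INFORM);
            msg.addReceiver(new AID("averager", AID.ISLOCALNAME));
            msg.setContent(Integer.toString(value));
            send(msg);
        }
    }

    // Collects n numbers and prints their average.
    public static class AveragerAgent extends Agent {
        @Override
        protected void setup() {
            final int expected = Integer.parseInt(getArguments()[0].toString());
            addBehaviour(new CyclicBehaviour(this) {
                private int received = 0;
                private long sum = 0;
                @Override
                public void action() {
                    ACLMessage msg = myAgent.receive();
                    if (msg == null) { block(); return; }
                    sum += Long.parseLong(msg.getContent());
                    if (++received == expected) {
                        System.out.println("Average of " + expected + " numbers: "
                                + (double) sum / expected);
                    }
                }
            });
        }
    }

    public static void main(String[] args) throws Exception {
        int n = Integer.parseInt(args[0]);                     // variable number of agents
        ContainerController cc = Runtime.instance().createMainContainer(new ProfileImpl());
        cc.createNewAgent("averager", AveragerAgent.class.getName(),
                new Object[] { Integer.toString(n) }).start();
        for (int i = 0; i < n; i++) {
            cc.createNewAgent("num" + i, NumberAgent.class.getName(), null).start();
        }
    }
}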

Windows Phone Background Agents

Question on agents: I specifically want to create a Periodic Task, but I only want it to run once every day, say at 1 am, not every 30 minutes, which is the default. In OnInvoke, do I simply check the hour and run the work only if the current hour matches the desired hour?
But on the next OnInvoke call, it will try to run again 30 minutes later, maybe when it's 1:31 am.
So I guess I'd use a stored boolean in the app settings to mark as "already run for today" or similar, and then check against that value?
If you specifically want to run a custom action at 1 am, I'm not sure that a single boolean would be enough to make it work.
I guess that you plan to reset your boolean at 1:31 to prepare the execution of the next day, but what if your periodic task is also called at 1:51 (i.e. called more than twice between 1 am and 2 am)?
How could this happen? Maybe if the device is rebooted, but I'm not quite sure about it. In any case, storing the last execution datetime somewhere and comparing it to the current one is a safer way to ensure that your action is only invoked once per day.
One question remains: where to store your boolean or datetime (depending on which one you pick)?
AppSettings does not seem to be a recommended place according to MSDN:
Passing information between the foreground app and background agents can be challenging because it is not possible to predict if the agent and the app will run simultaneously. The following are recommended patterns for this.
For Periodic and Resource-intensive Agents: Use LINQ 2 SQL or a file in isolated storage that is guarded with a Mutex. For one-direction communication where the foreground app writes and the agent only reads, we recommend using an isolated storage file with a Mutex. We recommend that you do not use IsolatedStorageSettings to communicate between processes because it is possible for the data to become corrupt.
A simple file in isolated storage should get the job done.
If you're going by date (once per day) and it's valid that the task can run at 11pm on a day and 1am the next, then after the agent has run you could store the current date (forgetting about time). Then whenever the agent runs again in 30 minutes, check if the date the task last ran is the same as the current date.
protected override void OnInvoke(ScheduledTask task)
{
    // Default to DateTime.MinValue if the task has never run before,
    // so the first invocation does not throw on a missing key
    DateTime lastRunDate;
    if (!IsolatedStorageSettings.ApplicationSettings.TryGetValue("LastRunDate", out lastRunDate))
    {
        lastRunDate = DateTime.MinValue;
    }

    if (DateTime.Today.Subtract(lastRunDate).Days > 0)
    {
        // it's a later date than when the task last ran
        // DO STUFF!

        // save the date - we only care about the date part
        IsolatedStorageSettings.ApplicationSettings["LastRunDate"] = DateTime.Today;
        IsolatedStorageSettings.ApplicationSettings.Save();
    }
    NotifyComplete();
}