Is there an alternative to add_header_lines for flextable?

I am trying to add a simple header to my flextable objects. When I was looking at the following vignette, I found a description of the add_header_lines function. However, when I tried it on my own table, I got an error telling me that the function does not exist. Is there a simple alternative to this function? I know there is the add_header function, but it seems to require column keys to work (the header must be set for each column key and then merged), which is tedious for data sets with many columns.
Is there a good alternative to add_header_lines, or is adding a header with add_header and column keys the only way?

The ReporteRs package is deprecated as of R 3.6. Try officer or other packages that do the same job.
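That said, add_header_lines() ships with recent versions of flextable itself, so updating the package may be all that is needed. A minimal sketch, assuming a current flextable release (the data set and header text here are arbitrary):

library(flextable)

ft <- flextable(head(iris))
# One call adds a spanning header row across all columns,
# with no column keys or manual merging required:
ft <- add_header_lines(ft, values = "Iris measurements (cm)")
ft

If you are stuck on an older version, the add_header() plus merge route described in the question remains the fallback.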

Related

jsonPath - problem with accessing a field by index

I am trying to access the value Carrier by taking the value from shipping_id. I am testing queries at https://www.jsonquerytool.com/. If I type the key by hand, $.shipping_methods["11"] or $.shipping_methods.11, I receive the correct result ["Carrier"]. But I have a problem taking the key value from the shipping_id field. I tried many variations of $.shipping_methods[$.shipping_id], but without success. Is this possible with pure jsonPath?
{
  "shipping_id": "11",
  "shipping_methods": {
    "10": "Post",
    "11": "Carrier"
  }
}
Depending on the JSON-Path implementation/environment you are using, this may or may not be possible, because the feature you are asking for is not part of the proposed standard. Some libraries have features that enable queries like that, e.g. in JSONPath-Plus you could use @property and @parent (I had no success using @root) - but those are 'extensions':
$.shipping_methods[?(@property == @parent.shipping_id)]
You can test this online here.
The page you have linked uses JSPath under the hood, and I cannot see any of the required features mentioned in its readme. It would be simpler to drill down in a general programming language that hosts the JSON-Path engine, though I am not sure whether that is an option here.
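To make both routes concrete, here is a minimal JavaScript sketch (an illustration, assuming the jsonpath-plus npm package mentioned above; the plain property lookup at the end is the drill-down-in-the-host-language alternative):

const { JSONPath } = require('jsonpath-plus');

const data = {
  shipping_id: '11',
  shipping_methods: { '10': 'Post', '11': 'Carrier' }
};

// JSONPath-Plus extension syntax (not standard JSON-Path);
// expected result, per the answer above: ["Carrier"]
const viaPath = JSONPath({
  path: '$.shipping_methods[?(@property == @parent.shipping_id)]',
  json: data
});
console.log(viaPath);

// Drilling down in the host language is simpler and portable:
console.log(data.shipping_methods[data.shipping_id]); // "Carrier"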

Liquibase - Trim whitespaces from CSV

I have a formatted CSV file for <loadData .../> in Liquibase.
It contains some whitespace to make it look nice.
But because of that whitespace, I get wrong data in my DB.
How can I solve this? Is there any "flag" or something to force Liquibase to trim the whitespace?
I tried making the file look like the following:
id;name ;surname
1 ;test123;test123
2 ;test1 ;test123
3 ;"test" ;test123
Anyway, my DB contains test1__ and test"_ (where _ is a space).
Also, quotchar=""" didn't help (and that was expected; it is a redundant line).
Btw, the id column, which is defined as numeric, is fine (1, 2, 3, etc., with no errors).
Check out this Jira issue.
To quote Nathan Voxland:
It probably makes sense to keep the default as trimming since I think that will cause less surprises. However, I added a global configuration flag that lets you change the default. You can set it either through a liquibase.trimCsvWhitespace=false system property or by using the LiquibaseConfiguration.getInstance().getProperty(GlobalConfiguration.class, GlobalConfiguration.CSV_TRIM_WHITESPACE).setValue() API call.
Try adding the liquibase.trimCsvWhitespace=false property.
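For example, here is a sketch of the two options described in the quote above (assuming Liquibase 3.5.x, where this flag exists; my-app.jar stands in for whatever launches Liquibase):

# as a JVM system property when starting the process that runs Liquibase
java -Dliquibase.trimCsvWhitespace=false -jar my-app.jar

Or programmatically, using the API call from the quote, before running the update:

LiquibaseConfiguration.getInstance()
    .getProperty(GlobalConfiguration.class,
                 GlobalConfiguration.CSV_TRIM_WHITESPACE)
    .setValue(false);  // preserve whitespace in loadData CSVs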
On further review, it looks like it was a change just in 3.5.0. I usually try to keep backwards compatibility, even when it is unexpected behavior, but was thinking it had changed with 3.4.0, and so changing it back to preserving whitespace would break other people that are now expecting it to be trimming. However, since it did change unexpectedly in 3.5.0 only, it is definitely a bug, and so I'm just setting the logic back to preserving whitespace.
According to the Jira ticket, this bug was fixed in Liquibase version 3.5.1, but it looks like it actually wasn't.

Working on migration of SPL 3.0 to 4.2 (TEDA)

I am working on migrating 3.0 code to the new 4.2 framework. I am facing a few difficulties:
1. How do I do CDR-level deduplication in the new 4.2 framework? (Note: table deduplication is already done.)
2. Where should I implement PostDedupProcessor - in the context or the chainsink custom? In either case, do I need to remove duplicate hashcodes from the list, or just reject the tuples? I am also updating columns for a few tuples here.
3. My file is not moving into the archive. The temporary output file is generated, but it is empty and lands outside the load directory. What could be the possible reasons? I have thoroughly checked the config parameters, and after adding logs it seems the correct output is being sent from the transformer custom, so I don't know where it gets stuck. I printed the TableRowGenerator stream to the logs (at the end of DataProcessor).
1. and 2.:
You need to select the type of deduplication. It does not make a big difference whether you choose table- or CDR-level deduplication.
The ite.businessLogic.transformation.outputType option controls this. There is only one Dedup; you cannot have both.
Select recordStream for CDR-level deduplication, and do the transformation to table-row format (e.g., if you want to use the TableFileWriter) in xxx.chainsink.custom::PostContextDataProcessor.
In xxx.chainsink.custom::PostContextDataProcessor you need to add custom code for duplicate handling: reject (discard) the tuples, set special column values, or write them to different target tables.
3.:
Possible reasons could be:
- missing forwarding of window punctuations or the statistic tuple;
- an error in the BloomFilter configuration - you would spot this easily, because the PE goes down and the error log hints at wrong sha2 functions being used.
To troubleshoot your ITE application, I recommend enabling the following debug sinks if checking the StreamsStudio live graph is not sufficient:
ite.businessLogic.transformation.debug=on
ite.businessLogic.group.debug=on
ite.businessLogic.sink.debug=on
Run a test with a single input file only and check the flow of your record and statistic tuples. Debug sinks also write punctuation markers to the debug files.

Using $skip with the SharePoint 2013 REST API

Forgive me, I'm very new to using REST.
Currently I'm using SP2013 OData (_api/web/lists/getbytitle('<list_name>')/items?) to get the contents of a list. The list has 199 items in it, so I need to call it twice, each time asking for a different set of items. I figured I could do this by calling:
_api/web/lists/getbytitle('<list_name>')/items?$skip=100&$top=100
each time changing how many items I skip. The problem is this only ever returns the first 100 items. Is there something I'm doing wrong, or is $skip broken in the OData service?
Is there a better way to iterate through REST calls, assuming this way doesn't work or isn't practical?
I'm using JSON, with the Accept header set to application/json;odata=verbose.
I suppose the $top=100 isn't really necessary.
Edit: I've looked into it more and, though I'm not entirely sure of the terms here, $skip works fine if you're using the method introduced with SharePoint 2010, i.e., _vti_bin/ListData.svc/<list_name>?$skip=100
Actually, funnily enough, the old way doesn't set a 100-item limit on returns, so $skip isn't even necessary. But if you'd like to return only a certain segment of data, you'd have to do something like:
_vti_bin/ListData.svc/<list_name>?$skip=x&$top=y
where each time through the loop you would do x += y.
You can either use the old method which I described above, or check out my answer below for an explanation of how to do this using SP2013 OData.
Alright, I've figured it out. $skip isn't a command meant to be used at the items? level; it works only at the lists? level. But there is a way to do this, and it is actually much easier than what I was trying to do.
If you just want all the data
In the returned data, assuming the list you are calling holds more than 100 items, there will be a __next property at d/__next (assuming you are using JSON). This __next (it is a double underscore, keep that in mind; I had a few problems at first because I was trying to get d/_next, which never returned anything) is the URL to call to get the next set of items. __next only has a value if there is another set of items available to get.
I ended up creating a RequestURL variable which was initially set to the original request and was replaced with the d/__next value at the end of each pass. The loop then checked that RequestURL was not empty before starting another pass.
Forgive my lack of code; I'm using SharePoint Designer 2013 to build this, and its syntax isn't terribly descriptive - but a rough JavaScript equivalent is sketched below.
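For readers scripting against the REST API rather than using Designer, here is a minimal JavaScript sketch of the same loop (the site URL and list title are placeholders, and authentication is omitted; d.__next and the verbose-JSON Accept header are exactly as described above):

async function getAllItems(siteUrl, listTitle) {
  let items = [];
  // First request; later iterations follow d.__next instead.
  let url = siteUrl + "/_api/web/lists/getbytitle('" + listTitle + "')/items";
  while (url) {
    const response = await fetch(url, {
      headers: { Accept: 'application/json;odata=verbose' }
    });
    const data = await response.json();
    items = items.concat(data.d.results);
    // d.__next is only present while more items remain,
    // so the loop ends naturally on the last page.
    url = data.d.__next;
  }
  return items;
}

To page in fixed-size chunks instead, add a $top=x parameter to the initial URL, as described below.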
If you'd only like a small set of data
There are probably a few situations where you would only want x rows from your list each time through the loop, and that's really easy to do as well.
If you just add a $top=x parameter to your request, the __next URL that comes back with the response will give you the next x rows from your list. Eventually, when there are no rows left to return, __next won't be included in the response.
Don't forget that in order to use __next you need to have a
$skiptoken=Paged=TRUE
in the URL as well.

Ada Ada.Containers Clear Procedure Problem

Has anyone had trouble with the Clear procedure found in the Ada.Containers packages? It seems to set the container's length to zero, but once another element is added using the Append procedure, the contents of the container reappear (i.e., they never actually get removed). I've tried both Ada.Containers.Doubly_Linked_Lists and Ada.Containers.Vectors; both containers show the same behavior. Any thoughts?
It sounds to me like you found a bug in your compiler's implementation of that package. I'd report it.
I figured it out. Silly Ada. You have to be careful how you reference data. Ada likes to return copies of data instead of references to it.
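A minimal sketch of how those copy semantics can bite (illustrative code, not the original poster's; the names are made up): assigning one container to another, like many element accessors, yields a copy, so clearing the original leaves the copy untouched:

with Ada.Containers.Vectors;
with Ada.Text_IO; use Ada.Text_IO;

procedure Copy_Demo is
   package Int_Vectors is new Ada.Containers.Vectors
     (Index_Type => Positive, Element_Type => Integer);
   use Int_Vectors;

   V    : Vector;
   Snap : Vector;
begin
   V.Append (1);
   V.Append (2);

   Snap := V;   -- assignment copies the whole container
   V.Clear;     -- V is now empty; Snap still holds 1 and 2

   Put_Line ("V length:    "
             & Ada.Containers.Count_Type'Image (V.Length));
   Put_Line ("Snap length: "
             & Ada.Containers.Count_Type'Image (Snap.Length));

   V.Append (3);  -- V now holds only 3; old contents "reappear" only
                  -- if code elsewhere appends from a stale copy
end Copy_Demo;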