Regarding the Export Control Classification Number (ECCN) of dom4j, iCal4j and backport-util-concurrent

We would like to know the details below in order to use dom4j, iCal4j and backport-util-concurrent in a commercial product:
- Can anyone tell me whether the Java code contains encryption? Or, even better:
- Can anyone tell me what the export code (ECCN) is for dom4j, iCal4j and backport-util-concurrent?
- Can anyone tell me what export code (ECCN) to use when distributing a product that includes dom4j, iCal4j and backport-util-concurrent?
More info on ECCNs: http://en.wikipedia.org/wiki/Export_Control_Classification_Number
With Regards,
Kasim Basha Shaik

The iCal4j ECCN is n/a (not applicable). Since iCal4j is not developed in the US, I don't believe it is subject to export restrictions. Either way, there is not really any encryption code in iCal4j; the only encoding is BASE64 encoding of binary values.
(The information above was provided by Ben, the creator of iCal4j; URL here.)
In both the dom4j source (from here) and the backport-util-concurrent source (from here), I scanned the code for the following keywords:
- AlgorithmParameters
- CertificateFactory
- CertPathBuilder
- CertPathValidator
- CertStore
- Cipher
- AES
- DES
- DESede
- RSA
- KeyFactory
- KeyGenerator
- Hmac
- KeyPairGenerator
- KeyStore
- Mac
- MessageDigest
- SecretKeyFactory
- Signature
- TransformService
- XMLSignatureFactory
No encryption-related code was found; the encryption keywords above were taken from here.
From the above code scan, I conclude that the ECCN for dom4j and backport-util-concurrent is n/a.
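For what it's worth, such a scan can be scripted rather than done by eye. A minimal sketch, assuming the library sources are unpacked under some local directory; the keyword list is the one above, everything else (function names, the .java filter) is my own choice:

```python
import os

# Keywords associated with the JCA/JCE encryption APIs (the list above)
CRYPTO_KEYWORDS = [
    "AlgorithmParameters", "CertificateFactory", "CertPathBuilder",
    "CertPathValidator", "CertStore", "Cipher", "AES", "DES", "DESede",
    "RSA", "KeyFactory", "KeyGenerator", "Hmac", "KeyPairGenerator",
    "KeyStore", "Mac", "MessageDigest", "SecretKeyFactory", "Signature",
    "TransformService", "XMLSignatureFactory",
]

def scan_text(text, keywords=CRYPTO_KEYWORDS):
    """Return the sorted subset of keywords that occur in the given source text."""
    return sorted(k for k in keywords if k in text)

def scan_tree(root):
    """Scan every .java file under root; return {path: [keywords found]}."""
    hits = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(".java"):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="replace") as f:
                    found = scan_text(f.read())
                if found:
                    hits[path] = found
    return hits
```

Note that bare substring matching over-reports (e.g. "DES" matches an identifier like DESIGN, "Mac" matches ordinary words), so any hits still need manual review before drawing an ECCN conclusion.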


AllenNLP BERT SRL input format ("OntoNotes v. 5.0 formatted")

The goal is to train the BERT SRL model on another data set. According to the configuration, it requires conll-formatted-ontonotes-5.0.
Natively, my data comes in a CoNLL format, and I converted it to the conll-formatted-ontonotes-5.0 format of the GitHub edition of OntoNotes v. 5.0. Reading the data works and training seems to run, except that precision remains at 0. I suspect that either the encoding of SRL arguments (BIO or phrasal?) or the column structure (other CoNLL-format editions of OntoNotes differ here) does not match the expected input. Alternatively, the error may arise because the role labels are hard-wired in the code: I followed the reference data in using the long form (ARGM-TMP), but you often see the short form (AM-TMP) in other data.
The question is which dataset and format is expected here. I guess it's one of the CoNLL/Skel formats for OntoNotes 5.0 with a restored WORD column, but:
- The CoNLL edition doesn't seem to be shipped with the LDC edition of OntoNotes.
- It does not seem to be the format of the "conll-formatted-ontonotes-5.0" edition of OntoNotes v. 5.0 on GitHub provided by the OntoNotes creators.
- There is at least one other CoNLL/Skel edition of OntoNotes 5.0 data as part of PropBank. This differs from the other one in leaving out three columns and in the encoding of predicates. (For parts of my data, this is the native format.)
The SrlReader documentation mentions BIO (IOBES) encoding. This has indeed been used in other CoNLL editions of PropBank data, but not in the above-mentioned OntoNotes corpora. Other such formats are, for example, the CoNLL-2008 and CoNLL-2009 formats and their variants.
Before I start reverse-engineering the SrlReader, does anyone have a data snippet at hand so that I can prepare my data accordingly?
conll-formatted-ontonotes-5.0 version of my data (sample from EWT corpus):
google/ewt/answers/00/20070404104007AAY1Chs_ans.xml 0 0 where WRB (TOP(S(SBARQ(WHADVP*) - - - - * (ARGM-LOC*) * * -
google/ewt/answers/00/20070404104007AAY1Chs_ans.xml 0 1 can MD (SQ* - - - - * (ARGM-MOD*) * * -
google/ewt/answers/00/20070404104007AAY1Chs_ans.xml 0 2 I PRP (NP*) - - - - * (ARG0*) * * -
google/ewt/answers/00/20070404104007AAY1Chs_ans.xml 0 3 get VB (VP* get 01 - - * (V*) * * -
google/ewt/answers/00/20070404104007AAY1Chs_ans.xml 0 4 morcillas NNS (NP*) - - - - * (ARG1*) * * -
The "native" format is the one of the CoNLL-2012 edition; see cemantix.org/conll/2012/data.html for how to create it.
The Ontonotes class that reads it may, however, encounter difficulties when parsing "native" CoNLL-2012 data, because the CoNLL-2012 preprocessing scripts can produce invalid parse trees. Parsing these with NLTK then leads to a ValueError such as
ValueError: Tree.read(): expected ')' but got 'end-of-string'
at index 1427.
"...LT#.#.) ))"
There is no direct way to solve that at the data level, because the string that is parsed is an intermediate representation, but not the original data. If you want to process CoNLL-2012 data, the ValueError has to be caught, cf. https://github.com/allenai/allennlp/issues/5410.
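A minimal sketch of such a guard, with the tree parser passed in as a callable (with NLTK this would be nltk.Tree.fromstring; returning None on failure is my own choice, not AllenNLP's behaviour):

```python
def parse_tree_safely(tree_string, parse_fn):
    """Try to parse a bracketed tree string; return None for the
    invalid trees that the CoNLL-2012 preprocessing can produce."""
    try:
        return parse_fn(tree_string)
    except ValueError:
        # e.g. "Tree.read(): expected ')' but got 'end-of-string'"
        return None

# Usage with NLTK (not run here):
#   from nltk import Tree
#   tree = parse_tree_safely(parse_cell, Tree.fromstring)
```

Sentences whose tree comes back as None then have to be skipped or repaired before they reach the reader.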

How to build and send an IDOC from MII to SAP ECC using IDOC_Asynchronous_Inbound

We have a custom built legacy application that collects data from a SQL server database, builds an IDOC and then "sends" that IDOC to ECC. (This application was written in VB6 and uses the SAPGUI 6 SDK to accomplish this.)
I'm attempting to decommission this solution and replace it with a solution built in MII.
As far as I can tell, I need to create the IDOC in MII using IDOC_Asynchronous_Inbound, but I'm stuck on how to populate the required fields.
IDOC_Asynchronous_Inbound has two segments: IDOC_CONTROL_REC_40 and IDOC_DATA_REC_40
I guessed which fields to fill in the IDOC_CONTROL_REC_40/item segment by looking at the source code of the old VB application. I think this should do:
IDOC_INBOUND_ASYNCHRONOUS/TABLES/IDOC_CONTROL_REC_40/item
- IDOCTYP: WMMBID01
- MESTYP: WMMBXY
- SNDPRN: <value>
- SNDPRT: LI
- SNDPOR: <value>
- RCVPRN: <value>
- RCVPRT: LS
- EXPRSS: X
Looking at the source code of the old VB app, I should now add a segment of type E1MBXYH with the following fields filled:
- BLDAT: <date>
- BUDAT: <date>
- TCODE: MB31
- XBLNR: <value>
- BKTXT: <value>
Based on guesswork and some blog posts, I'm guessing I have to add this segment as an item segment to the IDOC_DATA_REC_40 segment.
My guess is I should then add item segments of type E1MBXYI for all of the 'records' I'd like to send to SAP with the following fields:
- MATNR: <value>
- WERKS: <value>
- LGORT: <value>
- CHARG: <value>
- BWART: 261
- ERFMG: <value>
- SHKZG: H
- ERFME: <value>
- AUFNR: <value>
- SGTXT: <value>
Now, looking at the IDOC_DATA_REC_40 segment in MII, these are the fields that are available:
- SEGNAM
- MANDT
- DOCNUM
- SEGNUM
- PSGNUM
- HLEVEL
- SDATA
My guess is that the segment name should go into SEGNAM and the data (properly structured/spaced) should go into SDATA. I'm not sure what, if anything, I should put in the other fields. (I have the description file for this IDOC type, so I know how to structure the data that goes into the SDATA field... counting spaces, yay!)
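For what it's worth, the space counting for SDATA can be scripted. A sketch, assuming you read (field name, width) pairs from the IDOC description file; the widths below are made up for illustration and are not the real E1MBXYH definition:

```python
# (name, width) pairs as read from the IDOC segment description file.
# These widths are illustrative only, not the real E1MBXYH layout.
E1MBXYH_LAYOUT = [
    ("BLDAT", 8), ("BUDAT", 8), ("TCODE", 20), ("XBLNR", 16), ("BKTXT", 25),
]

def build_sdata(layout, values):
    """Left-justify each value to its field width and concatenate,
    so every field starts at its fixed offset within SDATA."""
    return "".join(str(values.get(name, "")).ljust(width)[:width]
                   for name, width in layout)

sdata = build_sdata(E1MBXYH_LAYOUT, {
    "BLDAT": "20240101", "BUDAT": "20240101", "TCODE": "MB31",
})
```

Fields that are absent from the values dict come out as all spaces, which keeps the offsets of the following fields intact.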
To hopefully clarify how the IDOC should be structured, this is a (link to a) screenshot of an IDOC posted by the current VB application:
screenshot of an IDOC in SAP showing the data structure
I hope someone here can confirm I'm on the right track in filling the segments and that there's someone who knows which fields I should fill in the data segments.
Kind regards,
Thomas
P.S. Some of the resources consulted:
How to create and send Idocs to SAP using SAP .Net Connector 3
Goods movement IDOC SAP documentation
How to send IDOCs from SAP MII to SAP ERP
P.P.S. Full disclosure: I've also posted this question on the SAP Community Questions & Answers board.
Correctly dealing with SAP IDocs is unfortunately not as easy as it looks at first glance. Maybe it would be a good idea to have a look at the SAP Java IDoc Class Library, as mentioned here:
SAP .Net Connector 3.0 - How can I send an idoc from a non-SAP system?
Even if you would not like to switch to Java, it could at least be used as a reference implementation to see how the remote function modules have to be filled with the IDoc data to send.
The SAP Java IDoc Class Library can be downloaded together with the SAP Java Connector from here.
I have no MII system at hand, but you'd better thoroughly examine the IDoc documentation rather than read the tea leaves. It can contain helpful hints on how to fill one segment field or another.
Go to transaction WE60 and enter your segment names (IDOC_CONTROL_REC_40 / IDOC_DATA_REC_40) or the IDoc definition name IDOC_Asynchronous_Inbound.
It may not be very helpful, but it is better than nothing.

Config File Checksum guessing (CRC)

I'm currently "hacking" an old 3D printer, built in 1996. Its software runs on an old Windows PC. I need to modify some parameters that are not accessible from the front end, so I wanted to edit the config files. But if I modify anything, the file can no longer be read. I noticed that there is a checksum at the end of the file, and I'm not really a checksum expert. I assume that, while loading the file, this checksum is recalculated and compared to the one at the end.
I'm having trouble finding out which checksum algorithm is used.
What I have already found out: I think it's not just an addition of the bytes in the file. If I swap two characters, a checksum generated by addition would not change, yet the software won't accept such a file.
I'm guessing it's some kind of CRC-16, because a checksum looks like this:
0x4f20
As I have calculated that number with several common CRC-16 parameter sets and could not find a match for "4f20", I assume it must be a custom CRC-16.
Here is a complete sample file:
PACKET noname
style 502
last_modified 1511855084 # Tue Nov 28 08:44:44 2017
STRUCTURE MACHINE_OVRL
PARAM distance_units
Value = "millimeters"
ENDPARAM
PARAM language
Value = "English"
ENDPARAM
ENDSTRUCTURE
ENDPACKET
checksum 0x4f20
I think either the checksum itself or the complete line "checksum 0x4f20" is excluded from the calculation, because otherwise the stored value could never match (?)
Any help is appreciated.
Edit: I have some more files with checksums, of course, but they are a lot longer than this file. If needed, I could provide them too.
RevEng was written for this purpose. Given several examples of the input and the associated CRCs, RevEng will derive the CRC parameters. If it is a CRC.
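If you would like to try the usual parameter sets by hand before reaching for RevEng, a generic bit-by-bit CRC-16 is enough to sweep them; the parameter names below follow the common CRC model (poly, init, refin, refout, xorout):

```python
def reflect(value, bits):
    """Reverse the lowest `bits` bits of value."""
    out = 0
    for _ in range(bits):
        out = (out << 1) | (value & 1)
        value >>= 1
    return out

def crc16(data, poly=0x1021, init=0xFFFF, refin=False, refout=False, xorout=0x0000):
    """Generic bit-by-bit CRC-16 over the given bytes.
    The defaults correspond to CRC-16/CCITT-FALSE."""
    crc = init
    for byte in data:
        if refin:
            byte = reflect(byte, 8)
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    if refout:
        crc = reflect(crc, 16)
    return crc ^ xorout
```

Computing this over the file bytes, with and without the trailing "checksum 0x4f20" line, for a handful of known parameter sets and comparing against 0x4f20 would quickly confirm or rule out the standard variants; RevEng automates exactly this search, including non-standard parameters.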

How to decipher the structure of this mysql data?

I have this mysql data field:
a:3:{s:13:"bank_transfer";a:5:{s:13:"orders_prefix";s:2:"BT";s:6:"status";s:1:"1";s:9:"long_name";s:13:"Bank transfer";s:9:"surcharge";s:1:"0";s:13:"email_message";s:13:"Bank transfer";}s:16:"cash_on_delivery";a:5:{s:13:"orders_prefix";s:3:"COD";s:6:"status";s:1:"1";s:9:"long_name";s:16:"Cash on delivery";s:9:"surcharge";s:1:"6";s:13:"email_message";s:16:"Cash on delivery";}s:6:"paypal";a:10:{s:13:"orders_prefix";s:6:"PAYPAL";s:6:"status";s:1:"1";s:9:"long_name";s:21:"Paypal / Credit cards";s:9:"surcharge";s:1:"0";s:13:"currency_code";s:3:"USD";s:6:"region";s:2:"US";s:7:"sendbox";s:1:"1";s:5:"email";s:26:"californiadriven#gmail.com";s:13:"payment_limit";s:4:"8000";s:13:"email_message";s:20:"Credit Card / Paypal";}}
I need to modify it to include the authorize.net payment method. Does anyone know what kind of structure/format the data above is?
Never mind, I figured it out:
- a:# is the number of array elements.
- s:# is the number of string characters.
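For the record, this is the output of PHP's serialize() function, so the safest way to edit it is to unserialize(), modify and serialize() again in PHP. Purely as an illustration of the structure, here is a minimal sketch of a reader for just the a:/s:/i: tokens (note that PHP's s: lengths are byte counts, so this simplified version assumes ASCII content and handles no other token types):

```python
def _parse(s, i):
    """Parse one PHP-serialized value starting at index i;
    return (value, index just past the value)."""
    t = s[i]
    if t == "s":                       # s:<len>:"<bytes>";
        j = s.index(":", i + 2)
        n = int(s[i + 2 : j])
        start = j + 2                  # skip :"
        return s[start : start + n], start + n + 2   # skip ";
    if t == "i":                       # i:<int>;
        j = s.index(";", i)
        return int(s[i + 2 : j]), j + 1
    if t == "a":                       # a:<count>:{<key><value>...}
        j = s.index(":", i + 2)
        n = int(s[i + 2 : j])
        i = j + 2                      # skip :{
        out = {}
        for _ in range(n):
            key, i = _parse(s, i)
            val, i = _parse(s, i)
            out[key] = val
        return out, i + 1              # skip }
    raise ValueError(f"unsupported token {t!r} at index {i}")

def php_unserialize(s):
    value, _ = _parse(s, 0)
    return value
```

Running this over the field above yields a dict of payment methods keyed by "bank_transfer", "cash_on_delivery" and "paypal", each mapping to its own dict of settings; an authorize.net entry would be one more key/value pair at the top level.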

How to retrieve useful system information in java?

What system information is useful, especially when tracing an exception or other problems, in a Java application?
I am thinking of details about exceptions, Java/OS information, memory/object consumption, I/O information, environment/encodings, etc.
Besides the obvious (the exception stack trace), the more info you can get, the better. So you should get all the system properties as well as the environment variables. Also, if your application has some settings, get all their values. Of course, you should put all this info into your log file; I used System.out here for simplicity:
System.out.println("----Java System Properties----");
System.getProperties().list(System.out);
System.out.println("----System Environment Variables----");
Map<String, String> env = System.getenv();
Set<String> keys = env.keySet();
for (String key : keys) {
    System.out.println(key + "=" + env.get(key));
}
In most cases this will be "too much" information, and the stack trace alone will be enough. But once you hit a tough issue, you will be happy to have all that "extra" information.
Check out the Javadoc for System.getProperties() which documents the properties that are guaranteed to exist in every JVM.
For pure java applications:
System.getProperty("org.xml.sax.driver")
System.getProperty("java.version")
System.getProperty("java.vm.version")
System.getProperty("os.name")
System.getProperty("os.version")
System.getProperty("os.arch")
In addition, for java servlet applications:
response.getCharacterEncoding()
request.getSession().getId()
request.getRemoteHost()
request.getHeader("User-Agent")
pageContext.getServletConfig().getServletContext().getServerInfo()
One thing that really helps me is to see where my classes are getting loaded from:
obj.getClass().getProtectionDomain().getCodeSource().getLocation();
Note: the ProtectionDomain can be null, as can the CodeSource, so do the needed null checks.