Thales Payshield command "JS" - hsm

Does anyone know the format of the "JS" command (ARQC Verification and/or ARPC Generation - CUP) under license HSM9-LIC031?
The "JS" command is one of the China UnionPay (CUP) commands.
I am using the HSM, but I don't know the command format.
The command looks like Command Code 2A JS.

You can read more about the JS command in the Thales HSM documentation:
payShield 9000 v2.3b
Host Command Reference Manual Addendum for Optional License LIC031 (CUP Commands)
pages 4-5
That section also covers the full format of the JS command.

Related

ipxe uefi boot NBP filesize is 0 bytes

I want to install the system using iPXE in UEFI mode and embed a script into ipxe.efi. The script is the following:
[root@localhost src]# cat uefi.ipxe
#!ipxe
dhcp
chain http://192.168.222.1/ipxeboot/menu.ipxe
compile:
make bin-x86_64-efi/ipxe.efi EMBED=uefi.ipxe
The installation fails with the error shown in a screenshot (not reproduced here); per the title, the NBP filesize is 0 bytes.
I think you need to provide some more context.
I assume you want to load an iPXE script over HTTP from 192.168.222.1. So you have to run an HTTP server on that IP which serves an iPXE script, and that script has to provide instructions on where to load the OS from; a minimal example is sketched below.
I wrote a tutorial on how to achieve this here:
https://medium.com/@peter.bolch/how-to-netboot-with-ipxe-6a41db514dee
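For illustration, a minimal menu.ipxe served from that address might look like the sketch below; the kernel and initrd paths are placeholders for your installer's files, not values taken from the question.
#!ipxe
# Placeholder installer kernel/initrd -- replace with your distribution's files.
kernel http://192.168.222.1/ipxeboot/vmlinuz initrd=initrd.img
initrd http://192.168.222.1/ipxeboot/initrd.img
boot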

400-unknown or invalid client_id for forge-bim360-data.connector.dashboard

I have tried to implement this sample: https://github.com/Autodesk-Forge/forge-bim360-data.connector.dashboard
I have updated this part:
npm install
set FORGE_CLIENT_ID=<<YOUR CLIENT ID FROM DEVELOPER PORTAL>>
set FORGE_CLIENT_SECRET=<<YOUR CLIENT SECRET>>
set FORGE_CALLBACK_URL=<<your callback url of Forge e.g. http://localhost:3000/oauth/callback>>
set DC_CALLBACK_URL=<<"your ngrok address here: e.g. http://abcd1234.ngrok.io/job/callback">>
I am getting the error: 400 - Unknown or invalid client_id.
Firstly, I rarely use Windows OS now. I simply copied the guideline for setting environment variables from other samples, while most of the time I work in debug mode (setting the environment variables in launch.json).
Checking the Readme again, I found the wording is:
Windows (use Node.js command line from Start menu)
i.e. it asks you to type those commands into the Node.js command line, not the terminal of VS Code. That is why it always reports that the client id is not defined: the variables were never set in the environment at all.
The correct way is to open the Node.js command line and run the commands there.
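For the debug-mode alternative mentioned above, the same variables can be set in launch.json instead. A sketch is below; the program entry point and the placeholder values are assumptions, not taken from the sample.
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Launch data connector sample",
      // Assumed entry point -- point this at the sample's actual start script.
      "program": "${workspaceFolder}/start.js",
      "env": {
        "FORGE_CLIENT_ID": "<<YOUR CLIENT ID FROM DEVELOPER PORTAL>>",
        "FORGE_CLIENT_SECRET": "<<YOUR CLIENT SECRET>>",
        "FORGE_CALLBACK_URL": "http://localhost:3000/oauth/callback",
        "DC_CALLBACK_URL": "http://abcd1234.ngrok.io/job/callback"
      }
    }
  ]
}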

Github Action Run - Security import is showing "One or more parameters passed to a function were not valid"error

I built the input file (decoded a base64 file into a p12 file) as CERTIFICATE_PATH, P12_PASSWORD is the password stored as a secret, and KEYCHAIN_PATH is defined. When I run the command on the CLI, I get a "1 item imported" success message, but when I run it from a *.yml file in a GitHub Action, I get a "security: SecKeychainItemImport: One or more parameters passed to a function were not valid." error. Any suggestions?
security import $CERTIFICATE_PATH -P $P12_PASSWORD -A -t cert -f pkcs12 -k $KEYCHAIN_PATH
CERTIFICATE_PATH is the file that contains the cert.p12 data;
KEYCHAIN_PATH is TEMP/app-signing.keychain-db.
Another possible reason in GitHub Actions is that you are using the wrong environment.
Take a look at this: Difference between Github's "Environment" and "Repository" secrets?
Set the right environment, as in the sketch below:
environment: production
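A sketch of where that line goes in the workflow; the job name, step name, and secret name here are illustrative assumptions, not taken from the question.
jobs:
  build:
    runs-on: macos-latest
    # Secrets defined under the "production" environment become available to this job.
    environment: production
    steps:
      - name: Import signing certificate
        env:
          P12_PASSWORD: ${{ secrets.P12_PASSWORD }}   # environment-level secret
        run: security import "$CERTIFICATE_PATH" -P "$P12_PASSWORD" -A -t cert -f pkcs12 -k "$KEYCHAIN_PATH"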
Found the issue: I was passing the wrong cert file. Once I added the correct file in the security import step, I was able to get it working.

How to import/load/run mysql file using golang?

I'm trying to run/load an SQL file into a MySQL database using this Go statement, but it is not working:
exec.Command("mysql", "-u", "{username}", "-p{db password}", "{db name}", "<", "{file abs path}")
But when I use the following command in the Windows command prompt, it works perfectly.
mysql -u {username} -p{db password} {db name} < {file abs path}
So what is the problem?
As others have answered, you can't use the < redirection operator because exec doesn't use the shell.
But you don't have to redirect input to read an SQL file. You can pass arguments to the MySQL client to use its source command.
exec.Command("mysql", "-u", "{username}", "-p{db password}", "{db name}",
"-e", "source {file abs path}" )
The source command is a builtin of the MySQL client. See https://dev.mysql.com/doc/refman/5.7/en/mysql-commands.html
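A minimal, runnable sketch of that approach; the credentials and file path below are placeholders, not values from the question.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Placeholder credentials and path -- substitute your own values.
	user := "myuser"
	password := "mypassword"
	dbName := "mydb"
	sqlFile := "/path/to/dump.sql"

	// "-e" tells the mysql client to execute the given statement; "source"
	// makes the client read the file itself, so no shell redirection is needed.
	cmd := exec.Command("mysql",
		"-u", user,
		"-p"+password,
		dbName,
		"-e", fmt.Sprintf("source %s", sqlFile))

	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("mysql failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}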
Go's exec.Command runs the first argument as a program with the rest of the arguments as parameters. The '<' is interpreted as a literal argument.
e.g. exec.Command("cat", "<", "abc") is the following command in bash: cat \< abc.
To do what you want, you have two options.
Run (ba)sh with the command as an argument: exec.Command("bash", "-c", "mysql ... < full/path")
Pipe the content of the file in manually, as sketched below. See https://stackoverflow.com/a/36383984/8751302 for details.
The problem with the bash version is that it is not portable between operating systems: it won't work on Windows.
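A minimal, runnable sketch of the second option; the credentials and file path are placeholders.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Placeholder path and credentials -- substitute your own values.
	sqlFile, err := os.Open("/path/to/dump.sql")
	if err != nil {
		log.Fatal(err)
	}
	defer sqlFile.Close()

	cmd := exec.Command("mysql", "-u", "myuser", "-pmypassword", "mydb")
	cmd.Stdin = sqlFile // plays the role of "< dump.sql", with no shell involved
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Run(); err != nil {
		log.Fatalf("mysql failed: %v", err)
	}
}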
Go's os/exec package does not use the shell and does not support redirection:
Unlike the "system" library call from C and other languages, the os/exec package intentionally does not invoke the system shell and does not expand any glob patterns or handle other expansions, pipelines, or redirections typically done by shells.
You can call the shell explicitly to pass arguments to it:
cmd := exec.Command("/bin/sh", "-c", yourBashCommand)
Depending on what you're doing, it may be helpful to write a short bash script and call it from Go.

Apache Drill with Kerberos

Does anyone know how to enable Kerberos with Apache Drill? Is it possible? I can't seem to find any documentation on it, or any questions/answers floating around with information on it. I am currently running a CDH cluster.
I am getting this error when trying to use HDFS with Drill:
Error: PERMISSION ERROR: SIMPLE authentication is not enabled. 
Available:[TOKEN, KERBEROS]
HDFS + Kerberos integration isn't currently supported / tested / documented. Vote on this ticket to track when it becomes available:
https://issues.apache.org/jira/browse/DRILL-3584
There isn't any documentation from the Drill team about how to enable Kerberos, and they haven't tested Kerberos with Drill, but Drill engineering does believe that it should work.
In order to gain access to the cluster once it is Kerberized, you must configure certain files, as follows.
Make an HDFS superuser account as indicated in this Cloudera doc. On the main node, run
•sudo kadmin.local
In addition, add an 'hdfs' principal with this command
•addprinc hdfs@LOCALDOMAIN -- where LOCALDOMAIN is the Kerberos realm
In order to enable authentication with Kerberos, we also need to copy the file hadoop-yarn-api.jar into Drill's classpath. Example given below
•cp /opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/hadoop/client/hadoop-yarn-api.jar ~/apache-drill/jars/
The step above and the three that follow must be performed on each node of the cluster on which Apache Drill is installed.
Next, Drill's conf/core-site.xml file should be edited to contain the following snippet of XML. You might have to copy this file from /etc/hadoop/conf.cloudera.yarn/core-site.xml or a similar path.
<property>
<name>hadoop.security.authentication</name>
<value>kerberos</value>
</property>
After this step, you will also need to add the following XML snippet to Drill's core-site.xml file. In this instance, hdfs/_HOST@LOCALDOMAIN is my principal property; the value can be found in the hdfs-site.xml file.
<property>
<name>dfs.namenode.kerberos.principal</name>
<value>hdfs/_HOST@LOCALDOMAIN</value>
</property>
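Put together, the relevant part of Drill's conf/core-site.xml then looks roughly like this (LOCALDOMAIN is the placeholder realm from above; use your actual realm):
<configuration>
  <!-- LOCALDOMAIN is a placeholder realm; the principal must match hdfs-site.xml -->
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>dfs.namenode.kerberos.principal</name>
    <value>hdfs/_HOST@LOCALDOMAIN</value>
  </property>
</configuration>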
All that is left to do is create an 'hdfs' Kerberos ticket for the user we're logged in as
•kinit hdfs -- hdfs is the super user
Then start up each of the drillbits
•/opt/apachedrillfolder/bin/drillbit.sh start
So now Drill has both the configuration and the authority to use our kerberized HDFS store. Give it a shot by opening up a Drill prompt (drill-conf) and trying a query, for example the one sketched below.
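Assuming an HDFS storage plugin named hdfs is configured and a sample file exists at /tmp/employee.json (both are assumptions, not from the answer above), a quick smoke test could be:
-- 'hdfs' is the assumed storage plugin name; adjust the path to a file that actually exists
SELECT * FROM hdfs.`/tmp/employee.json` LIMIT 10;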