Certificate Signing

I know that in the RSA algorithm a public key is used to encrypt data, which can then be decrypted only with the private key.
When a digital certificate is signed, the hash of the certificate is signed using the private key of the root CA, and during validation the public key is used to verify the hash. In this case signing means encrypting. Also, sha1RSA is one of the algorithms used for signing a certificate.
So is the private key used for encryption and the public key used for decryption of the hash?
Is this possible with RSA, or have I understood it wrong?

This is quite logical. The private key is known only to its owner, while the public key is known to everyone.
When doing asymmetric encryption, it's important that everyone can produce an encrypted message (by using the public key), but only the recipient (the private key holder) will be able to read it.
When doing digital signatures, it's important that everyone can verify the signature (by using the public key), but only the creator (the private key holder) is able to produce it.
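Mathematically, yes. In textbook RSA, signing is the same modular exponentiation as decrypting, performed with the private exponent, and verification uses the public exponent. A toy sketch with deliberately tiny, insecure parameters (real schemes such as sha1RSA first pad the hash, e.g. with PKCS#1 v1.5, so signing is not literally raw encryption of the hash):

```python
# Toy RSA parameters: p = 61, q = 53 -- insecure, for illustration only
n = 61 * 53               # public modulus (3233)
e = 17                    # public exponent
d = 2753                  # private exponent: (e * d) % ((61 - 1) * (53 - 1)) == 1

def sign(h: int) -> int:
    # "Signing" the hash h = modular exponentiation with the PRIVATE exponent
    return pow(h, d, n)

def verify(signature: int) -> int:
    # Verification = exponentiation with the PUBLIC exponent,
    # which recovers the hash the signer committed to
    return pow(signature, e, n)

h = 1234                  # stand-in for a certificate hash; must be < n
assert verify(sign(h)) == h
print("recovered hash matches:", verify(sign(h)) == h)
```

So the key roles are exactly reversed relative to encryption: the private key produces the signature and anyone holding the public key can check it.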

Related

Ways of overriding the behavior of the lexik/jwt-authentication-bundle to allow n number of public keys from an external source

Some background:
We have many applications, each with its own auth provider, its own public/private key pair, and its own key rotation.
When a new application is spun up or rotates its keys, the public key is persisted in a key store for other applications to pick up.
I have a Symfony 5.4 service that needs to authenticate users from these applications. The JWT they provide includes the KID in its header, so the flow would be:
Receive request with JWT
Get KID from header
Lookup KID in our key store and load the public key
Verify that the JWT signature matches.
From then on the flow is as you would expect: load the JWSUser etc., and the firewall works the way it should.
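Steps 2 and 3 of the flow above (extracting the KID from the JWT header and looking up the matching public key) are framework-independent and can be sketched as follows. The key-store dictionary and the `kid` value are hypothetical placeholders:

```python
import base64
import json

# Hypothetical in-memory key store: kid -> PEM public key (illustration only)
KEY_STORE = {
    "app-1-2024": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----",
}

def public_key_for_jwt(token: str) -> str:
    # A JWT is three base64url segments: header.payload.signature
    header_b64 = token.split(".")[0]
    # Re-add the base64 padding that JWTs strip off
    header_b64 += "=" * (-len(header_b64) % 4)
    header = json.loads(base64.urlsafe_b64decode(header_b64))
    kid = header["kid"]        # step 2: get KID from the header
    return KEY_STORE[kid]      # step 3: look up the public key in the store

# Build a token with only a meaningful header for demonstration
hdr = base64.urlsafe_b64encode(
    json.dumps({"alg": "RS256", "kid": "app-1-2024"}).encode()
).decode().rstrip("=")
print(public_key_for_jwt(hdr + ".e30.sig").splitlines()[0])
```

The interesting part for the bundle is only where this lookup hook lives; the decoding itself is trivial.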
I could just grab the key store and generate a large config file from it, but that is less than ideal at runtime; looking through the code, the bundle tries every alternative key until one verifies successfully, and that does not scale.
As far as I can see I have two options:
Extend Lexik\Bundle\JWTAuthenticationBundle\Services\JWSProvider\LcobucciJWSProvider with my own and override the verify method to go and find the right public key first.
Create my own JWSProvider that implements JWSProviderInterface and reproduce most of the logic except for how it gets public keys for verification.
Obviously of those two, #1 looks simplest; however, LcobucciJWSProvider is marked @final in its docblock even though the final keyword is not used in the class itself, so it probably shouldn't be extended.
Am I right in thinking those are my two options?
I was initially hoping I could just implement my own keyloader but it looks like they don't ever receive information about the requested key, just if the public or private key is wanted.

Difference between IERC20 and just address

There are two ways of declaring an ERC20 token in other contracts:
IERC20 public token, and then calling it like token.transfer(...);
address public token, and then calling it like IERC20(token).transfer(...).
Is there any difference between these two declarations? If so, which is preferred?
The only difference is during compilation, when the compiler would give you an error if you tried to use one type where the other is required.
In terms of runtime, they are both (160-bit) Ethereum addresses.
In your example, it makes more sense to use the type IERC20, because that is the intended type of the variable token.

IERC20 public declaration with another address?

I am new to solidity, but checking through a specific contract I found the following line of code in the IERC20 declaration:
IERC20 public "TOKEN NAME" = IERC20("THE ADDRESS OF ANOTHER CONTRACT");
This code was found in a contract that is effectively a fork of another project, but the developers say they are unrelated. Of course, people are just FOMOing into the token. I know this forum is not for that type of discussion, so I'll abstain from it.
However, from a solidity coding perspective, why would one write this line of code directly referencing another contract address (the forked address) when making the IERC20 declaration - what does this do, is there a purpose to this?
It seems to me that this is easier and more reliable. Alternatively, you can pass this address in constructor parameters, or provide a special method to set it.
IERC20 is an interface that defines the expected function arguments and return values.
It helps validate that the caller is passing the correct data types and number of arguments, and helps parse the returned data into the expected types.
Let's demonstrate with a very simple interface:
interface IGame {
function play(uint256 randomNumber) external returns (bool won);
}
Elsewhere in your contract, you define a variable that uses this interface:
IGame game = IGame(0xthe_game_address);
You can then directly call the other contract's methods defined in the interface and pass the return values to your variables.
bool didIWin = game.play(1);
The call would fail if the game contract didn't have the play method or didn't return any value (plus in a few other cases).
As for why the address is hardcoded, it's probably just to simplify development, as Mad Jackal already said in their answer.
One more plausible reason in some cases is to gain more trust by showing users that the contract admins are not able to cheat you by changing the destination address (possibly to a contract made by them doing whatever they want).
Edit: If the other contract's address is really unrelated and unused (meaning the fork does not call it), it's probably just a human error and the developer forgot to remove it.

Google compute project-wide SSH keys with jclouds

I'm trying to launch google compute instances from java code using jclouds. It's mostly working, however I'd like to use the project-wide SSH key I've defined instead of having jclouds generate a new user/key credential.
According to the README here - https://github.com/apache/jclouds/tree/master/providers/google-compute-engine:
For an instance to be sshable one of the following must happen: 1 - the project's metadata has an adequately built "sshKeys" entry and a corresponding private key is provided in GoogleComputeEngineTemplateOptions when createNodesInGroup is called. 2 - an instance of GoogleComputeEngineTemplateOptions with an adequate public and private key is provided.
I'm trying to do 1) above. I've correctly configured the project's metadata (I can use it to connect to manually-created instances that don't have jclouds-generated credentials), but I can't work out how to provide that key to GoogleComputeEngineTemplateOptions?
Neither GoogleComputeEngineTemplateOptions.Builder.installPrivateKey(String key) nor GoogleComputeEngineTemplateOptions.Builder.overrideLoginPrivateKey(String key) seems to work.
The documentation is pretty sparse - anyone know how to do it?
jclouds will create a key by default if you don't provide one. You could use the following to provide your auth private key and tell jclouds not to generate a new one:
TemplateOptions opts = computeService.templateOptions()
.as(GoogleComputeEngineTemplateOptions.class)
.overrideLoginPrivateKey(privateKey)
.autoCreateKeyPair(false);

What if the public key is lost? Is that a security issue?

DSA or RSA has a private key and a public key; the private key must be kept safe, and the public key is uploaded to any host you want to access.
But what if the public key is lost, or revealed to everyone, for example in a blog post? Is that a security issue?
No, it's not an issue in the slightest: it's meant to be public anyway.
Depending on what you use your key for, you may even NEED it to be available to everyone (think GPG keys for signing email).
As long as your private key is safe, you have nothing to worry about. If you lose your copy of the public key, you can simply regenerate it from the private key (e.g. with ssh-keygen -y for SSH keys).
There is no problem in everyone knowing the public key. You can share it anywhere you like, even in a blog. It is only important to keep the private key secret.