I have read this stackoverflow post:
How to create an IPFS compatible multihash
$ echo "Hello World" | ipfs add -n
added QmWATWQ7fVPP2EFGu71UkfnqhYXDYH566qy47CnJDgvs8u QmWATWQ7fVPP2EFGu71UkfnqhYXDYH566qy47CnJDgvs8u
Decoded from base58, the multihash is:
12 - 20 - 74410577111096cd817a3faed78630f2245636beded412d3b212a2e09ba593ca
which follows the multihash layout:
<hash-type> - <hash-length> - <hash-digest>
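To make that layout concrete, here is a minimal sketch that splits the multihash into its three fields (my addition, assuming the third-party base58 package, which is not part of the post):

import base58  # pip install base58

multihash = "QmWATWQ7fVPP2EFGu71UkfnqhYXDYH566qy47CnJDgvs8u"
raw = base58.b58decode(multihash)

hash_type, hash_length, digest = raw[0], raw[1], raw[2:]
print(hex(hash_type))  # 0x12 -> sha2-256
print(hash_length)     # 32 (0x20 bytes)
print(digest.hex())    # 74410577...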
Retrieving the content (as ipfs cat would) via the HTTP API:
$ curl "https://ipfs.infura.io:5001/api/v0/object/data?arg=QmWATWQ7fVPP2EFGu71UkfnqhYXDYH566qy47CnJDgvs8u"
Hello World
So I was wondering: how does IPFS's decoding work?
As far as I know, SHA-256 is a ONE-WAY hash function, right?
Basically, IPFS is a (key, value) storage service. The multihash you get from the ipfs add command is the multihash of the value, and it is also the key for retrieving the value from the IPFS service with the ipfs get or ipfs object commands.
With the HTTP API of the IPFS service, curl "https://ipfs.infura.io:5001/api/v0/object/data?arg=key" works exactly the same as the ipfs object data command.
So it is not about decoding the hash; you are simply getting the value with your key (the multihash).
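For illustration, the same lookup can be scripted. Here is a minimal sketch assuming the requests package and the Infura endpoint from the question (newer Infura gateways may require POST and authentication):

import requests

# Fetch the value stored under the multihash key -- a plain lookup, no decoding.
key = "QmWATWQ7fVPP2EFGu71UkfnqhYXDYH566qy47CnJDgvs8u"
resp = requests.get("https://ipfs.infura.io:5001/api/v0/object/data",
                    params={"arg": key})
print(resp.text)  # Hello World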
I want to send a payload from a producer topic to a consumer topic. I've created the channels locally and tried sending a payload on the producer topic, but the payload is not received on the consumer side.
I think this could be an error in the JSON formatting. I've tried online JSON beautifiers, but that is not helping.
Although it's a very slight chance, there is a possibility that something is wrong with the code and the producer topic is not receiving the payload, but I'm not able to confirm this.
You'll need to show code to solve your specific problem, but here is a simple example using kcat and jq.
Producing
$ kcat -P -b localhost:9092 -t example
{"hello":"world"}
{"hello":"test data"}
Consuming and parsing
$ kcat -b localhost:9092 -C -t example -u | jq -r .hello
world
test data
The Kafka broker will not validate your JSON. The serialization library in your client might. So your issue could be any one of the following:
Your serializer failed, and you aren't catching and logging that exception.
You are not sending enough data for the producer buffer to clear, so you should call the .flush() method on the producer at some point (see the sketch after this list).
You have some Kafka authorization enabled on your cluster, and your producer is failing to connect or produce.
Some other connection setting is wrong in your code.
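As a concrete illustration of the flush and error-callback points, here is a minimal producer sketch using the confluent-kafka Python client (an assumption; the question does not say which client library is in use):

import json
from confluent_kafka import Producer

def delivery_report(err, msg):
    # Surfaces broker/delivery errors instead of silently dropping them.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [{msg.partition()}]")

producer = Producer({"bootstrap.servers": "localhost:9092"})
producer.produce("example",
                 json.dumps({"hello": "world"}).encode("utf-8"),
                 callback=delivery_report)
producer.flush()  # block until the buffered message is actually sent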
First, when I run ipfs --offline block stat <ipfs_hash> and the hash does not exist locally, I get the following message: Error: blockservice: key not found.
Afterwards I run ipfs object stat <ipfs_hash> and get a valid output.
Then I run ipfs --offline block stat <ipfs_hash> again; now it always returns valid information (hence does not give an error) even though the hash is not downloaded. So whether ipfs --offline block stat <ipfs_hash> gives an error message does not indicate whether the given hash is locally downloaded.
How can I resolve this in order to detect whether the requested hash is fully downloaded or not?
I could do something like ipfs refs local | grep <hash>, but I don't want to keep fetching all the local hashes, and that will be slow when hundreds of hashes exist.
Related: https://discuss.ipfs.io/t/how-to-check-is-the-given-ipfs-hash-and-its-linked-hashes-already-downloaded-or-not/7588
ipfs files stat --with-local --size <path> returns the downloaded percentage of the requested IPFS hash. If it is 100.00%, then we can verify that it is fully downloaded into the local IPFS repo.
ipfs files stat <path> - Display file status.
--with-local bool - Compute the amount of the dag that is local, and if possible the total size.
--size bool - Print only size. Implies '--format=<cumulsize>'. Conflicts with other format options.
$ hash="QmPHTrNR9yQYRU4Me4nSG5giZVxb4zTtEe1aZshgByFCFS"
$ ipfs files stat --with-local --size /ipfs/$hash
407624015
Local: 408 MB of 408 MB (100.00%)
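If you need this check programmatically, here is a minimal sketch (my own, not from the IPFS docs) that shells out to the same command and looks for the 100.00% marker:

import subprocess

def is_fully_downloaded(ipfs_hash: str) -> bool:
    # Runs "ipfs files stat --with-local --size /ipfs/<hash>" and checks
    # whether the reported local percentage is 100.00%.
    out = subprocess.run(
        ["ipfs", "files", "stat", "--with-local", "--size", f"/ipfs/{ipfs_hash}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return "(100.00%)" in out

print(is_fully_downloaded("QmPHTrNR9yQYRU4Me4nSG5giZVxb4zTtEe1aZshgByFCFS"))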
I can't manage to read the data from a LuaSocket. If I read more than the available data, the call blocks, waiting until the client decides to close the connection.
https://github.com/StringManolo/LuaServer/blob/main/tmpServer.lua#L216
line, errorStr = clientObj:receive("*a")
I'm using this command to test:
$ curl -X POST -d "a=b" http://localhost:1337 -v
I got the same problem using Chrome to send a request to the Lua server.
I tried reading byte by byte, line by line, everything at once, etc.
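(Note: the blocking is expected. "*a" reads until the peer closes the connection, but curl and Chrome keep the connection open while waiting for a response. The usual workaround is to frame the read yourself: read the header lines first, then exactly Content-Length body bytes. Below is a minimal sketch of that framing in Python; the server in the question is Lua, but the same logic carries over.)

import socket

def read_request(conn: socket.socket) -> bytes:
    # Read until the blank line that terminates the HTTP headers.
    buf = b""
    while b"\r\n\r\n" not in buf:
        buf += conn.recv(1024)
    head, _, body = buf.partition(b"\r\n\r\n")
    # Find the declared body size, then read exactly that many bytes.
    length = 0
    for line in head.split(b"\r\n")[1:]:
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"content-length":
            length = int(value)
    while len(body) < length:
        body += conn.recv(1024)
    return body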
I am working with GCP KMS, and it seems that when I send a file to a GCP bucket (using gsutil cp) it is encrypted.
However, I have a question related to the permission to restore that file from the same bucket using a different service account. I mean, the service account I am using to restore the file from the bucket doesn't have the Decrypt privilege, and even so the gsutil cp works.
My question is whether this is normal behavior, or if I'm missing something?
Let me describe my question:
First of all, I confirm that the default encryption for the bucket is the KEY that I set up previously:
$ gsutil kms encryption gs://my-bucket
Default encryption key for gs://my-bucket:
projects/my-kms-project/locations/my-location/keyRings/my-keyring/cryptoKeys/MY-KEY
Next, with gcloud config, I set a service account, which has "Storage Object Creator" and "Cloud KMS CryptoKey Encrypter" permissions:
$ gcloud config set account my-service-account-with-Encrypter-and-object-creator-permissions
Updated property [core/account].
I send a local file to the bucket:
$ gsutil cp my-file gs://my-bucket
Copying file://my-file [Content-Type=application/vnd.openxmlformats-officedocument.presentationml.presentation]...
| [1 files][602.5 KiB/602.5 KiB]
Operation completed over 1 objects/602.5 KiB.
After sending the file to the bucket, I confirm that the file is encrypted using the KMS key I created before:
$ gsutil ls -L gs://my-bucket
gs://my-bucket/my-file:
Creation time: Mon, 25 Mar 2019 06:41:02 GMT
Update time: Mon, 25 Mar 2019 06:41:02 GMT
Storage class: REGIONAL
KMS key: projects/my-kms-project/locations/my-location/keyRings/my-keyring/cryptoKeys/MY-KEY/cryptoKeyVersions/1
Content-Language: en
Content-Length: 616959
Content-Type: application/vnd.openxmlformats-officedocument.presentationml.presentation
Hash (crc32c): 8VXRTU==
Hash (md5): fhfhfhfhfhfhfhf==
ETag: xvxvxvxvxvxvxvxvx=
Generation: 876868686868686
Metageneration: 1
ACL: []
Next, I set another service account, but this time WITHOUT the Decrypt permission and with the object viewer permission (so that it is able to read files from the bucket):
$ gcloud config set account my-service-account-WITHOUT-DECRYPT-and-with-object-viewer-permissions
Updated property [core/account].
After setting up the new service account (WITHOUT the Decrypt permission), the gsutil command to restore the file from the bucket works smoothly:
$ gsutil cp gs://my-bucket/my-file .
Copying gs://my-bucket/my-file...
\ [1 files][602.5 KiB/602.5 KiB]
Operation completed over 1 objects/602.5 KiB.
My question is whether this is normal behavior. Since the new service account doesn't have the Decrypt permission, shouldn't the gsutil cp to restore the file fail? I mean, isn't the idea of KMS encryption that the second gsutil cp command should fail with a "403 permission denied" error message or something similar?
If I revoke the "Storage Object Viewer" privilege from the second service account, the gsutil command does fail, but that is because it doesn't have permission to read the file:
$ gsutil cp gs://my-bucket/my-file .
AccessDeniedException: 403 my-service-account-WITHOUT-DECRYPT-and-with-object-viewer-permissions does not have storage.objects.list access to my-bucket.
I would appreciate it if someone could give me a hand and clarify the question; specifically, I am not sure whether the command gsutil cp gs://my-bucket/my-file . should work or not.
I think it shouldn't work (because the service account doesn't have the Decrypt permission), or should it work?
This is working correctly. When you use Cloud KMS with Cloud Storage, the data is encrypted and decrypted under the authority of the Cloud Storage service, not under the authority of the entity requesting access to the object. This is why you have to add the Cloud Storage service account to the ACL for your key in order for CMEK to work.
When an encrypted GCS object is accessed, the KMS decrypt permission of the accessor is never used and its presence isn't relevant.
If you don't want the second service account to be able to access the file, remove its read access.
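To see this in practice, here is a minimal sketch using the google-cloud-storage Python client (an assumption; the question only uses gsutil). The caller never contacts Cloud KMS; only its storage permissions are checked:

from google.cloud import storage

# Download a CMEK-protected object. Decryption happens under Cloud
# Storage's authority, so the caller needs no KMS Decrypt permission.
client = storage.Client()
blob = client.bucket("my-bucket").get_blob("my-file")
print(blob.kms_key_name)         # the CMEK recorded in the object's metadata
data = blob.download_as_bytes()  # succeeds with storage.objects.get alone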
By default, Cloud Storage encrypts all object data using Google-managed encryption keys. You can instead provide your own keys. There are two types:
CSEK (customer-supplied encryption keys), which you must supply yourself
CMEK (customer-managed encryption keys), which you create but which are managed by the Cloud KMS service (this is the one you are using).
When you use gsutil cp, you are already using the encryption method behind the scenes. So, as stated in the documentation for Using Encryption Keys:
While decrypting a CSEK-encrypted object requires supplying the CSEK in one of the decryption_key attributes, this is not necessary for decrypting CMEK-encrypted objects because the name of the CMEK used to encrypt the object is stored in the object's metadata.
As you can see, supplying the key is not necessary because its name is already included in the object's metadata, which is what gsutil uses.
If encryption_key is not supplied, gsutil ensures that all data it writes or copies instead uses the destination bucket's default encryption type - if the bucket has a default KMS key set, that CMEK is used for encryption; if not, Google-managed encryption is used.
I am setting up a Hyperledger Sawtooth network. In /etc/sawtooth/validator.toml.example, I saw the following:
# A Curve ZMQ key pair are used to create a secured network based on side-band
# sharing of a single network key pair to all participating nodes.
# Note if the config file does not exist or these are not set, the network
# will default to being insecure.
network_public_key = 'wFMwoOt>yFqI/ek.G[tfMMILHWw#vXB[Sv}>l>i)'
network_private_key = 'r&oJ5aQDj4+V]p2:Lz70Eu0x#m%IwzBdP(}&hWM*'
Can anybody tell me how to create another keypair?
These are the ZMQ message keys used to communicate securely with other nodes.
If you've installed Sawtooth already, python3 and python3-zmq will already be installed and available on your system. Here's an example of creating the keypair in Python:
import zmq

# curve_keypair() generates a new CurveZMQ (Curve25519) keypair;
# both values come back as Z85-encoded bytes.
(public, secret) = zmq.curve_keypair()
print("network_public_key =", public.decode("utf-8"),
      "\nnetwork_private_key =", secret.decode("utf-8"))
Alternatively, you can use a compiled binary tool:
$ sudo apt-get install g++ libzmq3-dev
$ wget https://raw.githubusercontent.com/zeromq/libzmq/master/tools/curve_keygen.cpp
$ g++ curve_keygen.cpp -o curve_keygen -lzmq
$ ./curve_keygen
Copy the public key output to the network_public_key field and the private key output to the network_private_key field in /etc/sawtooth/validator.toml.
The above was from my Sawtooth FAQ at
https://sawtooth.hyperledger.org/faq/validator/#how-do-i-generate-the-network-public-key-and-network-private-key-in-validator-toml