Checking Expiration of Document in Couchbase

We are running Couchbase Community Edition 4.5.1, and recently went through the process of adding a TTL to documents where none previously existed. After doing so, we pulled the CSV backup of the database and noticed that a large number of the documents still return an expiration of 0. We aren't sure whether the issue is a failure to update or a problem in the data pull.
Unfortunately, Couchbase has an issue where the expiration is not returned in the document metadata through N1QL, so we have been unable to independently confirm whether the CSV is correct for any given document.
Is there another way to get the current TTL of a document, either through the Console UI or an API call?

You can use the cbc utility included in libcouchbase to fetch the TTL with --keystats. For example:
$ cbc-stats --keystats -u Administrator -P - -U couchbase://localhost/travel-sample airline_112
Bucket password:
localhost:11210 key_is_dirty false
localhost:11210 key_exptime 0
localhost:11210 key_flags 33554432 (cbc: converted via htonl)
localhost:11210 key_cas 1503621971151421440
localhost:11210 key_vb_state active
And note that in Couchbase Server 5.0, the Sub-Document API has been enhanced so you can fetch the TTL as a virtual XATTR. For example:
$ cbc-subdoc -u Administrator -P - -U couchbase://localhost/travel-sample
Bucket password:
subdoc> get -x $document airline_112
airline_112 CAS=0x14ddf0375af40000
0. Size=188, RC=0x00 Success (Not an error)
{"CAS":"0x14ddf0375af40000","vbucket_uuid":"0x0000e976b253ad5c","seqno":"0x0000000000000001","exptime":0,"value_bytes":118,"datatype":["json"],"deleted":false,"last_modified":"1503621971"}
1. Size=118, RC=0x00 Success (Not an error)
{"callsign":"FLYSTAR","country":"United Kingdom","iata":"5W","icao":"AEU","id":112,"name":"Astraeus","type":"airline"}
subdoc> get -x $document.exptime airline_112
airline_112 CAS=0x14ddf0375af40000
0. Size=1, RC=0x00 Success (Not an error)
0
1. Size=118, RC=0x00 Success (Not an error)
{"callsign":"FLYSTAR","country":"United Kingdom","iata":"5W","icao":"AEU","id":112,"name":"Astraeus","type":"airline"}
subdoc> get -x $document.exptime -x $document.value_bytes airline_112
airline_112 CAS=0x14ddf0375af40000
0. Size=1, RC=0x00 Success (Not an error)
0
1. Size=3, RC=0x00 Success (Not an error)
118
2. Size=118, RC=0x00 Success (Not an error)
{"callsign":"FLYSTAR","country":"United Kingdom","iata":"5W","icao":"AEU","id":112,"name":"Astraeus","type":"airline"}
You can also fetch these XATTRs programmatically from an SDK, which can be handy for unit tests. Documentation for these features is available.
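For instance, here is a minimal sketch using the Couchbase Python SDK 2.x (an illustration, not from the original answer), assuming a Server 5.0 cluster on localhost with the travel-sample bucket and the airline_112 key used above:

# Sketch: read the exptime virtual XATTR with the Couchbase Python SDK 2.x.
# Connection details and credentials are placeholders; adjust for your cluster.
from couchbase.cluster import Cluster, PasswordAuthenticator
import couchbase.subdocument as SD

cluster = Cluster('couchbase://localhost')
cluster.authenticate(PasswordAuthenticator('Administrator', 'password'))
bucket = cluster.open_bucket('travel-sample')

# Fetch the $document.exptime virtual XATTR alongside a regular document path.
rv = bucket.lookup_in('airline_112',
                      SD.get('$document.exptime', xattr=True),
                      SD.get('name'))

print('exptime:', rv[0])  # 0 means the document has no expiration
print('name:', rv[1])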

Related

OpenSearch Installation | securityadmin.sh | UnavailableShardsException[[.opendistro_security][0] primary shard is not active Timeout

We installed OpenSearch on 4 VMs (1 coordinating node, 1 master node, and 2 data nodes) according to the documentation at https://opensearch.org/docs/latest/opensearch/cluster/.
When we log in to the OpenSearch URL or query it via curl, we get the following message:
e.g.
[apm@IR-APM-DEV-MN1 config]$ curl -XGET https:// :9200/_cat/plugins?v -u 'admin:admin' --insecure
OpenSearch Security not initialized.
Based on that and on the message we saw, "[opensearch-master] Not yet initialized (you may need to run securityadmin)", we executed the securityadmin script as follows:
./securityadmin.sh -cd ../securityconfig/ -nhnv -cacert ../../../config/root-ca.pem -cert ../../../config/kirk.pem -key ../../../config/kirk-key.pem -h -cn apm-cluster-1 -arc -diagnose
And we got the following error message, for example:
Will update '_doc/config' with ../securityconfig/config.yml
FAIL: Configuration for 'config' failed because of UnavailableShardsException[[.opendistro_security][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.opendistro_security][0]] containing [index {[.opendistro_security][_doc][config], source[n/a, actual length: [3.7kb], max length: 2kb]}] and a refresh]]
....
Can anyone suggest how to overcome these errors (primary shard is not active Timeout / increase max length)?
Thanks,
Noam
You can simply disable the security plugin:
cd /path/to/opensearch-1.2.4
sudo nano config/opensearch.yml
Add the line below:
plugins.security.disabled: true
If that is not an option, generate the keys and follow the steps given in the official documentation:
https://opensearch.org/docs/latest/opensearch/install/tar/
Thank you.

How to read data from socket in Lua until no more data is available?

I can't manage to read the data from a LuaSocket client socket. If I read more than the available data, the call blocks and waits until the client decides to close the connection.
https://github.com/StringManolo/LuaServer/blob/main/tmpServer.lua#L216
line, errorStr = clientObj:receive("*a")
I'm using this command to test:
$ curl -X POST -d "a=b" http://localhost:1337 -v
I get the same problem when using Chrome to send a request to the Lua server.
I tried reading byte by byte, line by line, everything at once, etc.

Error in uploading data to HDFS Fiware Cosmos global Instance

I am trying to launch a Cosmos instance following this document: http://fiware-cosmos.readthedocs.io/en/latest/quick_start_guide_new/index.html.
The first step executed successfully and I got an access token, which I used to create a Cosmos account according to the second step, receiving this response:
{"organizations": [], "displayName": "varun143", "roles": [], "app_id": "45bed173b2f8482aa15b22556c057112", "isGravatarEnabled": false, "email": "manchandavishal143#gmail.com", "id": "varun143"}.
Now I follow the third step, i.e. creating a new directory, using this command: curl -X PUT "http://storage.cosmos.lab.fiware.org:14000/webhdfs/v1/user/varun143/testdir?op=MKDIRS&user.name=varun143" -H "X-Auth-token: my access token" | python -m json.tool and get this response:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (52) Empty reply from server
No JSON object could be decoded.
Then I create a testdata.txt file locally, write some data to it as per the document, and run this command: curl -v -X PUT -T testdata.txt "http://storage.cosmos.lab.fiware.org:14000/webhdfs/v1/user/varun143/testdir/testdata.txt?op=CREATE&user.name=varun143" -H "Content-Type: application/octet-stream" -H "X-Auth-token: my access token". I get the response shown in this image, https://imgur.com/uiWU5qr, which is not as per the document. Where am I going wrong, and how do I resolve this? Also, how can I access the CLI or GUI of this instance? Thanks in advance.
You are using the old Cosmos documentation; the current documentation is available here: https://github.com/ging/fiware-cosmos/blob/master/doc/manuals/quick_start_guide_new.md. Moreover, you need to send an email with the information that you got in step 2 so that the platform administrator can create your account; keep in mind that we are currently on summer holidays, so the creation of your account may suffer some delay. It is also important to know that the cloud platform is only for testing purposes and supports only batch processing. As a piece of personal advice: if you don't have a real big data problem, don't try to solve it using big data technologies, because you are only adding complexity.

Ethereum Go-ethereum pending transactions

GETH VERSION
Geth
Version: 1.8.10-stable
Git Commit: eae63c511ceafab14b92e274c1b18bf1700e2d3d
Architecture: amd64
Protocol Versions: [63 62]
Network Id: 1
Go Version: go1.10.1
Operating System: linux
GOPATH=/home/myuser/go
GOROOT=/usr/lib/go-1.10
The node is running with:
geth --testnet --rpc --rpcapi "eth,net,web3,personal,parity" --syncmode="light"
Problem 1:
When I try to run our node with --syncmode="full" or --syncmode="fast", CurrentBlock always lags behind HighestBlock by approximately 64 blocks, so the node is running with --syncmode="light".
My goal is to find all pending transactions involving my accounts.
Steps to reproduce
eth.getBlock('pending').transactions
["0x2e6d5273fa29e892313166b8de458793fb0728f13a9077ab2295c1dc2371529c", "0xcc2e659ea3f8b6f6c1b812d559198427b0b2adf0316213c903e08c277384a1c6", "0x6a194f095f3b9ee65fa2eb9765617edda8ea99c2f8ad3e09d03d61735acd3a34", "0x604f53727f6ad056d82f57ce07b4e28cfae16c098dca909bffeaa51fb3584843"]
Curl request for eth_getTransactionByBlockHashAndIndex:
curl -X POST -H "Content-Type: application/json" --data '{"id":8,"jsonrpc":"2.0","method":"eth_getTransactionByBlockHashAndIndex","params":["0xc0a9a6075081add64ac2f69b52f40de7b3d726281fc00a9ab23f90c892ae3346", "0x0"]}' http://localhost:8545
It returns:
{ "jsonrpc":"2.0",
"id":8,
"result":
{"blockHash":"0xc0a9a6075081add64ac2f69b52f40de7b3d726281fc00a9ab23f90c892ae3346",
"blockNumber":"0x344c3b",
"from":"0x40e0b46c7a461c02ab6e70d5536e23a9d727f9f8",
"gas":"0x927c0",
"gasPrice":"0x218711a00",
"hash":"0x2e6d5273fa29e892313166b8de458793fb0728f13a9077ab2295c1dc2371529c",
"input":"0xfe6362ae000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000a000000000000000000000000000000000000000000000000000000000000000e0000000000000000000000000000000000000000000000000000000000000000c3332353136303935323130370000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000c3037363832333538353537300000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002e516d54755334664370664531563444784c547079507074596874664845446f5734774c745159675837375a376378000000000000000000000000000000000000",
"nonce":"0x161",
"to":"0xabe486e0ad5319d8047d5ef83e8c1cb1dce0d8c5",
"transactionIndex":"0x0","value":"0x0","v":"0x2a",
"r":"0xd1c106a22480e173784267c4da3db1707e2efd7598d9c55c6e060842d8e42390",
"s":"0x15786e1f7f4bd53e402d4911b0334b38973609415868687f171501b64770331e"}}
It works perfectly. Now let's look up the same transaction using eth_getTransactionByHash:
curl -X POST -H "Content-Type: application/json" --data '{"id":8,"jsonrpc":"2.0","method":"eth_getTransactionByHash","params":["0x2e6d5273fa29e892313166b8de458793fb0728f13a9077ab2295c1dc2371529c"]}' http://localhost:8545
It returns
{"jsonrpc":"2.0","id":8,"result":null}
I expected to get the same result, but I got null!
Any ideas? Or is there another way to get incoming pending transactions?
You can always check for transaction details using getTransactionReceipt.
Could you please try this and report back if you still get null sometimes?
Also, regarding your question on getting pending transactions: note that these pending transactions are not necessarily for your account or your node (if you haven't posted any). Since you are connected to the public Ethereum network, you are seeing transactions that will, at some later point in time, be picked up by a miner and added to the ledger. To serve your purpose, you could write a backend that continuously polls for pending transactions and saves the ones relevant to you. I hope I have answered your question.
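As a rough illustration (not part of the original answer), such a polling backend could look like the following web3.py sketch; the node URL, polling interval, and watch list are assumptions:

# Sketch: poll the pending block and keep transactions touching our own addresses.
# Assumes a local geth node exposing the RPC API on :8545 and web3.py v5 or later.
import time
from web3 import Web3, HTTPProvider

w3 = Web3(HTTPProvider('http://localhost:8545'))
watched = {'0x40e0b46c7a461c02ab6e70d5536e23a9d727f9f8'}  # hypothetical watch list

seen = set()
while True:
    block = w3.eth.get_block('pending', full_transactions=True)
    for tx in block.transactions:
        if tx['hash'] in seen:
            continue
        sender = tx['from'].lower()
        recipient = tx['to'].lower() if tx['to'] else ''
        if sender in watched or recipient in watched:
            seen.add(tx['hash'])
            print('pending tx:', tx['hash'].hex(), sender, '->', recipient)
    time.sleep(5)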

MongoDB import of geojson data fails

I'm trying to import some GeoJSON data into MongoDB. The entire file is about 24 MB, so in theory the 16 MB per-document limit shouldn't be exceeded. But it looks like it's complaining about the size. I have tried the solutions offered here, but none seems to work. I type the command:
mongoimport -d userdata -c countries < countries.geojson
and I get
2017-11-17T01:09:29.561+0400 connected to: localhost
2017-11-17T01:09:31.055+0400 num failures: 1
2017-11-17T01:09:31.055+0400 Failed: lost connection to server
2017-11-17T01:09:31.055+0400 imported 0 documents
and the mongod logs show (after backtrace):
2017-11-17T01:09:31.055+0400 I - [conn153] AssertionException handling request, closing client connection: 10334 BSONObj size: 17756597 (0x10EF1B5) is invalid. Size must be between 0 and 16793600(16MB) First element: insert: "countries"
2017-11-17T01:09:31.055+0400 I - [conn153] end connection 127.0.0.1:61806 (2 connections now open)
I have tried
mongoimport -d userdata -c countries < countries.geojson --batchSize 1
and
mongoimport -d userdata -c countries -j 4 < countries.geojson
based on other similar answers but got the same result, with the same response and logs.
Does anyone have clues as to what's going on here? Should I break the GeoJSON into two files and give that a shot? I thought the 16 MB limit was on individual documents, not on collections or collection imports.
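For context (not part of the question), one way to check whether any single feature really exceeds the 16 MB BSON limit, and to insert the features one per document, is something like this Python sketch; it assumes countries.geojson is a standard FeatureCollection and that a reasonably recent pymongo is installed:

# Sketch: measure each GeoJSON feature's BSON size and insert features individually.
# Assumes countries.geojson is a FeatureCollection; database and collection names match the question.
import json
import bson
from pymongo import MongoClient

with open('countries.geojson') as f:
    features = json.load(f)['features']

limit = 16 * 1024 * 1024
for feature in features:
    size = len(bson.encode(feature))  # bson.encode is available in pymongo 3.9+
    if size > limit:
        print('oversized feature:', feature.get('properties', {}).get('name'), size)

client = MongoClient('mongodb://localhost:27017')
client.userdata.countries.insert_many(features)

If every feature is well under 16 MB, that would suggest the import is sending the whole FeatureCollection as a single document rather than one document per feature.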