Strange Couchbase view behavior with keys containing zero
I have a database that contains chat messages between users (one copy per user: one message for the sender and one for the recipient). I'm running a Couchbase Server 2.0 cluster with two machines (one Linux and one Windows).
I use the following map function to retrieve messages between users:
function (doc) {
  if (doc.type == "msg") {
    emit([doc.OwnerId, doc.SndrId, doc.Date], {"Date":doc.Date, "OwnerId":doc.OwnerId, "RcptId":doc.RcptId, "SndrId":doc.SndrId, "Text":doc.Text, "Unread":doc.Unread, "id":doc.id, "type":doc.type});
    emit([doc.OwnerId, doc.RcptId, doc.Date], {"Date":doc.Date, "OwnerId":doc.OwnerId, "RcptId":doc.RcptId, "SndrId":doc.SndrId, "Text":doc.Text, "Unread":doc.Unread, "id":doc.id, "type":doc.type});
  }
}
So to retrieve messages (ordered descending by date) from the beginning, I used this startkey and endkey:
startkey=[1,2,{}]
endkey=[1,2,0]
But this way I don't get all the messages. To get all of them I have to use:
startkey=[1,2,{}]
endkey=[1,2]
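For reference, the query is issued against the view HTTP API roughly like this (a sketch: the host, bucket, design document, and view names below are placeholders, and keys must be JSON-encoded):

import json
import requests

# Placeholder host/bucket/design-doc/view names -- adjust for your cluster.
VIEW_URL = "http://localhost:8092/default/_design/messages/_view/by_peer"

params = {
    "descending": "true",                # newest messages first
    "startkey": json.dumps([1, 2, {}]),  # {} collates after any Date value
    "endkey": json.dumps([1, 2]),        # bare prefix matches every Date
}
for row in requests.get(VIEW_URL, params=params).json()["rows"]:
    print(row["key"], row["value"]["Text"])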
Here is an example. With the endkey containing the zero:
{"total_rows":1106,"rows":[
{"id":"msg_8aaca454-5580-4e49-a081-918d8eaba9d6","key":[8,75,1342200837278],"value":{"Date":1342200837278,"OwnerId":8,"RcptId":8,"SndrId":75,"Text":"02","Unread":false,"id":"8aaca454-5580-4e49-a081-918d8eaba9d6","type":"msg"}},
{"id":"msg_49417551-bdc9-477b-b8c2-1f36051bb930","key":[8,75,1342199880920],"value":{"Date":1342199880920,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"4","Unread":false,"id":"49417551-bdc9-477b-b8c2-1f36051bb930","type":"msg"}},
{"id":"msg_2724f077-1e76-4fbc-9a4b-f34cb71e2db0","key":[8,75,1342108023448],"value":{"Date":1342108023448,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"55","Unread":false,"id":"2724f077-1e76-4fbc-9a4b-f34cb71e2db0","type":"msg"}},
{"id":"msg_9cc91327-4ba3-45b5-ab64-f2ca63510e8d","key":[8,75,1341413650113],"value":{"Date":1341413650113,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"3","Unread":false,"id":"9cc91327-4ba3-45b5-ab64-f2ca63510e8d","type":"msg"}},
{"id":"msg_9a386663-8a2b-42d9-ae30-0634a98fe574","key":[8,75,1341413648335],"value":{"Date":1341413648335,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"1","Unread":false,"id":"9a386663-8a2b-42d9-ae30-0634a98fe574","type":"msg"}}
]
}
With the endkey without the zero:
{"total_rows":1106,"rows":[
{"id":"msg_dc4b7758-1f0e-491c-a80d-bf3124dc0ad7","key":[8,75,1342200856186],"value":{"Date":1342200856186,"OwnerId":8,"RcptId":8,"SndrId":75,"Text":"03","Unread":false,"id":"dc4b7758-1f0e-491c-a80d-bf3124dc0ad7","type":"msg"}},
{"id":"msg_8aaca454-5580-4e49-a081-918d8eaba9d6","key":[8,75,1342200837278],"value":{"Date":1342200837278,"OwnerId":8,"RcptId":8,"SndrId":75,"Text":"02","Unread":false,"id":"8aaca454-5580-4e49-a081-918d8eaba9d6","type":"msg"}},
{"id":"msg_b2fe9ca0-aa28-41a6-ab3c-09ece6a5e14a","key":[8,75,1342200787811],"value":{"Date":1342200787811,"OwnerId":8,"RcptId":8,"SndrId":75,"Text":"01","Unread":true,"id":"b2fe9ca0-aa28-41a6-ab3c-09ece6a5e14a","type":"msg"}},
{"id":"msg_49417551-bdc9-477b-b8c2-1f36051bb930","key":[8,75,1342199880920],"value":{"Date":1342199880920,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"4","Unread":false,"id":"49417551-bdc9-477b-b8c2-1f36051bb930","type":"msg"}},
{"id":"msg_0d7dc822-b76c-42f9-87ba-3dc486100526","key":[8,75,1342180778835],"value":{"Date":1342180778835,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"Gcgjvv6gvvbnbvvbh B2B bb cb B2B","Unread":false,"id":"0d7dc822-b76c-42f9-87ba-3dc486100526","type":"msg"}},
{"id":"msg_6b26a65f-68e1-4728-87f1-59ea5e0f0c5d","key":[8,75,1342114611144],"value":{"Date":1342114611144,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"546546546546546546546546546546","Unread":false,"id":"6b26a65f-68e1-4728-87f1-59ea5e0f0c5d","type":"msg"}},
{"id":"msg_f89b09ac-ccf1-4958-85c0-259b0b68752b","key":[8,75,1342108583566],"value":{"Date":1342108583566,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"123","Unread":false,"id":"f89b09ac-ccf1-4958-85c0-259b0b68752b","type":"msg"}},
{"id":"msg_2724f077-1e76-4fbc-9a4b-f34cb71e2db0","key":[8,75,1342108023448],"value":{"Date":1342108023448,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"55","Unread":false,"id":"2724f077-1e76-4fbc-9a4b-f34cb71e2db0","type":"msg"}},
{"id":"msg_9cc91327-4ba3-45b5-ab64-f2ca63510e8d","key":[8,75,1341413650113],"value":{"Date":1341413650113,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"3","Unread":false,"id":"9cc91327-4ba3-45b5-ab64-f2ca63510e8d","type":"msg"}},
{"id":"msg_847b2901-95e3-49b9-8756-e495872558a8","key":[8,75,1341413649161],"value":{"Date":1341413649161,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"2","Unread":false,"id":"847b2901-95e3-49b9-8756-e495872558a8","type":"msg"}}
]
}
Can anyone explain why it doesn't return all the messages in the first case?
Thanks to Filipe Manana from the Couchbase team, who answered this question.
The issue is caused by Erlang OTP R14B03 (and all prior versions) on 64-bit platforms.
Here is Filipe's commit that fixes the bug: https://github.com/erlang/otp/commit/03d8c2877342d5ed57596330a61ec0374092f136
For more details, see http://www.couchbase.com/forums/thread/couchbase-view-not-rerurning-all-values
NB: CouchDB and Couchbase are about as similar as Membase and MySQL, that is, products that start with the same prefix. Best not to get them confused!
It might be related to whether sorting is done via ASCII (what you are expecting) vs. Unicode collation. There is some more information here: http://wiki.apache.org/couchdb/View_collation, though that applies to CouchDB, not Couchbase. Maybe that helps?
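To illustrate the type ordering that page describes: view keys are compared element by element, and elements of different types are ordered by type first. A rough Python sketch of that rule (an illustration only, not Couchbase's actual comparator):

# CouchDB-style view collation ranks elements by type before comparing
# values: null < booleans < numbers < strings < arrays < objects. This is
# why {} works as a "highest possible" sentinel in a compound key, and why
# the bare prefix [1,2] covers a wider range than [1,2,0].
def type_rank(value):
    if value is None:
        return 0
    if isinstance(value, bool):  # test before int: bool is an int subtype
        return 1
    if isinstance(value, (int, float)):
        return 2
    if isinstance(value, str):   # real collation compares strings via ICU, not ASCII
        return 3
    if isinstance(value, list):
        return 4
    return 5                     # objects such as {} sort last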
Related
How to get SonarQube metrics (i.e. vulnerabilities: A, B, C, D, E) ratings via the web API
Please assist with the above. I have successfully implemented a web API call to SonarQube and I am able to get values for the metrics I specify (ref: https://gazelle.ihe.net/sonar/web_api/api/measures). The problem is that I want to get the ratings (i.e. A, B, C, D) for each metric, and the API only returns the values, not the ratings. I also tried using component_tree and type, but the ratings are not returned. Please assist :)
The answer to this question is as follows: the 'vulnerabilities' ratings (A, B, C, D, E) are represented by the metric key 'security_rating', since vulnerabilities fall under Security in the SonarQube web API. Request call: sonarqubeurl/api/measures/component?metricKeys=security_rating. The 'security_rating' value is structured as 1=A, 2=B, 3=C, 4=D, 5=E; it returns numbers (corresponding to the letters) instead of the letters themselves. I hope this helps others as well.
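For illustration, here's a rough Python sketch of pulling that metric and translating the number back into a letter (the server URL, project key, and token below are placeholders):

import requests

# Placeholder server URL, project key, and token -- replace with your own.
SONAR_URL = "https://sonarqube.example.com"
PROJECT_KEY = "my-project"

resp = requests.get(
    f"{SONAR_URL}/api/measures/component",
    params={"component": PROJECT_KEY, "metricKeys": "security_rating"},
    auth=("my_token", ""),  # token goes in the username slot, password empty
)
resp.raise_for_status()

# The API returns the rating as a number like "1.0"; map it back to a letter.
RATING_LETTERS = {1: "A", 2: "B", 3: "C", 4: "D", 5: "E"}
for measure in resp.json()["component"]["measures"]:
    print(measure["metric"], RATING_LETTERS[int(float(measure["value"]))])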
Zabbix Agent 3.4.9 active log file monitoring: "Not supported: too many parameters"
I'm trying to monitor the log file /var/log/neo4j/debug.log. I'm looking for the text: Application threads blocked for ######ms. I have devised a regular expression for this: Application threads blocked for (\d+)ms. We want to skip old info, so skip is set as the mode. I want to pull out the number of ms so that the trigger can alert on blockages > 150ms, so \1 must be set as the output parameter. I constructed the key as:
log[/var/log/neo4j/debug.log,Application threads blocked for (\d+)ms,,,skip,\1]
in accordance with:
log[/path/to/file/file_name,<regexp>,<encoding>,<maxlines>,<mode>,<output>,<maxdelay>]
Type of information is Log, the update interval is 30s, and the history storage period is 90d. Timestamps appear in the log file as 2018-10-03 13:29:20.460+0000, and my timestamp format is yyyypMMpddphhpmmpss. I have tried a bunch of different things over the past week trying to get it to stop showing a "Too many parameters" error in the GUI, without success. I'm completely lost at this point. We have 49 other items working correctly (all the others are passive). Active checks are enabled in zabbix_agentd.conf.
I know this is an old thread, but it took me a while to solve this problem, so I'd like to share in the hope it helps. According to the official Zabbix documentation, the parameter usage for the log (and logrt) keys should be:
logrt[file_regexp,<regexp>,<encoding>,<maxlines>,<mode>,<output>,<maxdelay>]
So if we were to use only the "skip" parameter, the item key would look like:
logrt[MyLogFile.log,,,,skip,,]
Nevertheless, this triggers the "too many parameters" error. To solve the issue, I configured the key in my environment with only one comma after the parameter, like this:
logrt["MyLogFile.log","MyFilter",,,skip,]
That's it... hope it helps someone else.
Update TTL for all records in Aerospike
I am stuck in a situation where I initialised a namespace with a default-ttl of 30 days. There are about 5 million records with that (30-day) TTL value. Actually, my requirement is that the TTL should be zero (0), but the 30-day TTL was kept without my realising it. So now I want to update the previous (old) 5 million records with the new TTL value (zero). I've checked/tried "set-disable-eviction true", but it is not working; data is still being removed according to the (old) TTL value. How do I overcome this? (And can I retrieve the removed data? How?) Someone please help me.
First, eviction and expiration are two different mechanisms. You can disable evictions in various ways, such as the set-disable-eviction config parameter you've used. You cannot disable the cleanup of expired records. There's a good knowledge base FAQ, "What are Expiration, Eviction and Stop-Writes?". Unfortunately, expired records that have been cleaned up are gone if their void time is in the past. If those records were merely evicted (i.e. removed before their void time due to crossing the namespace high-water mark for memory or disk), you can cold restart your node, and the records with a future TTL will come back. They won't return if they were durably deleted or if their TTL is in the past (such records get skipped). As for resetting TTLs, the easiest way would be to do this through a record UDF that is applied to all the records in your namespace using a scan. The UDF for your situation would be very simple:

ttl.lua

function to_zero_ttl(rec)
  local rec_ttl = record.ttl(rec)
  if rec_ttl > 0 then
    record.set_ttl(rec, -1)
    aerospike:update(rec)
  end
end

In AQL:

$ aql
Aerospike Query Client
Version 3.12.0
C Client Version 4.1.4
Copyright 2012-2017 Aerospike. All rights reserved.
aql> register module './ttl.lua'
OK, 1 module added.
aql> execute ttl.to_zero_ttl() on test.foo
Using a Python script would be easier if you have more complex logic, with filters etc. (this targets Aerospike v6 and Python SDK v7):

import time
import aerospike
from aerospike_helpers.operations import operations

# Assumes `client` is a connected aerospike client and that `namespace`
# and `set_name` are already defined.
zero_ttl_operation = [operations.touch(-1)]  # TTL of -1 means never expire
query = client.query(namespace, set_name)
query.add_ops(zero_ttl_operation)
policy = {}
job = query.execute_background(policy)
print(f'executing job {job}')
while True:
    response = client.job_info(job, aerospike.JOB_SCAN, policy={'timeout': 60000})
    print(f'job status: {response}')
    if response['status'] != aerospike.JOB_STATUS_INPROGRESS:
        break
    time.sleep(0.5)
Simperium Data Dictionary or Decoder Ring for Return Value on "all" call?
I've looked through all of the Simperium API docs for all of the different programming languages and can't seem to find this. Is there any documentation for the data returned from an ".all" call? E.g.:

api.todo.all(:cv=>nil, :data=>false, :username=>false, :most_recent=>false, :timeout=>nil)

For example, this is some data returned:

{"ccid"=>"10101010101010101010101010110101010",
 "o"=>"M",
 "cv"=>"232323232323232323232323232",
 "clientid"=>"ab-123123123123123123123123",
 "v"=>{
   "date"=>{"o"=>"+", "v"=>"2015-08-20T00:00:00-07:00"},
   "calendar"=>{"o"=>"+", "v"=>false},
   "desc"=>{"o"=>"+", "v"=>"<p>test</p>\r\n"},
   "location"=>{"o"=>"+", "v"=>"Los Angeles"},
   "id"=>{"o"=>"+", "v"=>43}
 },
 "ev"=>1,
 "id"=>"abababababababababababababab/10101010101010101010101010110101010"}

I can figure out some of it just from context or from the name of the key, but a lot of it is guesswork and trial and error. The one that concerns me is the value returned for the "o" key. I assume that a value of "M" is modify and a value of "+" is add. I've also run into "-" for delete, and just recently discovered that there is also a "! '-'" which is also a delete, but I don't know what else it signifies. What other values can be returned in the "o" key? Are there other keys/values that can be returned but are rare? Is there documentation that details what can be returned (that would be the most helpful)? If it matters, I am using the Ruby API, but I think this is a question that, if answered, can be helpful for all the APIs.
The response you are seeing is a list of all of the changes that have occurred in the given bucket since some point in its history. In the case where cv is blank, it tries to get the full history. You can find some of the details in the protocol documentation, though it's incomplete and focused on the WebSocket message syntax (the operations are the same, however, as with the HTTP API). The information provided by the v parameter is the result of applying the JSON-diff algorithm to the data between changes. With this diff information you can reconstruct the data at any given version as the changes stream in.
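As a rough sketch of how a client might consume those entries (only the op codes mentioned in the question are handled; see the protocol docs for the full set):

# Minimal sketch: apply one change entry from the .all feed to a local copy
# of an entity. Handles only the ops visible in the question ("M" = modify,
# "+" = add, "-" = delete); the protocol documentation covers the rest.
def apply_change(entity, change):
    if change["o"] == "-":            # the whole entity was deleted
        return None
    if change["o"] == "M":            # "v" holds a per-field diff
        for field, diff in change["v"].items():
            if diff["o"] == "+":      # field added (or set to a new value)
                entity[field] = diff["v"]
            elif diff["o"] == "-":    # field removed
                entity.pop(field, None)
    return entity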
Atomicity in MongoDB when transferring money
I'm new to MongoDB. I'm making a simple banking application; an account can transfer money to other accounts. I designed the Account collection like this:

account {
  name: A
  age: 24
  money: 100
}

account {
  name: B
  age: 22
  money: 300
}

Assuming that user A transfers $100 to user B, there are 2 operations:
1) decrease $100 for user A // update for document A
2) increase $100 for user B // update for document B

It is said that atomicity applies only to a single document, not to multiple documents. I have an alternative design:

Bank {
  name:
  address:
  Account: [
    { name: A, age: 22, money: SS },
    { name: B, age: 23, money: S1S }
  ]
}

I have some questions:
If I use the latter design, how can I write the transaction query (can I use the findAndModify() function)?
Does MongoDB support transaction operations like MySQL (InnoDB)?
Some people tell me that using MySQL for this project is the best way, and to only use MongoDB to save transaction information (using an extra collection named Transaction_money to save it). If I use both MongoDB and MySQL (InnoDB), how can I make the operations below atomic (fail or succeed as a whole)?
> 1) -$100 for user A
> 2) +$100 for user B
> 3) save transaction information like

transaction {
  sender: A
  receiver: B
  money: 100
  date: 05/04/2013
}

Thanks so much.
I am not sure this is what you are looking for:

db.bank.update({name: "name"}, {"$inc": {"Account.0.money": -100, "Account.1.money": 100}})

The update() operation satisfies the A, C, and I properties of ACID. Durability (D) depends on the mongod and application configuration when making the query. You may prefer to use findAndModify(), which won't yield its lock on a page fault. MongoDB provides transactional behaviour only within a single document. I can't understand why, if your application requirements are this simple, you are trying to use MongoDB. No doubt it's a good data store, but I guess MySQL would satisfy all your requirements. Just FYI, there is a doc that covers exactly the problem you are trying to solve: http://docs.mongodb.org/manual/tutorial/perform-two-phase-commits/ But I won't recommend you use it, because a single query (transferring money) gets turned into a sequence of queries. Hope it helped.
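For completeness, the same single-document update issued from Python would look like this (a sketch, assuming a local mongod and the bank collection from the question):

from pymongo import MongoClient

client = MongoClient()        # assumes a locally running mongod
bank = client["test"]["bank"]

# Both increments target the same document, so they apply atomically together.
bank.update_one(
    {"name": "name"},
    {"$inc": {"Account.0.money": -100, "Account.1.money": 100}},
)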
If I use the latter design, how can I write the transaction query (can I use the findAndModify() function)?

There are a lot of misconceptions about what findAndModify does; it is not a transaction. That being said, it is atomic, which is quite different. The reason for two-phase commits and transactions in this sense is so that if something goes wrong you can fix it (or at least have a 99.99% chance that corruption hasn't occurred). The problem with findAndModify is that it has no such transactional behaviour. Not only that, but MongoDB only guarantees atomicity at the single-document level, which means that if your operation changes multiple documents in the same call, your database could pass through an inconsistent in-between state. That, of course, won't do for money handling. It should be noted that MongoDB is not great in these scenarios and you are trying to use MongoDB away from its purpose. With this in mind, it is clear you have not researched your question well, as your next question shows:

Does MongoDB support transaction operations like MySQL (InnoDB)?

No, it does not. With all that background info aside, let's look at your schema:

Bank {
  name:
  address:
  Account: [
    { name: A, age: 22, money: SS },
    { name: B, age: 23, money: S1S }
  ]
}

It is true that you could get a transaction-like query here, whereby the document could never exist in an in-between state, only in one state or the other; as such no inconsistencies would arise. But then we have to talk about the real world. A MongoDB document is limited to 16MB. I do not think you would fit an entire bank into one document, so this schema is badly planned and useless. Instead you would require (maybe) a document per account holder in your bank, with a subdocument for their accounts. With this you now have the problem that inconsistencies can occur. MongoDB, as @Abhishek states, does support client-side two-phase commits, but these are not going to be as good as server-side support within the database itself, whereby the mongod can take safety precautions to ensure that the data is consistent at all times. So coming back to your last question:

Some people tell me that using MySQL for this project is the best way, and to only use MongoDB to save transaction information (using an extra collection named Transaction_money to save it). If I use both MongoDB and MySQL (InnoDB), how can I make the operations below atomic (fail or succeed as a whole)?

I would personally use something a bit more robust than MySQL; I've heard MSSQL is quite good for this.
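To make the client-side two-phase commit mentioned above concrete, here is a heavily simplified Python sketch of the sequence from the linked tutorial (recovery and rollback steps are omitted; the collection and field names are assumptions):

from pymongo import MongoClient

db = MongoClient()["bank"]  # assumes a locally running mongod

# 1. Record the intent in a transactions collection.
txn_id = db.transactions.insert_one(
    {"sender": "A", "receiver": "B", "amount": 100, "state": "initial"}
).inserted_id

# 2. Mark it pending, then apply both sides, tagging each account with the
#    transaction id so an interrupted transfer can be detected later.
db.transactions.update_one({"_id": txn_id}, {"$set": {"state": "pending"}})
db.accounts.update_one(
    {"name": "A", "pendingTransactions": {"$ne": txn_id}},
    {"$inc": {"money": -100}, "$push": {"pendingTransactions": txn_id}},
)
db.accounts.update_one(
    {"name": "B", "pendingTransactions": {"$ne": txn_id}},
    {"$inc": {"money": 100}, "$push": {"pendingTransactions": txn_id}},
)

# 3. Commit: clear the pending markers and close out the transaction.
db.transactions.update_one({"_id": txn_id}, {"$set": {"state": "committed"}})
db.accounts.update_many(
    {"name": {"$in": ["A", "B"]}},
    {"$pull": {"pendingTransactions": txn_id}},
)
db.transactions.update_one({"_id": txn_id}, {"$set": {"state": "done"}})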