I'm using Keycloak with an external OAuth server as the identity provider.
When I try to log in, Keycloak sends a backchannel authentication request, to which the OAuth server replies with a JWT.
When decoding that JWT, Keycloak fails with this exception:
Caused by: com.fasterxml.jackson.core.JsonParseException: Numeric value (1539167070926) out of range of int
at [Source: (byte[])"{"sub":"20008203","aud":"Test-Keycloak","amr":["pwd","mobile"],"iss":"oauth","exp":1539167070926,"iat":1539163470926,"jti":"d24e5a11-1931-45a7-b77a-0c935ea40df8"}"; line: 1, column: 97]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1804)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:663)
at com.fasterxml.jackson.core.base.ParserBase.convertNumberToInt(ParserBase.java:869)
at com.fasterxml.jackson.core.base.ParserBase._parseIntValue(ParserBase.java:801)
at com.fasterxml.jackson.core.base.ParserBase.getIntValue(ParserBase.java:645)
at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer.deserialize(NumberDeserializers.java:472)
at com.fasterxml.jackson.databind.deser.std.NumberDeserializers$IntegerDeserializer.deserialize(NumberDeserializers.java:452)
at com.fasterxml.jackson.databind.deser.impl.FieldProperty.deserializeAndSet(FieldProperty.java:136)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288)
... 80 more
It seems like the exp value is too big. Is Keycloak failing to decode it? Is my OAuth server sending a bad value? What can I do to have that expiration decoded correctly?
The fact is that since the JWT draft was published (Feb 2014), the spec has changed regarding the "exp" timestamp. The final spec, published in May 2015 (https://www.rfc-editor.org/rfc/rfc7519#section-4.1.4), simply requires a numeric date.
So the RFC does not dictate the width of the numeric representation, and I assume that means (from the side of an application such as Keycloak) that we should use a wide-enough integer (64-bit) so that we do not overflow.
This situation can become more pressing in the near future because of the Y2038 problem (a signed 32-bit integer cannot represent dates after roughly 19 Jan 2038).
As far as I understand, the identity provider I'm trying to use doesn't honor the JWT spec. Indeed, the spec states the following:
The "exp" (expiration time) claim identifies the expiration time on or after which the JWT MUST NOT be accepted for processing. The processing of the "exp" claim requires that the current date/time MUST be before the expiration date/time listed in the "exp" claim. Implementers MAY provide for some small leeway, usually no more than a few minutes, to account for clock skew. Its value MUST be a number containing an IntDate value. Use of this claim is OPTIONAL.
In other words, when exp is set, it must be a numeric value holding the number of seconds since the Unix epoch. A value like 1539167070926 looks like milliseconds, not seconds (note that exp and iat differ by exactly 3600000, i.e. one hour in milliseconds).
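A quick sanity check (a standalone Python sketch, not Keycloak code) shows both halves of the problem: the token's exp overflows a signed 32-bit int, which is exactly the range Jackson's IntegerDeserializer enforces, while interpreting it as milliseconds gives a perfectly ordinary date:

```python
import datetime

exp = 1539167070926  # value from the rejected token

# As seconds since the epoch this overflows a signed 32-bit int,
# the range that Jackson's int deserialization enforces.
INT32_MAX = 2**31 - 1
print(exp > INT32_MAX)  # True: cannot be stored in an int field

# Interpreted as milliseconds, it is a plausible current date.
as_seconds = exp // 1000
print(datetime.datetime.fromtimestamp(as_seconds, datetime.timezone.utc))
# 2018-10-10 10:24:30+00:00
```

So the provider is emitting millisecond timestamps where the spec calls for seconds; either the provider should be fixed, or the consuming side must parse the claim into a 64-bit field and normalize it.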
(Possibly a duplicate of Can't send a keyedMessage to brokers with partitioner.class=kafka.producer.DefaultPartitioner, although the OP of that question didn't mention kafka-python. And anyway, it never got an answer.)
I have a Python program that has been successfully (for many months) sending messages to the Kafka broker, using essentially the following logic:
producer = kafka.KafkaProducer(bootstrap_servers=[some_addr],
                               retries=3)
...
msg = json.dumps(some_message)
res = producer.send(some_topic, value=msg)
Recently, I tried to upgrade it to send messages to different partitions based on a definite key value extracted from the message:
producer = kafka.KafkaProducer(bootstrap_servers=[some_addr],
                               key_serializer=str.encode,
                               retries=3)
...
try:
    key = some_message[0]
except:
    key = None
msg = json.dumps(some_message)
res = producer.send(some_topic, value=msg, key=key)
However, with this code, no messages ever make it out of the program to the broker. I've verified that the key value extracted from some_message is always a valid string. Presumably I don't need to define my own partitioner, since, according to the documentation:
The default partitioner implementation hashes each non-None key using the same murmur2 algorithm as the java client so that messages with the same key are assigned to the same partition.
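As an illustration of that property (a simplified sketch, not the actual murmur2 implementation kafka-python ships), all a key-based partitioner has to guarantee is that equal keys map deterministically to the same partition:

```python
import hashlib

def pick_partition(key: bytes, num_partitions: int) -> int:
    """Deterministically map a key to a partition.

    kafka-python's real default uses murmur2; md5 here is only a
    stand-in to show the 'same key -> same partition' property.
    """
    h = int.from_bytes(hashlib.md5(key).digest()[:4], "big")
    return h % num_partitions

# Equal keys always land on the same partition.
print(pick_partition(b"user-42", 8) == pick_partition(b"user-42", 8))  # True
```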
Furthermore, with the new code, when I try to determine what happened to my send by calling res.get (to obtain a kafka.FutureRecordMetadata), that call throws a TypeError exception with the message "descriptor 'encode' requires a 'str' object but received a 'unicode'".
(As a side question, I'm not exactly sure what I'd do with the FutureRecordMetadata if I were actually able to get it. Based on the kafka-python source code, I assume I'd want to call either its succeeded or its failed method, but the documentation is silent on the point. The documentation does say that the return value of send "resolves to" RecordMetadata, but I haven't been able to figure out, from either the documentation or the code, what "resolves to" means in this context.)
Anyway: I can't be the only person using kafka-python 1.3.3 who's ever tried to send messages with a partitioning key, and I have not seen anything on the Intertubes describing a similar problem (except for the SO question I referenced at the top of this post).
I'm certainly willing to believe that I'm doing something wrong, but I have no idea what that might be. Is there some additional parameter I need to supply to the KafkaProducer constructor?
The fundamental problem turned out to be that my key value was a unicode, even though I was quite convinced that it was a str. Hence the selection of str.encode for my key_serializer was inappropriate, and was what led to the exception from res.get. Omitting the key_serializer and calling key.encode('utf-8') was enough to get my messages published, and partitioned as expected.
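Concretely, the fix boils down to encoding the key yourself before handing it to the producer (a sketch of the record-preparation step only; the KafkaProducer itself is omitted since it needs a live broker, and prepare_record is a hypothetical helper name):

```python
import json

def prepare_record(some_message):
    """Build (key, value) for producer.send, per the fix above.

    Encoding the key explicitly with key.encode('utf-8') removes the
    need for key_serializer=str.encode, which fails when the key is
    a unicode object rather than a str.
    """
    try:
        key = some_message[0]
    except (IndexError, KeyError, TypeError):
        key = None
    value = json.dumps(some_message)
    return (key.encode('utf-8') if key is not None else None, value)

key, value = prepare_record(["user-1", "hello"])
print(key)  # b'user-1'
```

The producer call then becomes `res = producer.send(some_topic, value=value, key=key)` with no key_serializer configured.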
A large contributor to the obscurity of this problem (for me) was that the kafka-python 1.3.3 documentation does not go into any detail on what a FutureRecordMetadata really is, nor what one should expect in the way of exceptions its get method can raise. The sole usage example in the documentation:
# Asynchronous by default
future = producer.send('my-topic', b'raw_bytes')

# Block for 'synchronous' sends
try:
    record_metadata = future.get(timeout=10)
except KafkaError:
    # Decide what to do if produce request failed...
    log.exception()
    pass
suggests that the only kind of exception it will raise is KafkaError, which is not true. In fact, get can and will (re-)raise any exception that the asynchronous publishing mechanism encountered in trying to get the message out the door.
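To illustrate that behavior with a minimal stand-in (this is not kafka-python's actual FutureRecordMetadata, just a toy analogue), the future records whatever exception the asynchronous send hit and re-raises it from get:

```python
class MiniFuture:
    """Toy analogue of kafka-python's future: get() re-raises
    whatever exception the background send recorded."""

    def __init__(self):
        self.value = None
        self.exception = None

    def get(self, timeout=None):
        if self.exception is not None:
            raise self.exception
        return self.value

f = MiniFuture()
# Simulate a serializer failure recorded by the background sender.
f.exception = TypeError("descriptor 'encode' requires a 'str' object "
                        "but received a 'unicode'")
try:
    f.get(timeout=10)
except TypeError as e:
    print("re-raised:", e)
```

So catching only KafkaError around get, as the documentation's example suggests, silently misses serializer errors like the TypeError above.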
I also faced the same error. Once I added json.dumps while sending the key, it worked.
producer.send(
    topic="first_topic",
    key=json.dumps(key).encode('utf-8'),
    value=json.dumps(msg).encode('utf-8')
).add_callback(on_send_success).add_errback(on_send_error)
I've looked through all of the Simperium API docs for all of the different programming languages and can't seem to find this. Is there any documentation for the data returned from an ".all" call (e.g. api.todo.all(:cv=>nil, :data=>false, :username=>false, :most_recent=>false, :timeout=>nil) )?
For example, this is some data returned:
{"ccid"=>"10101010101010101010101010110101010",
 "o"=>"M",
 "cv"=>"232323232323232323232323232",
 "clientid"=>"ab-123123123123123123123123",
 "v"=>{
   "date"=>{"o"=>"+", "v"=>"2015-08-20T00:00:00-07:00"},
   "calendar"=>{"o"=>"+", "v"=>false},
   "desc"=>{"o"=>"+", "v"=>"<p>test</p>\r\n"},
   "location"=>{"o"=>"+", "v"=>"Los Angeles"},
   "id"=>{"o"=>"+", "v"=>43}
 },
 "ev"=>1,
 "id"=>"abababababababababababababab/10101010101010101010101010110101010"}
I can figure out some of it just from context or from the name of the key but a lot of it is guesswork and trial and error. The one that concerns me is the value returned for the "o" key. I assume that a value of "M" is modify and a value of "+" is add. I've also run into "-" for delete and just recently discovered that there is also a "! '-'" which is also a delete but don't know what else it signifies. What other values can be returned in the "o" key? Are there other keys/values that can be returned but are rare? Is there documentation that details what can be returned (that would be the most helpful)?
If it matters, I am using the Ruby API but I think this is a question that, if answered, can be helpful for all APIs.
The response you are seeing is a list of all of the changes which have occurred in the given bucket since some point in its history. In the case where cv is blank, it tries to get the full history.
You can find some of the details in the protocol documentation though it's incomplete and focused on the WebSocket message syntax (the operations are the same however as with the HTTP API).
The information provided by the v parameter is the result of applying the JSON-diff algorithm to the data between changes. With this diff information you can reconstruct the data at any given version as the changes stream in.
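As a rough sketch of how those field diffs fold into an object (assuming the "+" / "M" / "-" meanings guessed in the question; this is not Simperium's actual diff implementation, and apply_diff is a hypothetical helper):

```python
def apply_diff(obj, diff):
    """Fold a Simperium-style field diff into a dict.

    Assumes '+' adds a field, '-' removes it, and any other op
    (e.g. 'M') replaces its value -- semantics inferred from the
    question, not taken from official documentation.
    """
    result = dict(obj)
    for field, change in diff.items():
        op = change.get("o")
        if op == "+":
            result[field] = change["v"]
        elif op == "-":
            result.pop(field, None)
        else:  # modify / replace
            result[field] = change.get("v")
    return result

v = {"location": {"o": "+", "v": "Los Angeles"},
     "id": {"o": "+", "v": 43}}
print(apply_diff({}, v))  # {'location': 'Los Angeles', 'id': 43}
```

Replaying each change's v against the previous version in cv order reconstructs the object at any point in the bucket's history.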
I have a database that contains chat messages between users (one copy per user: one for the sender and one for the recipient). I'm running a Couchbase Server 2.0 cluster with 2 machines (1 Linux and 1 Windows).
I use this map function to retrieve messages between users:
function (doc) {
  if (doc.type == "msg") {
    emit([doc.OwnerId, doc.SndrId, doc.Date], {"Date":doc.Date, "OwnerId":doc.OwnerId, "RcptId":doc.RcptId, "SndrId":doc.SndrId, "Text":doc.Text, "Unread":doc.Unread, "id": doc.id, "type":doc.type});
    emit([doc.OwnerId, doc.RcptId, doc.Date], {"Date":doc.Date, "OwnerId":doc.OwnerId, "RcptId":doc.RcptId, "SndrId":doc.SndrId, "Text":doc.Text, "Unread":doc.Unread, "id": doc.id, "type":doc.type});
  }
}
So when I try to retrieve some messages (ordered descending by date) from the beginning, I use this startkey & endkey:
startkey=[1,2,{}]
endkey=[1,2,0]
But this way I don't get all the messages. To get all of them I need to use
startkey=[1,2,{}]
endkey=[1,2]
Here is an example. With the zero in the endkey:
{"total_rows":1106,"rows":[
{"id":"msg_8aaca454-5580-4e49-a081-918d8eaba9d6","key":[8,75,1342200837278],"value":{"Date":1342200837278,"OwnerId":8,"RcptId":8,"SndrId":75,"Text":"02","Unread":false,"id":"8aaca454-5580-4e49-a081-918d8eaba9d6","type":"msg"}},
{"id":"msg_49417551-bdc9-477b-b8c2-1f36051bb930","key":[8,75,1342199880920],"value":{"Date":1342199880920,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"4","Unread":false,"id":"49417551-bdc9-477b-b8c2-1f36051bb930","type":"msg"}},
{"id":"msg_2724f077-1e76-4fbc-9a4b-f34cb71e2db0","key":[8,75,1342108023448],"value":{"Date":1342108023448,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"55","Unread":false,"id":"2724f077-1e76-4fbc-9a4b-f34cb71e2db0","type":"msg"}},
{"id":"msg_9cc91327-4ba3-45b5-ab64-f2ca63510e8d","key":[8,75,1341413650113],"value":{"Date":1341413650113,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"3","Unread":false,"id":"9cc91327-4ba3-45b5-ab64-f2ca63510e8d","type":"msg"}},
{"id":"msg_9a386663-8a2b-42d9-ae30-0634a98fe574","key":[8,75,1341413648335],"value":{"Date":1341413648335,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"1","Unread":false,"id":"9a386663-8a2b-42d9-ae30-0634a98fe574","type":"msg"}}
]
}
With the endkey without the zero:
{"total_rows":1106,"rows":[
{"id":"msg_dc4b7758-1f0e-491c-a80d-bf3124dc0ad7","key":[8,75,1342200856186],"value":{"Date":1342200856186,"OwnerId":8,"RcptId":8,"SndrId":75,"Text":"03","Unread":false,"id":"dc4b7758-1f0e-491c-a80d-bf3124dc0ad7","type":"msg"}},
{"id":"msg_8aaca454-5580-4e49-a081-918d8eaba9d6","key":[8,75,1342200837278],"value":{"Date":1342200837278,"OwnerId":8,"RcptId":8,"SndrId":75,"Text":"02","Unread":false,"id":"8aaca454-5580-4e49-a081-918d8eaba9d6","type":"msg"}},
{"id":"msg_b2fe9ca0-aa28-41a6-ab3c-09ece6a5e14a","key":[8,75,1342200787811],"value":{"Date":1342200787811,"OwnerId":8,"RcptId":8,"SndrId":75,"Text":"01","Unread":true,"id":"b2fe9ca0-aa28-41a6-ab3c-09ece6a5e14a","type":"msg"}},
{"id":"msg_49417551-bdc9-477b-b8c2-1f36051bb930","key":[8,75,1342199880920],"value":{"Date":1342199880920,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"4","Unread":false,"id":"49417551-bdc9-477b-b8c2-1f36051bb930","type":"msg"}},
{"id":"msg_0d7dc822-b76c-42f9-87ba-3dc486100526","key":[8,75,1342180778835],"value":{"Date":1342180778835,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"Gcgjvv6gvvbnbvvbh B2B bb cb B2B","Unread":false,"id":"0d7dc822-b76c-42f9-87ba-3dc486100526","type":"msg"}},
{"id":"msg_6b26a65f-68e1-4728-87f1-59ea5e0f0c5d","key":[8,75,1342114611144],"value":{"Date":1342114611144,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"546546546546546546546546546546","Unread":false,"id":"6b26a65f-68e1-4728-87f1-59ea5e0f0c5d","type":"msg"}},
{"id":"msg_f89b09ac-ccf1-4958-85c0-259b0b68752b","key":[8,75,1342108583566],"value":{"Date":1342108583566,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"123","Unread":false,"id":"f89b09ac-ccf1-4958-85c0-259b0b68752b","type":"msg"}},
{"id":"msg_2724f077-1e76-4fbc-9a4b-f34cb71e2db0","key":[8,75,1342108023448],"value":{"Date":1342108023448,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"55","Unread":false,"id":"2724f077-1e76-4fbc-9a4b-f34cb71e2db0","type":"msg"}},
{"id":"msg_9cc91327-4ba3-45b5-ab64-f2ca63510e8d","key":[8,75,1341413650113],"value":{"Date":1341413650113,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"3","Unread":false,"id":"9cc91327-4ba3-45b5-ab64-f2ca63510e8d","type":"msg"}},
{"id":"msg_847b2901-95e3-49b9-8756-e495872558a8","key":[8,75,1341413649161],"value":{"Date":1341413649161,"OwnerId":8,"RcptId":75,"SndrId":8,"Text":"2","Unread":false,"id":"847b2901-95e3-49b9-8756-e495872558a8","type":"msg"}}
]
}
Can anyone explain why it doesn't return all the messages in the first case?
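For reference, the range semantics I expect can be sketched like this (simulating view collation with plain Python ordering, with HIGH standing in for {}, since in view collation an object sorts after any number):

```python
# Simulate descending range selection over [owner, sender, timestamp]
# view keys. With startkey=[8,75,{}] and endkey=[8,75,0], every row
# whose timestamp is >= 0 should fall inside the range.
HIGH = float("inf")  # stand-in for {} at the top of the collation order

keys = [(8, 75, 1342200837278), (8, 75, 1342199880920), (8, 75, 1)]

def in_descending_range(key, startkey, endkey):
    # A descending query returns rows with endkey <= key <= startkey.
    return endkey <= key <= startkey

rows = [k for k in keys
        if in_descending_range(k, (8, 75, HIGH), (8, 75, 0))]
print(len(rows))  # 3: all timestamps fall inside the range
```

So under the documented collation rules the two endkeys should return the same rows for positive timestamps, which is why the observed difference looks like a bug rather than intended behavior.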
Thanks to Filipe Manana from the Couchbase team, who answered this question.
This issue is caused by Erlang OTP R14B03 (and all prior versions) on 64-bit platforms.
Here is Filipe's commit that fixes the bug: https://github.com/erlang/otp/commit/03d8c2877342d5ed57596330a61ec0374092f136
For more details see http://www.couchbase.com/forums/thread/couchbase-view-not-rerurning-all-values
NB: CouchDB and Couchbase are about as similar as Membase and MySQL, i.e. products that start with the same prefix. Best not to get them confused!
It might be related to whether sorting is done via ASCII (what you are expecting) vs. Unicode collation. There's some more information here http://wiki.apache.org/couchdb/View_collation but that applies to CouchDB, not Couchbase. Maybe that helps?