Anorm's Row object no longer exists in Play 2.3 - anorm

After upgrading to Play 2.3.0 I get this compilation error on the Row object:
not found: value Row
I noticed the Row object no longer exists in Play 2.3.0 (I've found only the Row trait). Looking at the documentation, pattern matching should still be supported in Play 2.3:
http://www.playframework.com/documentation/2.3.x/ScalaAnorm
See "Using Pattern Matching" paragraph
Here's my code:
def findById(aId: Long) = {
  DB.withConnection { implicit conn =>
    SQL(byIdStmt).on("id" -> aId)().map {
      case Row(id: Integer, Some(userId: String), Some(description: String),
               Some(solrCriteria: String), Some(solrCriteriaHash: String),
               Some(hits: Integer), Some(lastPerformedUtc: java.sql.Timestamp), Some(notify: Boolean)) =>
        new UserInquiry(id.toLong, userId, description, solrCriteria, solrCriteriaHash,
          hits, lastPerformedUtc, notify)
    }.head
  }
}
How can I solve this?

As noted, this pattern matching has been restored on Play master by https://github.com/playframework/playframework/pull/3049.
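Until you are on a build that includes that fix, one workaround is Anorm's parser combinator API, which still works in Play 2.3. A rough sketch, reusing the column names and the UserInquiry constructor from the question (adjust types and names to your actual schema):

import anorm._
import anorm.SqlParser._

def findById(aId: Long) = DB.withConnection { implicit conn =>
  // Parse columns by name instead of pattern matching on Row.
  val parser = long("id") ~ str("userId") ~ str("description") ~
    str("solrCriteria") ~ str("solrCriteriaHash") ~ int("hits") ~
    date("lastPerformedUtc") ~ bool("notify") map {
      case id ~ userId ~ description ~ criteria ~ criteriaHash ~ hits ~ performed ~ notify =>
        new UserInquiry(id, userId, description, criteria, criteriaHash,
          hits, new java.sql.Timestamp(performed.getTime), notify)
    }
  SQL(byIdStmt).on("id" -> aId).as(parser.single)
}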

Related

Asserting timestamp with microseconds equals mysql database value in Ecto/Phoenix

I've been playing around with Elixir Phoenix and have a simple integration test that checks that the JSON response for a model is the same as the JSON-rendered representation of that model.
The test looks like this:
test "#show renders a single link" do
conn = get_authenticated_conn()
link = insert(:link)
conn = get conn, link_path(conn, :show, link)
assert json_response(conn, 200) == render_json(LinkView, "show.json", link: link)
end
This used to work fine, but following a recent mix deps.update the test has broken due to a precision problem with the timestamps of the model. Here is the output from the test:
Assertion with == failed
code: json_response(conn, 200) == render_json(LinkView, "show.json", link: link)
lhs: %{"id" => 10, "title" => "A handy site to find stuff on the internet", "url" => "http://google.com", "inserted_at" => "2017-01-09T19:27:57.000000", "updated_at" => "2017-01-09T19:27:57.000000"}
rhs: %{"id" => 10, "title" => "A handy site to find stuff on the internet", "url" => "http://google.com", "inserted_at" => "2017-01-09T19:27:56.606085", "updated_at" => "2017-01-09T19:27:56.606093"}
We can see that the timestamps in the response returned by the controller do not match those in the JSON rendering of the model. This is because the MySQL database (5.7) rounds microseconds down to 0, whilst the in-memory Ecto representation of the model supports higher precision. My migration just uses Ecto's timestamps function.
What's the best way to get these tests to pass? I don't particularly care about microsecond precision for my timestamps, but clearly Ecto has made them more precise in a recent update. I have a feeling it might be a problem with Mariaex, but I'm not sure how to solve it.
As mentioned in the Ecto v2.1 CHANGELOG, to get the old behavior of not keeping usec in automatic timestamps (the default before Ecto v2.1), you can add the following module attribute just before the call to schema in the relevant model(s):
@timestamps_opts [usec: false]
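For example, in a schema module like the Link used in the test above (the module and field names here are assumptions, this is only a sketch of where the attribute goes):

defmodule MyApp.Link do
  use Ecto.Schema

  # Restore the pre-2.1 behaviour: no microseconds in inserted_at/updated_at.
  @timestamps_opts [usec: false]

  schema "links" do
    field :title, :string
    field :url, :string

    timestamps()
  end
end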

Parse complex JSON string contained in Hadoop

I want to parse a string of complex JSON in Pig. Specifically, I want Pig to understand my JSON array as a bag instead of as a single chararray. I found that complex JSON can be parsed by using Twitter's Elephant Bird or Mozilla's Akela library. (I found some additional libraries, but I cannot use 'Loader' based approach since I use HCatalog Loader to load data from Hive.)
But the problem is the structure of my data: each value of the map contains the value part of a complex JSON document. For example,
1. My table looks like this (WARNING: the type of 'complex_data' is not STRING but MAP<STRING, STRING>!):
TABLE temp_table
(
  user_id       BIGINT               COMMENT 'user ID.',
  complex_data  MAP<STRING, STRING>  COMMENT 'complex json data'
)
COMMENT 'temp data.'
PARTITIONED BY (created_date STRING)
STORED AS RCFILE;
2. And 'complex_data' contains the following (the values I want are marked with two *s, so essentially I want #'d'#'f' from each element of PARSED_STRING(complex_data#'c')):
{ "a": "[]",
"b": "\"sdf\"",
"**c**":"[{\"**d**\":{\"e\":\"sdfsdf\"
,\"**f**\":\"sdfs\"
,\"g\":\"qweqweqwe\"},
\"c\":[{\"d\":21321,\"e\":\"ewrwer\"},
{\"d\":21321,\"e\":\"ewrwer\"},
{\"d\":21321,\"e\":\"ewrwer\"}]
},
{\"**d**\":{\"e\":\"sdfsdf\"
,\"**f**\":\"sdfs\"
,\"g\":\"qweqweqwe\"},
\"c\":[{\"d\":21321,\"e\":\"ewrwer\"},
{\"d\":21321,\"e\":\"ewrwer\"},
{\"d\":21321,\"e\":\"ewrwer\"}]
},]"
}
3. So, I tried the following (same approach for Elephant Bird):
REGISTER '/path/to/akela-0.6-SNAPSHOT.jar';
DEFINE JsonTupleMap com.mozilla.pig.eval.json.JsonTupleMap();

data = LOAD 'temp_table' USING org.apache.hive.hcatalog.pig.HCatLoader();

values_of_map = FOREACH data GENERATE complex_data#'c' AS attr:chararray; -- IT WORKS
-- dump values_of_map shows correct chararray data per row, e.g.
-- ([{"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... },
--   {"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... },
--   {"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... }])
-- ([{"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... },
--   {"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... },
--   {"d":{"e":"sdfsdf","f":"sdfs","g":"sdf"},... }]) ...

attempt1 = FOREACH data GENERATE JsonTupleMap(complex_data#'c'); -- THIS LINE CAUSES AN ERROR
attempt2 = FOREACH data GENERATE JsonTupleMap(CONCAT(CONCAT('{\\"key\\":', complex_data#'c'), '}')); -- IT ALSO DOES NOT WORK
I guessed that "attempt1" failed because the value doesn't contain a full JSON document. However, when I CONCAT as in "attempt2", additional \ marks are generated (so each line starts with {\"key\": ). I'm not sure whether these additional marks break the parsing rule or not. In any case, I want to parse the given JSON string so that Pig can understand it. If you have any method or solution, please feel free to let me know.
I finally solved my problem by using the jyson library with a Jython UDF.
I know that I could solve it with Java or other languages, but I think Jython with jyson is the simplest answer to this issue.
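For reference, a rough sketch of that approach (the jar paths, UDF name, and output schema below are assumptions based on the question, not the exact code used):

-- in the Pig script
REGISTER '/path/to/jyson-1.0.2.jar';
REGISTER 'extract_udf.py' USING jython AS myudf;
result = FOREACH data GENERATE user_id, myudf.extract_f(complex_data#'c');

# extract_udf.py
import com.xhaus.jyson.JysonCodec as json

@outputSchema("fs:bag{t:tuple(f:chararray)}")
def extract_f(json_str):
    # Parse the JSON array stored in complex_data#'c' and emit each element's d.f value.
    if json_str is None:
        return None
    out = []
    for element in json.loads(json_str):
        d = element.get('d', {})
        out.append((d.get('f'),))
    return out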

Sinatra + DataMapper + MySQL

I am using Sinatra and DataMapper with MySQL, and I am getting issues when I query the database.
My models.rb is the following:
require 'sinatra'
require 'dm-core'
require 'dm-migrations/adapters/dm-mysql-adapter'

DataMapper::Logger.new("log/datamapper.log", :debug)
DataMapper.setup(:default, 'mysql://user:password@localhost/testdb')

class Item
  include DataMapper::Resource

  property :id,   Serial
  property :item, String, :length => 50
end

DataMapper.finalize
DataMapper.auto_upgrade!

Item.create(item: "item_one")
Item.create(item: "item_two")
The items are inserted in the database, but when I query the database it always returns nil values, for example:
(rdb:1) @items = Item.all
[#<Item @id=nil @item=nil>, #<Item @id=nil @item=nil>]
If I query the number of items, I get the expected result:
(rdb:1) @items.count
2
I have tried to make a query directly, getting the same result:
adapter = DataMapper.repository(:default).adapt
adapter.select("SELECT * FROM items")
Does anyone know what I'm doing wrong, or have suggestions on what to look for to fix the problem?
Add these two lines to models.rb:
adapter = DataMapper.repository(:default).adapter
print adapter.select("SELECT * FROM items")
(Notice .adapter, not .adapt.) It prints
[#<struct id=1, item="item_one">, #<struct id=2, item="item_two">]
Everything works as expected (ruby 2.1.7p400 (2015-08-18 revision 51632)).

GetMapping not working for Nest client in Elasticsearch

Perhaps some of the documentation at http://nest.azurewebsites.net/ is old, because I'm running into at least a few issues...
I've got a JSON object 'search'. I am getting null returned from the GetMapping function. Well, it returns a Nest.RootObjectMapping object, but all fields within are null. I can get the mapping fine using Sense or regular curl.
var mapping = elasticClient.GetMapping<MyJsonPOCO>();
Any ideas?
Also, just as an example of other things going wrong, this search works, but adding 'fields' to it does not (I got the fields declaration from the documentation):
var result = elasticClient.Search<MyJsonPOCO>(s => s
    .Query(q => q
        .QueryString(qs => qs
            .OnField(e => e.Title)
            .Query("my search term"))));
If I use this query with the fields added (to just return 'title'), I get a JSON parser issue:
var result = elasticClient.Search<MyJsonPOCO>(s => s
    .Fields(f => f.Title)
    .Query(q => q
        .QueryString(qs => qs
            .OnField(e => e.Title)
            .Query("my search term"))));
Here's the error for that one:
An exception of type 'Newtonsoft.Json.JsonReaderException' occurred in Newtonsoft.Json.dll but was not handled in user code
Additional information: Error reading string. Unexpected token: StartArray. Path 'hits.hits[0].fields.title', line 1, position 227.
Elasticsearch 1.0 changed the way fields are returned in the search response.
You need the NEST 1.0 beta1 release to work with Elasticsearch 1.0:
http://www.elasticsearch.org/blog/introducing-elasticsearch-net-nest-1-0-0-beta1/
See also this GitHub issue for more background information on why, and on how to work with fields from 1.0 forward:
https://github.com/elasticsearch/elasticsearch-net/issues/590
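For context, the root cause of the JsonReaderException above is a response-format change, illustrated roughly below (response shape only, not copied from the linked issue). Before 1.0 a single-valued field came back as a plain value:

"fields" : { "title" : "my search term" }

From 1.0 onwards every value under fields is wrapped in an array:

"fields" : { "title" : [ "my search term" ] }

That array is the StartArray token the older NEST client chokes on; the NEST 1.0 beta1 release deserializes the new format.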

MongoDB - Dynamically update an object in nested array

I have a document like this:
{
  Name : val,
  AnArray : [
    {
      Time : SomeTime
    },
    {
      Time : AnotherTime
    }
    ... arbitrarily more elements
  ]
}
I need to update "Time" to a Date type (right now it is a string).
I would like to do something like this pseudocode:
foreach record in document.AnArray { record.Time = new Date(record.Time) }
I've read the documentation on $ and "dot" notation, as well as several similar questions here, and I tried this code:
db.collection.update({_id:doc._id},{$set : {AnArray.$.Time : new Date(AnArray.$.Time)}});
I was hoping that $ would iterate over the indexes of the "AnArray" property, as I don't know its length for each record. But I am getting the error:
SyntaxError: missing : after property id (shell):1
How can I perform an update on each member of the array's nested values with a dynamic value?
There's no direct way to do that, because MongoDB doesn't support an update expression that references the document itself. Moreover, the $ operator only applies to the first match, so you'd have to repeat the update as long as there are still fields where AnArray.Time is of $type string.
You can, however, perform that update client side, in your favorite language or in the mongo console using JavaScript:
db.collection.find({}).forEach(function (doc) {
  for (var i in doc.AnArray)
  {
    doc.AnArray[i].Time = new Date(doc.AnArray[i].Time);
  }
  db.outcollection.save(doc);
})
Note that this will store the migrated data in a different collection. You can also update the collection in-place by replacing outcollection with collection.