Mercurial: push creates new remote head on branch

I've read probably 15 other threads with the same problem, but every solution has been "try pulling and merging", which I've done several times.
Anyway, we have a main branch called "develop". I don't have permission to push directly to this branch. When I do hg push --new-branch, I get the message:
abort: push creates new remote head 872f21c639d0 on branch 'develop'!
Somehow, I guess I must have made changes to the develop branch without knowing it. So I thought that pushing only my branch would work (as Mercurial pushes all branches by default). But calling hg push -r . --new-branch gives me the same error. Why?
When I call hg heads develop I get the following:
changeset: 42407:872f21c639d0
branch: develop
parent: 42300:f25b068a235d
parent: 42313:1b2c4907cd9a
user: ---
date: Tue Jun 25 09:30:41 2019 +0200
summary: Merged
changeset: 42402:05374ad32e87
branch: develop
parent: 42396:5ab53c02c668
parent: 42401:552e0a676dad
user: ---
date: Wed Jun 26 06:10:22 2019 +0000
summary: ~some message~
Okay, strange, I have two heads. I tried to hg strip one of the heads on develop, but then all the changes I made on MY branch disappeared, and there were still two heads (a different one had appeared). Any idea how to fix this?
EDIT: Output of hg log -G -b develop
o changeset: 42407:05374ad32e87
|\ branch: develop
| : parent: 42396:5ab53c02c668
| : parent: 42401:552e0a676dad
| : user: P K
| : date: Wed Jun 26 06:10:22 2019 +0000
| : summary: Merged in ... (pull request #3700)
| :
| \
| :\
| : \
| : :\
| : : \
| : : :\
| : : : \
| : : : :\
| : +-------o changeset: 42402:872f21c639d0
| : : : : : | branch: develop
| : : : : : | parent: 42300:f25b068a235d
| : : : : : | parent: 42313:1b2c4907cd9a
| : : : : : | user: Me
| : : : : : | date: Tue Jun 25 09:30:41 2019 +0200
| : : : : : | summary: Merged
| : : : : : |
o : : : : : | changeset: 42396:5ab53c02c668
|\ \ \ \ \ \ \ branch: develop
| : : : : : : : parent: 42393:8e323e33732f
| : : : : : : : parent: 42394:8bc5afa596e5
| : : : : : : : user: M L
| : : : : : : : date: Tue Jun 25 16:06:05 2019 +0000
| : : : : : : : summary: Merged in d... (pull request #3673)
| : : : : : : :
| \ \ \ \ \ \ \
| :\ \ \ \ \ \ \
| : \ \ \ \ \ \ \
| : :\ \ \ \ \ \ \
| : : \ \ \ \ \ \ \
| : : :\ \ \ \ \ \ \
o-----------------+-+ changeset: 42393:8e323e33732f
:\ \ \ \ \ \ \ \ \ \ \ branch: develop
: : : : : : : : : : : | parent: 42384:bc2542ffb64e
: : : : : : : : : : : | parent: 42390:61241d9fc224
: : : : : : : : : : : | user: M L
: : : : : : : : : : : | date: Tue Jun 25 15:25:11 2019 +0000
: : : : : : : : : : : | summary: Merged in ... (pull request #3706)
: : : : : : : : : : : |
: : : : : : : +-----o | changeset: 42384:bc2542ffb64e
: : : : : : : : : : |\ \ branch: develop
: : : : : : : : : : | : | parent: 42381:701c99b3e133
: : : : : : : : : : | : | parent: 42383:b9c8fa2d8f00
: : : : : : : : : : | : | user: P A
: : : : : : : : : : | : | date: Tue Jun 25 14:25:54 2019 +0000
: : : : : : : : : : | : | summary: Merged in ... (pull request #3689)
: : : : : : : : : : | : |
: +-----------+-----o : | changeset: 42381:701c99b3e133
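For reference, the usual way out of "push creates new remote head" is to merge the two heads rather than strip one of them. A sketch, assuming 05374ad32e87 is the head the remote already has, 872f21c639d0 is your local merge, and that you are allowed to push the resulting merge (the question notes direct pushes to develop are restricted, so in practice this may need to go through a pull request):
hg pull                        # make sure both heads are present locally
hg update -r 05374ad32e87      # stand on the head the server already knows
hg merge -r 872f21c639d0       # fold the other head into it
hg commit -m "merge heads on develop"
hg push -r . --new-branch
Note that hg strip only rewrites the local repository; a subsequent pull can bring the stripped head straight back, which would explain why two heads remained after stripping.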


circe doesn't see field when it contains an array

I've got 2 "tests", of which the one where I'm trying to decode a user works, but the one where I'm trying to decode a list of users doesn't:
import User._
import io.circe._
import io.circe.syntax._
import io.circe.parser.decode
class UserSuite extends munit.FunSuite:
test("List of users can be decoded") {
val json = """|{
| "data" : [
| {
| "id" : "someId",
| "name" : "someName",
| "username" : "someusername"
| },
| {
| "id" : "someId",
| "name" : "someName",
| "username" : "someusername"
| }
| ]
|}""".stripMargin
println(decode[List[User]](json))
}
test("user can be decoded") {
val json = """|{
| "data" : {
| "id" : "someId",
| "name" : "someName",
| "username" : "someusername"
| }
|}""".stripMargin
println(decode[User](json))
}
The failing one produces
Left(DecodingFailure(List, List(DownField(data))))
despite the fact that both the json's relevant structure and the decoders (below) are the same.
import io.circe.Decoder
import io.circe.generic.semiauto.deriveDecoder
final case class User(
id: String,
name: String,
username: String
)
object User:
given Decoder[List[User]] =
deriveDecoder[List[User]].prepare(_.downField("data"))
given Decoder[User] =
deriveDecoder[User].prepare(_.downField("data"))
As far as I understand this should work, even according to one of Travis's older replies, but it doesn't.
Is this a bug? Am I doing something wrong?
For reference, this is Scala 3.2.0 and circe 0.14.1.
The thing is that you need two different decoders for User: one that expects a data field, to decode the 2nd JSON, and one that does not expect a data field, to be used while deriving the decoder for a list. Otherwise the 1st JSON would have to be
"""|{
| "data" : [
| {
| "data" :
| {
| "id" : "someId",
| "name" : "someName",
| "username" : "someusername"
| }
| },
| {
| "data" :
| {
| "id" : "someId",
| "name" : "someName",
| "username" : "someusername"
| }
| }
| ]
|}""
It's better to be explicit here:
import io.circe.Decoder
import io.circe.generic.semiauto
final case class User(
id: String,
name: String,
username: String
)
object User {
val userDec: Decoder[User] = semiauto.deriveDecoder[User]
val preparedUserDec: Decoder[User] = userDec.prepare(_.downField("data"))
val userListDec: Decoder[List[User]] = {
implicit val dec: Decoder[User] = userDec
Decoder[List[User]].prepare(_.downField("data"))
}
}
import io.circe.parser.decode
val json =
"""|{
| "data" : [
| {
| "id" : "someId",
| "name" : "someName",
| "username" : "someusername"
| },
| {
| "id" : "someId",
| "name" : "someName",
| "username" : "someusername"
| }
| ]
|}""".stripMargin
decode[List[User]](json)(User.userListDec)
// Right(List(User(someId,someName,someusername), User(someId,someName,someusername)))
val json1 =
"""|{
| "data" : {
| "id" : "someId",
| "name" : "someName",
| "username" : "someusername"
| }
|}""".stripMargin
decode[User](json1)(User.preparedUserDec)
// Right(User(someId,someName,someusername))
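In Scala 3 syntax the same separation can be kept in the companion object. A minimal sketch, assuming the case class User above (the names plainUser, wrappedUser and wrappedUsers are mine):
import io.circe.Decoder
import io.circe.generic.semiauto.deriveDecoder
object User:
// derived decoder with no envelope; used for the elements of the "data" array
val plainUser: Decoder[User] = deriveDecoder[User]
// a single user wrapped in {"data": {...}}
val wrappedUser: Decoder[User] = plainUser.prepare(_.downField("data"))
// a list of users wrapped in {"data": [...]}; the element decoder must be the plain one
val wrappedUsers: Decoder[List[User]] = Decoder.decodeList(using plainUser).prepare(_.downField("data"))
Usage then stays explicit at the call site, e.g. decode[List[User]](json)(using User.wrappedUsers).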

JSON queries in PostgreSQL 9.6

Presume the following data:
select * from orderlist ;
id | orders | orderenvelope_id
----+--------------------------------------------+------------------
14 | { +| 13
| "orders" : [ { +|
| "orderType" : "OfferPurchaseOrder", +|
| "duration" : 1494413009450, +|
| "currencyCode" : "EUR" +|
| }, { +|
| "orderType" : "CustomerCreationOrder",+|
| "customerData" : { +|
| "customerType" : "PERSONAL_ACCOUNT",+|
| "contactData" : { +|
| "contactQuality" : "VALID", +|
| "firstName" : "Peter", +|
| "lastName" : "Pan" +|
| } +|
| } +|
| } ] +|
| } |
I want to get the 'OfferPurchaseOrder'; therefore the following SELECT was used:
select id, orderenvelope_id, o from orderlist list, json_array_elements(list.orders->'orders') as o where o->>'orderType' = 'OfferPurchaseOrder';
id | orderenvelope_id | o
----+------------------+-----------------------------------------
14 | 13 | { +
| | "orderType" : "OfferPurchaseOrder",+
| | "duration" : 1494413009450, +
| | "currencyCode" : "EUR" +
| | }
It looks as if it works like a charm; only one thing: I want to integrate with Hibernate, so the column should be named 'orders' instead of 'o' (as it was in the initial select); otherwise Hibernate will not be able to map things properly.
Aside from this, the 'reduced' JSON list should be in there, so the desired result should look like this:
id | orderenvelope_id | orders |
----+------------------+----------------------------------------+
14 | 13 | "orders" : [{ +
| | "orderType" : "OfferPurchaseOrder",+
| | "duration" : 1494413009450, +
| | "currencyCode" : "EUR" +
| | } +
| |]
Any hints?
Thx and regards
el subcomandante
If you can move to the jsonb type, then the query can look like:
WITH x AS (
SELECT id, orderenvelope_id, o
FROM orderlist list, jsonb_array_elements(list.orders->'orders') as o
WHERE o->>'orderType' = 'OfferPurchaseOrder'
)
SELECT id, orderenvelope_id, jsonb_set('{}'::jsonb, '{orders}'::text[], jsonb_agg(o)) AS orders
FROM x
GROUP BY 1,2
;
However, if you can't use jsonb, just cast the text to json:
WITH x AS (
SELECT id, orderenvelope_id, o
FROM orderlist list, json_array_elements(list.orders->'orders') as o
WHERE o->>'orderType' = 'OfferPurchaseOrder'
)
SELECT id, orderenvelope_id, ('{"orders": ' || json_agg(o) || '}')::json AS orders
FROM x
GROUP BY 1,2
;
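Alternatively, since PostgreSQL 9.4 you can build the wrapper object directly instead of concatenating strings. A minimal sketch against the same table, which also aliases the column to orders as Hibernate requires:
SELECT id, orderenvelope_id, json_build_object('orders', json_agg(o)) AS orders
FROM orderlist list, json_array_elements(list.orders->'orders') AS o
WHERE o->>'orderType' = 'OfferPurchaseOrder'
GROUP BY 1, 2;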

slick db.run does not insert action

I'm trying to do a simple insert in a MySQL table using Slick. As you can see in the debug output below, the code gets executed but the values do not get inserted into the database.
This is the Database.scala code:
//import slick.jdbc.JdbcBackend._
import slick.dbio.DBIOAction
import slick.driver.MySQLDriver.api._
import slick.lifted.TableQuery
import java.sql.Timestamp
class Database {
val url = "jdbc:mysql://username=root:password=xxx#localhost/playdb"
val db = Database.forURL(url, driver = "com.mysql.jdbc.Driver")
val emrepo = TableQuery[EmailMessageTable]
override def finalize() {
db.close()
super.finalize()
}
protected class EmailMessageTable(tag: Tag) extends Table[EmailMessage](tag, "email_message") {
def id = column[Option[Long]]("id", O.AutoInc, O.PrimaryKey)
def email = column[String]("email")
def subject = column[String]("subject")
def body = column[String]("body")
def datain = column[Timestamp]("datain")
def email_id= column[Long]("email_id")
def * = (id, email, subject, body, datain, email_id) <> ((EmailMessage.apply _).tupled, EmailMessage.unapply)
def ? = (id.get.?, email.?, subject.?, body.?, datain.?, email_id.?).shaped.<>({ r => import r._; _1.map(_ =>
EmailMessage.tupled((_1, _2.get, _3.get, _4.get, _5.get, _6.get))) }, (_: Any) =>
throw new Exception("Inserting into ? projection not supported."))
}
def insert(m: EmailMessage) {
db.run(
(emrepo += m)
)
}
}
The calling code:
def toDatabase(m: EmailMessage): EmailMessage = {
val db = new Database()
println("HIT")
db.insert(m)
println("HIT 2")
println(m)
m
}
The case class object that is inserted into the database:
import java.sql.Timestamp
case class EmailMessage(
id: Option[Long],
email: String,
subject:String,
body:String,
datain: Timestamp,
email_id: Long
)
DEBUG output, showing the call done to Slick, and Slick debug output:
HIT
2016-09-06 16:08:41:563 -0300 [run-main-0] DEBUG slick.compiler.QueryCompiler - Source:
| TableExpansion
| table s2: Table email_message
| columns: TypeMapping
| 0: ProductNode
| 1: Path s2.id : Option[Long']
| 2: Path s2.email : String'
| 3: Path s2.subject : String'
| 4: Path s2.body : String'
| 5: Path s2.datain : java.sql.Timestamp'
| 6: Path s2.email_id : Long'
2016-09-06 16:08:41:587 -0300 [run-main-0] DEBUG slick.compiler.AssignUniqueSymbols - Detected features: UsedFeatures(false,true,false,false)
2016-09-06 16:08:41:597 -0300 [run-main-0] DEBUG slick.compiler.QueryCompiler - After phase assignUniqueSymbols:
| TableExpansion
| table s3: Table email_message
| columns: TypeMapping
| 0: ProductNode
| 1: Path s3.id : Option[Long']
| 2: Path s3.email : String'
| 3: Path s3.subject : String'
| 4: Path s3.body : String'
| 5: Path s3.datain : java.sql.Timestamp'
| 6: Path s3.email_id : Long'
2016-09-06 16:08:41:605 -0300 [run-main-0] DEBUG slick.compiler.QueryCompiler - After phase inferTypes: (no change)
2016-09-06 16:08:41:624 -0300 [run-main-0] DEBUG slick.compiler.QueryCompiler - After phase insertCompiler:
| ResultSetMapping : Vector[(String', String', String', java.sql.Timestamp', Long')]
| from s5: Insert allFields=[id, email, subject, body, datain, email_id] : (String', String', String', java.sql.Timestamp', Long')
| table s6: Table email_message : Vector[#t4<UnassignedType>]
| linear: ProductNode : (String', String', String', java.sql.Timestamp', Long')
| 1: Path s6.email : String'
| 2: Path s6.subject : String'
| 3: Path s6.body : String'
| 4: Path s6.datain : java.sql.Timestamp'
| 5: Path s6.email_id : Long'
| map: TypeMapping : Mapped[(Option[Long'], String', String', String', java.sql.Timestamp', Long')]
| 0: ProductNode : (Option[Long'], String', String', String', java.sql.Timestamp', Long')
| 1: InsertColumn id : Option[Long']
| 2: InsertColumn email : String'
| 0: Path s5._1 : String'
| 3: InsertColumn subject : String'
| 0: Path s5._2 : String'
| 4: InsertColumn body : String'
| 0: Path s5._3 : String'
| 5: InsertColumn datain : java.sql.Timestamp'
| 0: Path s5._4 : java.sql.Timestamp'
| 6: InsertColumn email_id : Long'
| 0: Path s5._5 : Long'
2016-09-06 16:08:41:638 -0300 [run-main-0] DEBUG slick.compiler.CodeGen - Compiling server-side and mapping with server-side:
| Insert allFields=[id, email, subject, body, datain, email_id] : (String', String', String', java.sql.Timestamp', Long')
| table s6: Table email_message : Vector[#t4<UnassignedType>]
| linear: ProductNode : (String', String', String', java.sql.Timestamp', Long')
| 1: Path s6.email : String'
| 2: Path s6.subject : String'
| 3: Path s6.body : String'
| 4: Path s6.datain : java.sql.Timestamp'
| 5: Path s6.email_id : Long'
2016-09-06 16:08:41:673 -0300 [run-main-0] DEBUG slick.relational.ResultConverterCompiler - Compiled ResultConverter
| TypeMappingResultConverter
| child: ProductResultConverter
| 1: CompoundResultConverter
| 2: SpecializedJdbcResultConverter$$anon$1 idx=1, name=email : String'
| 3: SpecializedJdbcResultConverter$$anon$1 idx=2, name=subject : String'
| 4: SpecializedJdbcResultConverter$$anon$1 idx=3, name=body : String'
| 5: SpecializedJdbcResultConverter$$anon$1 idx=4, name=datain : java.sql.Timestamp'
| 6: BaseResultConverter$mcJ$sp idx=5, name=email_id : Long'
2016-09-06 16:08:41:675 -0300 [run-main-0] DEBUG slick.compiler.CodeGen - Compiled server-side to:
| CompiledStatement "insert into `email_message` (`email`,`subject`,`body`,`datain`,`email_id`) values (?,?,?,?,?)" : (String', String', String', java.sql.Timestamp', Long')
2016-09-06 16:08:41:681 -0300 [run-main-0] DEBUG slick.compiler.QueryCompiler - After phase codeGen:
| ResultSetMapping : Vector[(String', String', String', java.sql.Timestamp', Long')]
| from s5: CompiledStatement "insert into `email_message` (`email`,`subject`,`body`,`datain`,`email_id`) values (?,?,?,?,?)" : (String', String', String', java.sql.Timestamp', Long')
| map: CompiledMapping : Mapped[(Option[Long'], String', String', String', java.sql.Timestamp', Long')]
| converter: TypeMappingResultConverter
| child: ProductResultConverter
| 1: CompoundResultConverter
| 2: SpecializedJdbcResultConverter$$anon$1 idx=1, name=email : String'
| 3: SpecializedJdbcResultConverter$$anon$1 idx=2, name=subject : String'
| 4: SpecializedJdbcResultConverter$$anon$1 idx=3, name=body : String'
| 5: SpecializedJdbcResultConverter$$anon$1 idx=4, name=datain : java.sql.Timestamp'
| 6: BaseResultConverter$mcJ$sp idx=5, name=email_id : Long'
2016-09-06 16:08:41:682 -0300 [run-main-0] DEBUG slick.compiler.QueryCompilerBenchmark - ------------------- Phase: Time ---------
2016-09-06 16:08:41:702 -0300 [run-main-0] DEBUG slick.compiler.QueryCompilerBenchmark - assignUniqueSymbols: 32,729098 ms
2016-09-06 16:08:41:703 -0300 [run-main-0] DEBUG slick.compiler.QueryCompilerBenchmark - inferTypes: 7,924984 ms
2016-09-06 16:08:41:703 -0300 [run-main-0] DEBUG slick.compiler.QueryCompilerBenchmark - insertCompiler: 18,786989 ms
2016-09-06 16:08:41:703 -0300 [run-main-0] DEBUG slick.compiler.QueryCompilerBenchmark - codeGen: 57,406605 ms
2016-09-06 16:08:41:704 -0300 [run-main-0] DEBUG slick.compiler.QueryCompilerBenchmark - TOTAL: 116,847676 ms
2016-09-06 16:08:41:709 -0300 [run-main-0] DEBUG slick.backend.DatabaseComponent.action - #1: SingleInsertAction [insert into `email_message` (`email`,`subject`,`body`,`datain`,`email_id`) values (?,?,?,?,?)]
HIT 2
EmailMessage(None,fernando@localhost,Me,teste daqui para ali rapido.,2016-09-06 16:08:41.099,1)
2016-09-06 16:08:41:746 -0300 [AsyncExecutor.default-1] DEBUG slick.jdbc.JdbcBackend.statement - Preparing statement: insert into `email_message` (`email`,`subject`,`body`,`datain`,`email_id`) values (?,?,?,?,?)
[success] Total time: 18 s, completed 06/09/2016 16:08:41
The value does not get inserted into the database. Why?
The most probable cause is that the statement 'db.insert(m)' is asynchronous (db.run returns a Future), and your program finishes before the Future completes; try to wait for the Future to end before exiting.
You can try something like this:
val result = db.insert(m)
Await.result(result, Duration.Inf)
...
I had a similar problem before which you can see here: How to configure Slick 3.1.1 for PostgreSQL? It seems to ignore my config parameters while running plain sql queries
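A minimal sketch of that idea, assuming the Database class from the question: make insert return the Future produced by db.run, then block on it (here with Await; in a long-running application you would rather compose the Future) before the program exits:
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
// in class Database: return the Future instead of discarding it
def insert(m: EmailMessage): Future[Int] =
db.run(emrepo += m)
// in the calling code
def toDatabase(m: EmailMessage): EmailMessage = {
val db = new Database()
Await.result(db.insert(m), Duration.Inf) // blocks until the row is written
m
}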

How to provision a datastore in CKAN for connecting Cygnus in column persistence mode?

I am having the same problem as here and am trying to solve it, but I do not know how to properly format the datastore so Cygnus will not throw the persistence error.
My Orion subscription is this one:
(curl localhost:1026/v1/subscribeContext -s -S --header 'Content-Type: application/json' \
--header 'Accept: application/json' -d @- | python -mjson.tool) <<EOF
{
"entities": [
{
"type": "Event",
"isPattern": "false",
"id": "es-leon-0"
},
{
"type": "Event",
"isPattern": "false",
"id": "es-leon-1"
}
],
"attributes": [
"IdEvent", "IdUser", "Title"
],
"reference": "http://localhost:5050/notify",
"duration": "P1M",
"notifyConditions": [
{
"type": "ONCHANGE",
"condValues": [ ]
}
],
"throttling": "PT5S"
}
EOF
My Cygnus config:
cygnusagent.sources = http-source
cygnusagent.sinks = ckan-sink
cygnusagent.channels = ckan-channel
cygnusagent.sources.http-source.channels = ckan-channel
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnusagent.sources.http-source.port = 5050
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
cygnusagent.sources.http-source.handler.notification_target = /notify
cygnusagent.sources.http-source.handler.default_service = Papel
cygnusagent.sources.http-source.handler.default_service_path = Test
cygnusagent.sources.http-source.handler.events_ttl = 5
cygnusagent.sources.http-source.interceptors = ts gi
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /Applications/apache-flume-1.4.0-bin/conf/grouping_rules.conf
cygnusagent.channels.ckan-channel.type = memory
cygnusagent.channels.ckan-channel.capacity = 1000
cygnusagent.channels.ckan-channel.transactionCapacity = 100
# ============================================
# OrionCKANSink configuration
# channel name from where to read notification events
cygnusagent.sinks.ckan-sink.channel = ckan-channel
# sink class, must not be changed
cygnusagent.sinks.ckan-sink.type = com.telefonica.iot.cygnus.sinks.OrionCKANSink
# true if the grouping feature is enabled for this sink, false otherwise
cygnusagent.sinks.ckan-sink.enable_grouping = false
# true if lower case is wanted to forced in all the element names, false otherwise
cygnusagent.sinks.ckan-sink.enable_lowercase = false
# the CKAN API key to use
cygnusagent.sinks.ckan-sink.api_key = xxxxx
# the FQDN/IP address for the CKAN API endpoint
cygnusagent.sinks.ckan-sink.ckan_host = ckan-demo.ckan.io
# the port for the CKAN API endpoint
cygnusagent.sinks.ckan-sink.ckan_port = 80
# Orion URL used to compose the resource URL with the convenience operation URL to query it
cygnusagent.sinks.ckan-sink.orion_url = http://localhost:1026
# how the attributes are stored, either per row either per column (row, column)
cygnusagent.sinks.ckan-sink.attr_persistence = column
# enable SSL for secure Http transportation; 'true' or 'false'
cygnusagent.sinks.ckan-sink.ssl = false
# number of notifications to be included within a processing batch
cygnusagent.sinks.ckan-sink.batch_size = 100
# timeout for batch accumulation
cygnusagent.sinks.ckan-sink.batch_timeout = 60
# number of retries upon persistence error
cygnusagent.sinks.ckan-sink.batch_ttl = 10
Cygnus is receiving it correctly but then shows the following error:
time=2016-04-21T07:44:57.504CDT | lvl=INFO | trans=1461242686-614-0000000001 | srv=Papel | subsrv=Test | function=getEvents | comp=Cygnus | msg=com.telefonica.iot.cygnus.handlers.OrionRestHandler[231] : Starting transaction (1461242686-614-0000000001)
time=2016-04-21T07:44:57.528CDT | lvl=INFO | trans=1461242686-614-0000000001 | srv=Papel | subsrv=Test | function=getEvents | comp=Cygnus | msg=com.telefonica.iot.cygnus.handlers.OrionRestHandler[258] : Received data ({ "subscriptionId" : "571897360e94f9fa53829885", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "type" : "Event", "isPattern" : "false", "id" : "es-leon-0", "attributes" : [ { "name" : "IdEvent", "type" : "text", "value" : "1084" }, { "name" : "IdUser", "type" : "text", "value" : "18" }, { "name" : "Title", "type" : "text", "value" : "Papes" } ] }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
time=2016-04-21T07:44:57.528CDT | lvl=INFO | trans=1461242686-614-0000000001 | srv=Papel | subsrv=Test | function=getEvents | comp=Cygnus | msg=com.telefonica.iot.cygnus.handlers.OrionRestHandler[280] : Event put in the channel, id=2024732986
time=2016-04-21T07:45:50.771CDT | lvl=INFO | trans=1461242686-614-0000000001 | srv=Papel | subsrv=Test | function=persistAggregation | comp=Cygnus | msg=com.telefonica.iot.cygnus.sinks.OrionCKANSink[417] : [ckan-sink] Persisting data at OrionCKANSink (orgName=papel, pkgName=papel_test, resName=es-leon-0_event, data={"recvTime": "2016-04-21T12:44:57.497Z","fiwareServicePath": "Test","entityId": "es-leon-0","entityType": "Event","Title": "Papes"},{"recvTime": "2016-04-21T12:44:57.528Z","fiwareServicePath": "Test","entityId": "es-leon-0","entityType": "Event","IdEvent": "1084","IdUser": "18","Title": "Papes"})
time=2016-04-21T07:45:51.875CDT | lvl=ERROR | trans=1461242686-614-0000000001 | srv=Papel | subsrv=Test | function=processNewBatches | comp=Cygnus | msg=com.telefonica.iot.cygnus.sinks.OrionSink[426] : Runtime error (Cannot persist the data (orgName=papel, pkgName=papel_test, resName=es-leon-0_event))
As said here, I created the corresponding datastore: http://ckan-demo.ckan.io/dataset/papel-test/resource/8d7cb489-878e-465e-8c8c-60ea537411e0
But I don't know how to format it, or whether CSV is the correct format.
Thanks
*Note: I tried row mode and everything works, but it's not what I want.
**Note: I also found an error in the previewer software, which changed the title of my column "Title" to the title of the page, "CKAN Demo".
EDITED:
I have done what is said in the documentation:
Column: A single row is upserted for all the notified context attributes. This kind of row will contain two fields per each entity's attribute (one for the value, called <attrName>, and other for the metadata, called <attrName>_md), plus four additional fields:
recvTime: UTC timestamp in human-readable format (ISO 8601).
fiwareServicePath: The notified one or the default one.
entityId: Notified entity identifier.
entityType: Notified entity type.
But I still have the same error:
time=2016-04-25T05:17:48.790CDT | lvl=ERROR | trans=1461579403-571-0000000000 | srv=Papel | subsrv=Test | function=processNewBatches | comp=Cygnus | msg=com.telefonica.iot.cygnus.sinks.OrionSink[426] : Runtime error (Cannot persist the data (orgName=papel, pkgName=papel_test, resName=es-leon-0_event))
First of all, you'll need a CKAN organization and package/dataset before creating a resource and an associated datastore in order to persist the data.
Creating an organization, let's say in demo.ckan.org; the organization name is frb, because our entity will be in that FIWARE service:
$ curl -X POST "http://demo.ckan.org/api/3/action/organization_create" -d '{"name":"frb"}' -H "Authorization: xxxxxxxx"
Creating a package/dataset within the above organization; the package name is frb_test, because our entity will be in the FIWARE service frb and in the FIWARE service path test:
$ curl -X POST "http://demo.ckan.org/api/3/action/package_create" -d '{"name":"frb_test","owner_org":"frb"}' -H "Authorization: xxxxxxxx"
Creating a resource within the above package/dataset (the package ID is given in the response to the above package creation request); the name of the resource is room1_room because the entity ID will be room1 and its type room:
$ curl -X POST "http://demo.ckan.org/api/3/action/resource_create" -d '{"name":"room1_room","url":"none","format":"","package_id":"d35fca28-732f-4096-8376-944563f175ba"}' -H "Authorization: xxxxxxxx"
Finally, and answering your question, creating a datastore associated with the above resource and suitable for receiving Cygnus data in column mode (the resource ID is given in the response to the above resource creation request):
$ curl -X POST "http://demo.ckan.org/api/3/action/datastore_create" -d '{"fields":[{"id":"recvTime","type":"text"}, {"id":"fiwareServicePath","type":"text"}, {"id":"entityId","type":"text"}, {"id":"entityType","type":"text"}, {"id":"temperature","type":"float"}, {"id":"temperature_md","type":"json"}],"resource_id":"48c120df-5bcd-48c7-81fa-8ecf4e4ef9d7","force":"true"}' -H "Authorization: xxxxxxxx"
Now, Cygnus is able to persist data for an entity with ID room1 of type room in the frb service, test service path:
time=2016-04-26T15:54:45.753CEST | lvl=INFO | corr=b465ffb8-710f-4cd3-9573-dc3799f774f9 | trans=b465ffb8-710f-4cd3-9573-dc3799f774f9 | svc=frb | subsvc=/test | function=getEvents | comp=cygnusagent | msg=com.telefonica.iot.cygnus.handlers.NGSIRestHandler[240] : Starting internal transaction (b465ffb8-710f-4cd3-9573-dc3799f774f9)
time=2016-04-26T15:54:45.754CEST | lvl=INFO | corr=b465ffb8-710f-4cd3-9573-dc3799f774f9 | trans=b465ffb8-710f-4cd3-9573-dc3799f774f9 | svc=frb | subsvc=/test | function=getEvents | comp=cygnusagent | msg=com.telefonica.iot.cygnus.handlers.NGSIRestHandler[256] : Received data ({ "subscriptionId" : "51c0ac9ed714fb3b37d7d5a8", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "attributes" : [ { "name" : "temperature", "type" : "centigrade", "value" : "26.5" } ], "type" : "room", "isPattern" : "false", "id" : "room1" }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
time=2016-04-26T15:55:07.843CEST | lvl=INFO | corr=b465ffb8-710f-4cd3-9573-dc3799f774f9 | trans=b465ffb8-710f-4cd3-9573-dc3799f774f9 | svc=frb | subsvc=/test | function=processNewBatches | comp=cygnusagent | msg=com.telefonica.iot.cygnus.sinks.NGSISink[342] : Batch accumulation time reached, the batch will be processed as it is
time=2016-04-26T15:55:07.844CEST | lvl=INFO | corr=b465ffb8-710f-4cd3-9573-dc3799f774f9 | trans=b465ffb8-710f-4cd3-9573-dc3799f774f9 | svc=frb | subsvc=/test | function=processNewBatches | comp=cygnusagent | msg=com.telefonica.iot.cygnus.sinks.NGSISink[396] : Batch completed, persisting it
time=2016-04-26T15:55:07.846CEST | lvl=INFO | corr=b465ffb8-710f-4cd3-9573-dc3799f774f9 | trans=b465ffb8-710f-4cd3-9573-dc3799f774f9 | svc=frb | subsvc=/test | function=persistAggregation | comp=cygnusagent | msg=com.telefonica.iot.cygnus.sinks.NGSICKANSink[419] : [ckan-sink] Persisting data at OrionCKANSink (orgName=frb, pkgName=frb_test, resName=room1_room, data={"recvTime": "2016-04-26T13:54:45.756Z","fiwareServicePath": "/test","entityId": "room1","entityType": "room","temperature": "26.5"})
time=2016-04-26T15:55:08.948CEST | lvl=INFO | corr=b465ffb8-710f-4cd3-9573-dc3799f774f9 | trans=b465ffb8-710f-4cd3-9573-dc3799f774f9 | svc=frb | subsvc=/test | function=processNewBatches | comp=cygnusagent | msg=com.telefonica.iot.cygnus.sinks.NGSISink[400] : Finishing internal transaction (b465ffb8-710f-4cd3-9573-dc3799f774f9)
The insertion can be checked through the CKAN API as well:
$ curl -X POST "http://demo.ckan.org/api/3/action/datastore_search" -d '{"resource_id":"48c120df-5bcd-48c7-81fa-8ecf4e4ef9d7"}' -H "Authorization: xxxxxxxx"
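Applied to this question's entity (attributes IdEvent, IdUser and Title, with the resource ID taken from the resource URL linked in the question), the datastore_create request would presumably look like this; the text/json field types are an assumption based on the notified attribute types and the answer's pattern above:
$ curl -X POST "http://ckan-demo.ckan.io/api/3/action/datastore_create" -d '{"fields":[{"id":"recvTime","type":"text"}, {"id":"fiwareServicePath","type":"text"}, {"id":"entityId","type":"text"}, {"id":"entityType","type":"text"}, {"id":"IdEvent","type":"text"}, {"id":"IdEvent_md","type":"json"}, {"id":"IdUser","type":"text"}, {"id":"IdUser_md","type":"json"}, {"id":"Title","type":"text"}, {"id":"Title_md","type":"json"}],"resource_id":"8d7cb489-878e-465e-8c8c-60ea537411e0","force":"true"}' -H "Authorization: xxxxx"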

Warning on Cygnus prevents data persistence on Cosmos

This is a Cygnus agent log:
16 Sep 2015 12:30:19,820 INFO [521330370#qtp-1739580287-1] (com.telefonica.iot.cygnus.handlers.OrionRestHandler.getEvents:236) - Received data (<notifyContextRequest><subscriptionId>55f932e6c06c4173451bbe1c</subscriptionId><originator>localhost</originator>...<contextAttribute><name>utctime</name><type>string</type><contextValue>2015-9-16 9:37:52</contextValue></contextAttribute></contextAttributeList></contextElement><statusCode><code>200</code><reasonPhrase>OK</reasonPhrase></statusCode></contextElementResponse></contextResponseList></notifyContextRequest>)
16 Sep 2015 12:30:19,820 INFO [521330370#qtp-1739580287-1] (com.telefonica.iot.cygnus.handlers.OrionRestHandler.getEvents:258) - Event put in the channel (id=1145647744, ttl=0)
16 Sep 2015 12:30:19,820 WARN [SinkRunner-PollingRunner-DefaultSinkProcessor] (com.telefonica.iot.cygnus.sinks.OrionSink.process:184) -
16 Sep 2015 12:30:19,820 INFO [SinkRunner-PollingRunner-DefaultSinkProcessor] (com.telefonica.iot.cygnus.sinks.OrionSink.process:193) - Finishing transaction (1442395508-572-0000013907)
We keep the same configuration as in this question:
Fiware Cygnus Error.
Although the Cygnus agent receives data correctly from the Context Broker subscription, Cosmos doesn't receive any data.
Thanks in advance, again :)
Independently of the reason that led you to comment out the grouping rules part (nevertheless, I think it was because of my own wrong advice at https://jira.fiware.org/browse/HELC-986 :)), that part cannot be commented out and must be added to the configuration:
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# GroupingInterceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
Once that part is added, most probably another problem will arise: the performance of Cygnus will be very poor (that was the reason I wrongly advised the user at https://jira.fiware.org/browse/HELC-986 to comment out the grouping feature, in an attempt to increase performance by removing processing steps). The reason is that the latest version of Cygnus (0.8.2) is not ready to deal with the HiveServer2 running on the Cosmos side (this server was recently upgraded from the old HiveServer1 to HiveServer2), and each persistence operation is delayed a lot. For instance:
time=2015-09-21T12:42:30.405CEST | lvl=INFO | trans=1442832138-143-0000000000 | function=getEvents | comp=Cygnus | msg=com.telefonica.iot.cygnus.handlers.OrionRestHandler[150] : Starting transaction (1442832138-143-0000000000)
time=2015-09-21T12:42:30.407CEST | lvl=INFO | trans=1442832138-143-0000000000 | function=getEvents | comp=Cygnus | msg=com.telefonica.iot.cygnus.handlers.OrionRestHandler[236] : Received data ({ "subscriptionId" : "51c0ac9ed714fb3b37d7d5a8", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "attributes" : [ { "name" : "temperature", "type" : "centigrade", "value" : "26.5" } ], "type" : "Room", "isPattern" : "false", "id" : "Room1" }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
time=2015-09-21T12:42:30.409CEST | lvl=INFO | trans=1442832138-143-0000000000 | function=getEvents | comp=Cygnus | msg=com.telefonica.iot.cygnus.handlers.OrionRestHandler[258] : Event put in the channel (id=1966649489, ttl=10)
time=2015-09-21T12:42:30.462CEST | lvl=INFO | trans=1442832138-143-0000000000 | function=process | comp=Cygnus | msg=com.telefonica.iot.cygnus.sinks.OrionSink[128] : Event got from the channel (id=1966649489, headers={fiware-servicepath=rooms, destination=other_rooms, content-type=application/json, fiware-service=deleteme2, ttl=10, transactionId=1442832138-143-0000000000, timestamp=1442832150410}, bodyLength=460)
time=2015-09-21T12:42:30.847CEST | lvl=INFO | trans=1442832138-143-0000000000 | function=persist | comp=Cygnus | msg=com.telefonica.iot.cygnus.sinks.OrionHDFSSink[330] : [hdfs-sink] Persisting data at OrionHDFSSink. HDFS file (deleteme2/rooms/other_rooms/other_rooms.txt), Data ({"recvTimeTs":"1442832150","recvTime":"2015-09-21T10:42:30.410Z","entityId":"Room1","entityType":"Room","attrName":"temperature","attrType":"centigrade","attrValue":"26.5","attrMd":[]})
time=2015-09-21T12:42:31.529CEST | lvl=INFO | trans=1442832138-143-0000000000 | function=provisionHiveTable | comp=Cygnus | msg=com.telefonica.iot.cygnus.backends.hdfs.HDFSBackendImpl[185] : Creating Hive external table=frb_deleteme2_rooms_other_rooms_row
(a big timeout)
A workaround is to configure an unreachable address, such as fake.cosmos.lab.fiware.org, as hive_host:
time=2015-09-21T12:44:58.278CEST | lvl=INFO | trans=1442832280-746-0000000001 | function=getEvents | comp=Cygnus | msg=com.telefonica.iot.cygnus.handlers.OrionRestHandler[150] : Starting transaction (1442832280-746-0000000001)
time=2015-09-21T12:44:58.280CEST | lvl=INFO | trans=1442832280-746-0000000001 | function=getEvents | comp=Cygnus | msg=com.telefonica.iot.cygnus.handlers.OrionRestHandler[236] : Received data ({ "subscriptionId" : "51c0ac9ed714fb3b37d7d5a8", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "attributes" : [ { "name" : "temperature", "type" : "centigrade", "value" : "26.5" } ], "type" : "Room", "isPattern" : "false", "id" : "Room1" }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
time=2015-09-21T12:44:58.280CEST | lvl=INFO | trans=1442832280-746-0000000001 | function=getEvents | comp=Cygnus | msg=com.telefonica.iot.cygnus.handlers.OrionRestHandler[258] : Event put in the channel (id=1640732993, ttl=10)
time=2015-09-21T12:44:58.283CEST | lvl=INFO | trans=1442832280-746-0000000001 | function=process | comp=Cygnus | msg=com.telefonica.iot.cygnus.sinks.OrionSink[128] : Event got from the channel (id=1640732993, headers={fiware-servicepath=rooms, destination=other_rooms, content-type=application/json, fiware-service=deleteme3, ttl=10, transactionId=1442832280-746-0000000001, timestamp=1442832298280}, bodyLength=460)
time=2015-09-21T12:44:58.527CEST | lvl=INFO | trans=1442832280-746-0000000001 | function=persist | comp=Cygnus | msg=com.telefonica.iot.cygnus.sinks.OrionHDFSSink[330] : [hdfs-sink] Persisting data at OrionHDFSSink. HDFS file (deleteme3/rooms/other_rooms/other_rooms.txt), Data ({"recvTimeTs":"1442832298","recvTime":"2015-09-21T10:44:58.280Z","entityId":"Room1","entityType":"Room","attrName":"temperature","attrType":"centigrade","attrValue":"26.5","attrMd":[]})
time=2015-09-21T12:44:59.148CEST | lvl=INFO | trans=1442832280-746-0000000001 | function=provisionHiveTable | comp=Cygnus | msg=com.telefonica.iot.cygnus.backends.hdfs.HDFSBackendImpl[185] : Creating Hive external table=frb_deleteme3_rooms_other_rooms_row
time=2015-09-21T12:44:59.304CEST | lvl=ERROR | trans=1442832280-746-0000000001 | function=doCreateTable | comp=Cygnus | msg=com.telefonica.iot.cygnus.backends.hive.HiveBackend[77] : Runtime error (The Hive table cannot be created. Hive query='create external table frb_deleteme3_rooms_other_rooms_row (recvTimeTs bigint, recvTime string, entityId string, entityType string, attrName string, attrType string, attrValue string, attrMd array<string>) row format serde 'org.openx.data.jsonserde.JsonSerDe' location '/user/frb/deleteme3/rooms/other_rooms''. Details=Could not establish connection to fake.cosmos.lab.fiware.org:10000/default?user=frb&password=llBl3dQsMhX2sEPtPuf3izUGS92RZo: java.net.UnknownHostException: fake.cosmos.lab.fiware.org)
time=2015-09-21T12:44:59.305CEST | lvl=WARN | trans=1442832280-746-0000000001 | function=provisionHiveTable | comp=Cygnus | msg=com.telefonica.iot.cygnus.backends.hdfs.HDFSBackendImpl[210] : The HiveQL external table could not be created, but Cygnus can continue working... Check your Hive/Shark installation
time=2015-09-21T12:44:59.305CEST | lvl=INFO | trans=1442832280-746-0000000001 | function=process | comp=Cygnus | msg=com.telefonica.iot.cygnus.sinks.OrionSink[193] : Finishing transaction (1442832280-746-0000000001)
This will allow Cygnus to continue, even though the Hive tables are not automatically created, which is a minor problem (they would never have been created anyway, because of the current incompatibility with HiveServer2). Of course, this will be fixed in Cygnus 0.9.0 (to be released at the end of September 2015).
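For reference, the workaround in the agent configuration would look roughly like this (a sketch; hive_host and hive_port as the OrionHDFSSink parameters are named in the 0.8.x configuration template):
# point Hive provisioning at a non-existent host so that table creation
# fails fast instead of hanging against HiveServer2
cygnusagent.sinks.hdfs-sink.hive_host = fake.cosmos.lab.fiware.org
cygnusagent.sinks.hdfs-sink.hive_port = 10000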