How to make an array non-zero-based in Rails - MySQL

I'm trying to create a d3.js graph from a Rails database. This takes the following JSON:
{
  "nodes": [
    {
      "name": "Sebo",
      "group": 4,
      "id": 1
    },
    {
      "name": "Pierre",
      "group": 5,
      "id": 2
    },
    {
      "name": "Bilbo",
      "group": 2,
      "id": 3
    },
    {
      "name": "yyyyyyyy",
      "group": 2,
      "id": 4
    }
  ],
  "links": [
    {
      "source": 3,
      "target": 2,
      "value": null
    },
    {
      "source": 3,
      "target": 1,
      "value": null
    },
    {
      "source": 4,
      "target": 2,
      "value": null
    },
    {
      "source": 4,
      "target": 1,
      "value": null
    }
  ]
}
I have created a button that allows a current user to follow another user. This then gets stored in a database and eventually the graph can be re-visualised.
The problem is that the request to update the database is based on the current user's id (from the database). That indexing is one-based, so the first user has id 1. The JSON links, however, use zero-based indexing. This means that if user_id=1 connects to user_id=4, then when the graph is rendered again the connection is attributed to the user with id 2. What would be great is if I could make the user_id index start at zero so that the array and the database agree. Is this the correct way to think about this? Can I force the indexing of the users table to start at zero, e.g. in a Rails schema?
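An alternative to changing the database ids is to translate ids into zero-based array positions when serializing. A minimal sketch, assuming a User model and a Follow join model with follower_id and followed_id columns (these names are assumptions):

users = User.order(:id).to_a
# map each database id to its zero-based position in the nodes array
position = users.each_with_index.map { |user, i| [user.id, i] }.to_h

nodes = users.map { |u| { name: u.name, group: u.group, id: u.id } }
# links reference node positions, not database ids
links = Follow.all.map do |f|
  { source: position[f.follower_id], target: position[f.followed_id], value: nil }
end

render json: { nodes: nodes, links: links }

This keeps the database ids untouched and confines the zero-based convention to the serialized JSON.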

Related

Secomea DCM 3529 multiple triggers or two values in the same reading

I am currently having trouble with the Secomea Data Collection Module; I was wondering if anyone here might be able to enlighten me.
I am collecting sensor data from the Secomea 3529 through a portal called Secomea Sitemanager. I can't seem to find any information about my two questions below; I hope someone knows the answer.
Information about the protocol used in this project:
"Protocol": "S7/TCP",
"S7Access": {
  "S7Model": "S7-200",
  "S7Rack": 0,
  "S7Slot": 1
}
Data collection is configured using JSON, as seen below.
I was wondering if it is possible to somehow have more than one TriggerSample and, if so, how it is set up?
{
  "SampleName": "Sensor1",
  "SampleDescription": "Some Description",
  "SampleDataType": "bool",
  "SamplesSaved": 3600,
  "Aggregation": {
    "Function": "compute",
    "Expression": "Sensor2,1,/",
    "TriggerSample": "Sensor3"
  }
},
My other question, is it possible to have more than one S7Var?
{
  "SampleName": "ModeCheck",
  "SampleDescription": "Mode status",
  "SampleDataType": "int16",
  "SamplesSaved": 360,
  "S7Var": {
    "S7PLCVar": "LocationInMachineDB1",
    "S7SampleInterval": 5
  }
},

How to merge two collections keeping the document with highest timestamp in MongoDB

I'm creating a MongoDB client for a Go application, using the MongoDB Go Driver. In particular, I have two databases with one collection each. These collections can be modified asynchronously by different clients, so I need to periodically synchronize them, keeping the most recently edited document among those with the same id field.
The two databases are stored on different hosts, so I need to export the collection from one host using mongoexport and import it into the other host using mongoimport.
I already tried mongoimport --collection=myColl --mode=merge, but this doesn't fit my goal because it simply overwrites the conflicting documents in myColl with the imported ones.
My idea is to import the JSON into a temp collection, but I don't know how to compare the timestamps during the aggregation/merge process.
My collections are structured like this; any ideas?
Collection 1
{"_id":"K1","value":"VAL1","timest":{"$date":"2021-09-26T09:05:09.942Z"}}
{"_id":"K2","value":"VAL2","timest":{"$date":"2021-09-26T09:05:10.234Z"}}
Collection 2
{"_id":"K2","value":"VAL3","timest":{"$date":"2021-09-26T09:15:09.942Z"}}
{"_id":"K3","value":"VAL4","timest":{"$date":"2021-09-26T09:15:10.234Z"}}
Desired Behaviour
Conflict
{"_id":"K2","value":"VAL2","timest":{"$date":"2021-09-26T09:05:10.234Z"}}
{"_id":"K2","value":"VAL3","timest":{"$date":"2021-09-26T09:15:09.942Z"}}[LATEST]
Output
{"_id":"K1","value":"VAL1","timest":{"$date":"2021-09-26T09:05:09.942Z"}}
{"_id":"K2","value":"VAL3","timest":{"$date":"2021-09-26T09:15:09.942Z"}}
{"_id":"K3","value":"VAL4","timest":{"$date":"2021-09-26T09:15:10.234Z"}}
You can use $merge.
The pipeline below merges testdb1.coll into testdb2.coll based on the same _id, and keeps the document with the latest date. If the _id is not found, the document is inserted.
Data in
testdb1.coll
[{"_id": "K2", "value": "VAL3", "timest": {"$date": "2021-09-26T09:15:09.942Z"}},
 {"_id": "K3", "value": "VAL4", "timest": {"$date": "2021-09-26T09:15:10.234Z"}}]
testdb2.coll
[{"_id": "K1", "value": "VAL1", "timest": {"$date": "2021-09-26T09:05:09.942Z"}},
 {"_id": "K2", "value": "VAL2", "timest": {"$date": "2021-09-26T09:05:10.234Z"}}]
Results
testdb2.coll (after the merge)
{"_id": "K1", "value": "VAL1", "timest": {"$toDate": "2021-09-26T09:05:09.942Z"}}
{"_id": "K2", "value": "VAL3", "timest": {"$toDate": "2021-09-26T09:15:09.942Z"}}
{"_id": "K3", "value": "VAL4", "timest": {"$toDate": "2021-09-26T09:15:10.234Z"}}
Query
(instead of $let you could use $$new)
client.db("testdb1").collection("coll").aggregate(
[
{
"$merge": {
"into": {
"db": "testdb2",
"coll": "coll"
},
"on": [
"_id"
],
"let": {
"p_ROOT": "$$ROOT"
},
"whenMatched": [
{
"$replaceRoot": {
"newRoot": {
"$cond": [
{
"$gt": [
"$$p_ROOT.timest",
"$timest"
]
},
"$$p_ROOT",
"$$ROOT"
]
}
}
}
],
"whenNotMatched": "insert"
}
}
])
You can do the following in an aggregation pipeline (a sketch follows below):
use $unionWith to combine the 2 collections
use $sort to order them by timest
use $group with $first to keep the latest document per _id
use $replaceRoot to get the final form you want
Here is the Mongo playground for your reference.
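A minimal sketch of that pipeline in the mongo shell, run against the first collection (the collection names coll1 and coll2 are assumptions):

db.coll1.aggregate([
  // pull in the second collection alongside the first
  { $unionWith: { coll: "coll2" } },
  // newest documents first
  { $sort: { timest: -1 } },
  // keep only the latest document per _id
  { $group: { _id: "$_id", latest: { $first: "$$ROOT" } } },
  // restore the original document shape
  { $replaceRoot: { newRoot: "$latest" } }
])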

Extracting multiple associative objects from JSON type in MySQL

Trying to figure out the best way to query a MySQL table containing a JSON column.
I am successfully able to get product OR port.
SELECT ip, JSON_EXTRACT(json_data, '$.data[*].product' ) FROM `network`
This will return:
["ftp","ssh"]
What I'm looking to get is something like this, or some other way to represent the association and handle null values (see the sketch after the sample JSON):
[["ftp",21],["ssh",22],[NULL,23]]
Sample JSON
{
  "key1": "Value",
  "key2": "Value",
  "key3": "Value",
  "data": [
    {
      "product": "ftp",
      "port": "21"
    },
    {
      "product": "ssh",
      "port": "22"
    },
    {
      "port": "23"
    }
  ]
}
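One way to keep each product associated with its port, with NULL where a key is missing, is JSON_TABLE. A sketch assuming MySQL 8.0+ and the table and column names from the query above:

SELECT n.ip, jt.product, jt.port
FROM `network` AS n,
     JSON_TABLE(
       n.json_data,
       '$.data[*]' COLUMNS (
         product VARCHAR(32) PATH '$.product', -- NULL when "product" is absent
         port    VARCHAR(8)  PATH '$.port'
       )
     ) AS jt;

Each array element becomes its own row, so the product/port association survives and missing keys surface as NULL.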

MySQL JSON_EXTRACT wildcard field name matching

I have the following JSON data in a MySQL JSON field:
{
  "Session0": [
    {
      "r_type": "main",
      "r_flag": "other"
    },
    {
      "r_type": "sub",
      "r_flag": "kl"
    }
  ],
  "Session1": [
    {
      "r_type": "up",
      "r_flag": "p2"
    },
    {
      "r_type": "id",
      "r_flag": "mb"
    }
  ],
  "Session2": [
    {
      "r_type": "main",
      "r_flag": "p2"
    },
    {
      "r_type": "id",
      "r_flag": "mb"
    }
  ]
}
Now, I wish to search ALL sessions where r_type = "main". The session number can vary, so I cannot use an OR query. I need something like:
JSON_EXTRACT(field, "$.Session**[*].r_type") = "main"
But this does not seem to work. I need to be able to use a wildcard in the property's name and then search an array for a property inside it. How do I do that?
The following works, but it limits us to a fixed, known set of session numbers:
SELECT field->"$.Session1[*].r_type" from table

How does simulating joins work in Couchbase?

I have two documents where one depends on the other. The first:
{
  "doctype": "closed_auctions",
  "seller": {
    "person": "person11304"
  },
  "buyer": {
    "person": "person0"
  },
  "itemref": {
    "item": "item1"
  },
  "price": 50.03,
  "date": "11/17/2001",
  "quantity": 1,
  "type": "Featured",
  "annotation": {
    "author": {
      "person": "person8597"
    }
  }
}
Here you can see that doc.buyer.person refers to another document, like this:
{
  "doctype": "people",
  "id": "person0",
  "name": "Kasidit Treweek",
  "profile": {
    "income": 20186.59,
    "interest": [
      {
        "category": "category251"
      }
    ],
    "education": "Graduate School",
    "business": "No"
  },
  "watch": [
    {
      "open_auction": "open_auction8747"
    }
  ]
}
How can I get the buyer's name from these two documents? I mean that doc.buyer.person is connected to the second document's id. This is a join, and the documentation doesn't make it clear how to do one: http://docs.couchbase.com/couchbase-manual-2.0/#solutions-for-simulating-joins
Well, first off, let me point out that the very first sentence of the documentation section that you referenced says (I added the emphasis):
Joins between data, even when the documents being examined are contained within the same bucket, are not possible directly within the view system.
So, the quick answer to your question is that you have lots of options. Here are a few of them:
1. Assume you need only the name, for a rather small subset of people. Create a view that outputs the person id as key and the name as value, then query the view for a specific name each time you need it (see the sketch below).
2. Assume you need many people joined to many auctions. Download the full contents of the basic index from #1 and execute the join using LINQ.
3. Assume you need many properties of the person, not just the name. Download the Person document for each auction item.
4. Assume you need a small subset from both Auction and People. Index the fields from each that you need, include a type field, and emit all of them under the key of the person. You will then be able to query the view for all items belonging to the person.
The last approach was used in the example you linked to in your question. For performance, it will be necessary to tailor the approach to your usage scenario.
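A minimal sketch of the view from option 1, keyed by person id with the name as the value (document shapes as in the question):

// map function: index people by id so a buyer's name is one key lookup away
function (doc, meta) {
  if (doc.doctype === "people") {
    emit(doc.id, doc.name);
  }
}

Querying this view with key="person0" would then return the buyer's name for the auction document above.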
Another solution consists of merging the data in a custom reduce function.
// view: emit people under their own id and auctions under the buyer's id
function (doc, meta) {
  if (doc.doctype === "people") {
    emit(doc.id, doc);
  }
  if (doc.doctype === "closed_auctions") {
    emit(doc.buyer.person, doc);
  }
}

// custom reduce: attach each person's auctions to the person document
function (keys, values, rereduce) {
  var peoples = values.filter(function (doc) {
    return doc.doctype === "people";
  });
  for (var key in peoples) {
    var people = peoples[key];
    people.closed_auctions = (function (peopleId) {
      return values.filter(function (doc) {
        return doc.doctype === "closed_auctions" && doc.buyer.person === peopleId;
      });
    })(people.id);
  }
  return peoples;
}
And then you can query one user with "key" or multiple users with "keys".
That said, I don't know what the performance implications of this method are.