Hard to admit, but I struggle with managing state in my CLI Go app. What I basically want is to manage a list of objects with their properties in a file on disk. I want to be able to add objects with their properties, update objects and/or their properties, and remove them if needed.
I thought it would be easy to just have a YAML or JSON file and edit it with some kind of library, but it seems harder than it should be for a Go beginner like me.
An example would be the following JSON structure:
{
  "servers": [
    { "hostname": "gandalf", "ip": "192.168.1.10", "color": "red" },
    { "hostname": "bilbo", "ip": "192.168.1.11", "color": "blue" },
    { "hostname": "frodo", "ip": "192.168.1.12", "color": "yellow" }
  ]
}
Now I want to be able to add, remove, and edit servers. It doesn't have to be JSON; YAML is fine as well.
Do you girls and guys have any suggestions (libs and an example) on how to do it? I already tried Viper but adding new objects or even editing the existing ones seems to be impossible.
For settings that need to be human-readable and will primarily be edited by a human, a YAML or JSON file seems fine.
If the state is both written and read by the program itself, and a full-fledged database seems overkill, then I would use a file-based database: probably a simple key/value store like BoltDB, or SQLite if the data needs more structure.
I personally use BoltDB in such a case, because it is very lightweight and fast, and I like its simplicity.
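To make that concrete, here is a minimal sketch of what adding/updating a server could look like with the maintained BoltDB fork, go.etcd.io/bbolt. The file name, bucket name, and struct are just made up to match the question's example:

package main

import (
	"encoding/json"
	"log"

	bolt "go.etcd.io/bbolt"
)

// Server matches the question's example objects.
type Server struct {
	Hostname string `json:"hostname"`
	IP       string `json:"ip"`
	Color    string `json:"color"`
}

func main() {
	// Open (or create) the database file.
	db, err := bolt.Open("state.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Add or update one server, keyed by hostname.
	srv := Server{Hostname: "gandalf", IP: "192.168.1.10", Color: "red"}
	err = db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("servers"))
		if err != nil {
			return err
		}
		v, err := json.Marshal(srv)
		if err != nil {
			return err
		}
		return b.Put([]byte(srv.Hostname), v)
	})
	if err != nil {
		log.Fatal(err)
	}
	// Removing is b.Delete([]byte("gandalf")) inside another Update.
}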
-- edit --
With JSON as the file format, the problem is that you need to read and write the entire file every single time. An edit means reading the whole file, unmarshalling the JSON, changing something in the unmarshalled object, marshalling it back to JSON, and writing the entire file again.
That's why I would only use this for settings that the program reads once on startup, and that's it.
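For comparison, that whole read-modify-write cycle fits in a few lines with nothing but the standard library. A sketch, assuming the question's structure lives in a file called servers.json (the added server is invented for the example):

package main

import (
	"encoding/json"
	"log"
	"os"
)

type Server struct {
	Hostname string `json:"hostname"`
	IP       string `json:"ip"`
	Color    string `json:"color"`
}

type State struct {
	Servers []Server `json:"servers"`
}

func main() {
	// Read and unmarshal the entire file.
	raw, err := os.ReadFile("servers.json")
	if err != nil {
		log.Fatal(err)
	}
	var st State
	if err := json.Unmarshal(raw, &st); err != nil {
		log.Fatal(err)
	}

	// Modify: add a server and recolor an existing one.
	st.Servers = append(st.Servers, Server{Hostname: "sam", IP: "192.168.1.13", Color: "green"})
	for i := range st.Servers {
		if st.Servers[i].Hostname == "bilbo" {
			st.Servers[i].Color = "green"
		}
	}

	// Marshal and write the entire file back.
	out, err := json.MarshalIndent(st, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("servers.json", out, 0644); err != nil {
		log.Fatal(err)
	}
}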
Related
My goal is simple: I want to know which schema describes a given JSON file, meaning I need to mark it somehow. Example:
{
  "Id": 100,
  "name": "example"
}
What is not clear from the example above is which schema definition describes it; in other words, what the required fields are and what other fields are available.
I'm looking not only for validation but for ease of working with this JSON file. For example, I'd like to use it as a configuration file, so I need to know which properties are available and which are mandatory.
I'm looking for something like this:
{
  "$schemaDefinition": "URL to the schema definition",
  "Id": 100,
  "name": "example"
}
What I don't know is the valid way of doing it. I read back and forth through json-schema.org, but it doesn't discuss this. However, I remember seeing this solution somewhere.
As a result, I hope, an editor can pick up the schema and provide IntelliSense. Or at least the poor person who is going to work with the JSON file (to configure something) knows where to look for the available and mandatory configuration properties.
I have found it:
{
  "$schema": "URL to the schema definition",
  "Id": 100,
  "name": "example"
}
If the schema file is stored on GitHub, you have to use the URL of the raw content.
JetBrains products can pick up the schema definition without any problem.
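For reference, the schema document that the URL points to could be as small as the following draft-07 sketch. The property types are taken from the example above, and treating only Id as required is an assumption:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["Id"],
  "properties": {
    "Id": { "type": "integer" },
    "name": { "type": "string" }
  }
}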
I've stumbled upon this problem, and after doing a bit of research, here I am.
I have a JSON file which I want to validate against a *JSON Schema. Now, most of the libraries developed for this purpose in Java, which are listed here, do a great job of validating the file and provide well-formatted reasons for failures. But there's no feature for removing the invalid records completely. I tried the everit-org package and couldn't find anything that addresses this requirement.
For example, if I have an array inside a JSON object, it's possible for one of the array elements to be invalid, and that is what I want to remove.
{ "key1" : "value1",
"key2" : [
{
"key3":"temp",
"key4":12
},
{ --> this object invalid for some reason hence should be removed
"key3":"temp",
"key4":"dsds"
},
{
"key3":"temp",
"key4":"anything"
}
]
}
Apart from, of course, manually removing the invalid records/objects from the file by tracking each object down with the help of the error messages produced on validation, is there an already-implemented solution for this?
*JSON Schema: basically a standard way of describing the format and structure of JSON data. There are a few versions of it, draft-06 and draft-07 for instance. Read more about it here.
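One workaround, in case no library supports this out of the box: validate each array element separately against the item schema and keep only the elements that pass. A sketch of the idea, written in Go with the github.com/xeipuuv/gojsonschema library rather than Java; the item schema below is hypothetical, so plug in whatever rule makes the middle element invalid:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"

	"github.com/xeipuuv/gojsonschema"
)

// Hypothetical schema for one element of "key2"; replace with the
// actual rule that makes an element invalid in your data.
const itemSchema = `{
	"type": "object",
	"required": ["key3", "key4"],
	"properties": {
		"key3": { "type": "string" }
	}
}`

func main() {
	raw, err := os.ReadFile("data.json")
	if err != nil {
		log.Fatal(err)
	}
	var doc map[string]interface{}
	if err := json.Unmarshal(raw, &doc); err != nil {
		log.Fatal(err)
	}

	schemaLoader := gojsonschema.NewStringLoader(itemSchema)
	items, _ := doc["key2"].([]interface{})
	valid := make([]interface{}, 0, len(items))
	for _, item := range items {
		result, err := gojsonschema.Validate(schemaLoader, gojsonschema.NewGoLoader(item))
		if err != nil {
			log.Fatal(err)
		}
		if result.Valid() {
			valid = append(valid, item) // keep only passing elements
		} else {
			fmt.Println("dropping invalid element:", result.Errors())
		}
	}
	doc["key2"] = valid // write the filtered array back

	out, _ := json.MarshalIndent(doc, "", "  ")
	os.WriteFile("cleaned.json", out, 0644)
}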
Is there known science for getting JSON data logged via CloudWatch imported into an Elasticsearch instance as well-structured JSON?
That is, I'm logging JSON data during the execution of an AWS Lambda function.
This data is available via Amazon's CloudWatch service.
I've been able to import this data into an Elasticsearch instance using Functionbeat, but the data comes in as an unstructured message.
"_source" : {
"#timestamp" : "xxx",
"owner" : "xxx",
"message_type" : "DATA_MESSAGE",
"cloud" : {
"provider" : "aws"
},
"message" : ""xxx xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx INFO {
foo: true,
duration_us: 19418,
bar: 'BAZ',
duration_ms: 19
}
""",
What I'm trying to do is get a document indexed into Elasticsearch that has a foo field, a duration_us field, a bar field, etc., instead of one that has a plain-text message field.
It seems like there are a few different ways to do this, but I'm wondering if there's a well-trodden path for this sort of thing using Elastic's default tooling, or if I'm doomed to one more one-off hack.
Functionbeat is a good starting point and will allow you to keep it as "serverless" as possible.
To process the JSON, you can use the decode_json_fields processor.
The problem is that your message isn't really JSON though. Possible solutions I could think of:
A dissect processor that extracts the JSON message and passes it on to decode_json_fields, both in Functionbeat (see the sketch after this list). I'm wondering if trim_chars couldn't be abused for that: trim any possible characters except for curly braces.
If that is not enough, you could do all the processing in an Elasticsearch ingest pipeline, where you would probably stitch this together with a Grok processor followed by the JSON processor.
If you can, only log a JSON message, to make your life simpler; potentially move the log level into the JSON structure itself.
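A rough sketch of the first option as a Functionbeat processor configuration. Both processor names are real Beats processors, but the tokenizer pattern is an assumption about the log prefix, and decode_json_fields will only succeed once the Lambda logs strict JSON (quoted keys, double quotes):

processors:
  - dissect:
      # Split "<id> INFO {...}" on the literal " INFO ".
      # The tokenizer is an assumption about the log prefix; adjust as needed.
      tokenizer: "%{prefix} INFO %{payload}"
      field: "message"
      target_prefix: ""
  - decode_json_fields:
      # Parse the extracted payload into structured fields on the event root.
      # This only works once the logged payload is strict JSON.
      fields: ["payload"]
      target: ""
      overwrite_keys: true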
I'm new to Grails, and I'm stuck on the basic structure of my web app.
So far I've implemented a Grails app that renders one JSON file as a readable table.
Example
Given the JSON file below:
{
  "abbreviation": "EXAMPLE",
  "guid": "31ac235e2-3ad3-43e3-1fd4-41e6dfwegf03",
  "metadata": {
    "dataOrigin": "Example"
  },
  "rooms": [],
  "site": {
    "guid": "31ac235e2-3ad3-43e3-1fd4-41e6dfwegf03",
    "title": "Example Testing"
  }
}
My app renders the above JSON file as below:
Abbreviation : Example
GUID : 31ac235e2-3ad3-43e3-1fd4-41e6dfwegf03
Metadata : - DataOrigin : Example
Rooms : []
Site : - GUID : 31ac235e2-3ad3-43e3-1fd4-41e6dfwegf03
       - Title : Example Testing
Now, what should I do if I want my app to read JSON files with different name/value pairs and render them similarly to what my app does now?
(I've hard-coded the application I have now.)
I know this question is very vague, but can anyone give me any directions or insights on how to do this?
I'm afraid that if there were a simple answer to this question, half of all web developers would be out of their jobs :-)
Anyway, there are a few steps that could help you achieve your goal.
Use JsonSlurper (http://docs.groovy-lang.org/latest/html/gapi/groovy/json/JsonSlurper.html) to read JSON files. Obviously, if you go for predefined structures, you could use Gson (https://google-gson.googlecode.com/svn/trunk/gson/docs/javadocs/com/google/gson/Gson.html) or a similar library.
To display arbitrary data you can use the Grails Fields plugin (https://grails.org/plugin/fields).
I know these are only pointers.
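The idea behind the arbitrary-data part is language-independent: parse the JSON into generic maps and lists and walk them recursively. A minimal sketch of that traversal, shown in Go for concreteness (JsonSlurper hands you the same nested map/list structure in Groovy):

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// render walks any decoded JSON value and prints "key : value" lines,
// indenting nested objects and arrays. Note: Go map iteration order is
// random, so keys come out unsorted in this sketch.
func render(v interface{}, indent int) {
	pad := strings.Repeat("  ", indent)
	switch t := v.(type) {
	case map[string]interface{}:
		for key, val := range t {
			switch val.(type) {
			case map[string]interface{}, []interface{}:
				fmt.Printf("%s%s :\n", pad, key)
				render(val, indent+1)
			default:
				fmt.Printf("%s%s : %v\n", pad, key, val)
			}
		}
	case []interface{}:
		for _, item := range t {
			render(item, indent+1)
		}
	default:
		fmt.Printf("%s%v\n", pad, t)
	}
}

func main() {
	doc := []byte(`{"abbreviation": "EXAMPLE", "site": {"title": "Example Testing"}}`)
	var v interface{}
	if err := json.Unmarshal(doc, &v); err != nil {
		panic(err)
	}
	render(v, 0)
}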
I would like to extend the data model of a remote database that is available via a web service interface. Data can be requested via HTTP GET and is delivered as JSON (example request). Other formats are supported as well.
// URL of the example request.
http://data.wien.gv.at/daten/wfs?service=WFS&request=GetFeature&version=1.1.0&typeName=ogdwien:BAUMOGD&srsName=EPSG:4326&outputFormat=json&maxfeatures=5
The first object of the JSON response:
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "id": "BAUMOGD.3390628",
      "geometry": {
        "type": "Point",
        "coordinates": [
          16.352910973544105,
          48.143425569989326
        ]
      },
      "geometry_name": "SHAPE",
      "properties": {
        "BAUMNUMMER": "1022 ",
        "GEBIET": "Strassen",
        "STRASSE": "Jochen-Rindt-Strasse",
        "ART": "Gleditsia triacanthos (Lederhülsenbaum)",
        "PFLANZJAHR": 1995,
        "STAMMUMFANG": 94,
        "KRONENDURCHMESSER": 9,
        "BAUMHOEHE": 11
      }
    },
    ...
My idea is to extend the data model (e.g. add a text field) on my own server and therefore mirror the database somehow. I stumbled upon CouchDB and its document-based architecture, which feels suitable for handling those aforementioned JSON objects. Now I'm asking for advice on how to replicate the foreign database, both initially and on a regular basis.
Do you think CouchDB is a good choice? I also thought about MongoDB. If possible, I would like to avoid building a full Rails backend to set up the replication. What do you recommend?
If the remote database is static (the data doesn't change), then it could work. You just have to find a way to iterate over all records. Once you've figured that out, the rest is as easy as pie: 1) query the data; 2) store the response in a local database; 3) modify it as you see fit.
If the remote data changes, you'll have a lot of trouble going this way (you'll have to re-sync in the same fashion every once in a while). What I'd do instead is create a local database with only the new fields and a reference to the original piece of data. That is, when you request data from the remote service, you also check whether you have something in the local DB and merge the two before processing the final result.
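A minimal sketch of that merge-on-read idea in Go, using the example request above. The notes map stands in for the local database, and the added note field is hypothetical:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Feature mirrors only the parts of the WFS response we need here.
type Feature struct {
	ID         string                 `json:"id"`
	Properties map[string]interface{} `json:"properties"`
}

type FeatureCollection struct {
	Features []Feature `json:"features"`
}

func main() {
	// Local extension data, keyed by the remote feature id. In practice
	// this would live in CouchDB/MongoDB; the "note" field is hypothetical.
	notes := map[string]string{
		"BAUMOGD.3390628": "my own annotation",
	}

	url := "http://data.wien.gv.at/daten/wfs?service=WFS&request=GetFeature&version=1.1.0&typeName=ogdwien:BAUMOGD&srsName=EPSG:4326&outputFormat=json&maxfeatures=5"
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var fc FeatureCollection
	if err := json.NewDecoder(resp.Body).Decode(&fc); err != nil {
		log.Fatal(err)
	}

	// Merge on read: attach the local field to the matching remote record.
	for i := range fc.Features {
		if note, ok := notes[fc.Features[i].ID]; ok {
			fc.Features[i].Properties["note"] = note
		}
	}

	out, _ := json.MarshalIndent(fc, "", "  ")
	fmt.Println(string(out))
}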