Using TopoJSON, is it possible to take two properties from an input shapefile and combine them into a single property in the output TopoJSON file?
For example, if a feature in the original shapefile has the properties 'constituency': '34' and 'ward': '90', is it possible to concatenate these into a single id property in the output JSON file, 'id': '3490'?
If not, can anyone suggest an elegant way to achieve this?
Yes! This is now possible.
As of this commit, -p id=constituency+""+ward will concatenate the constituency and ward properties of the input file into an id property on the output file. The "" between constituency and ward coerces the values to strings, ensuring JavaScript doesn't simply add two integers: 30+24 gives 54, while 30+""+24 gives "3024".
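If you're on a version of the command-line tool without this feature, a pre-processing pass over the GeoJSON works too. Here is a minimal Python sketch of that idea; the file names input.geojson and output.geojson are hypothetical:

import json

# Hypothetical file names; adjust to your data.
with open("input.geojson") as f:
    collection = json.load(f)

for feature in collection["features"]:
    props = feature["properties"]
    # Concatenate as strings so '34' and '90' give '3490', not 124.
    feature["id"] = str(props["constituency"]) + str(props["ward"])

with open("output.geojson", "w") as f:
    json.dump(collection, f)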
Is it possible to restrict values or property names in a JSON Schema according to data defined in another JSON file (not a schema, just a data file)? Or even to take files from a folder and process their names?
For example, YAML:
file1:
Attributes:
  - Attribute1
  - Attribute2
file2:
Influence:
  Attribute1: 1
  Attribute2: -3
I want to have syntax help in the second file that depends on the data defined in the first file. How can I do it?
And a harder case:
There is a folder with some YAML/JSON files describing events, like:
Events/event1.yaml
Events/subfolder/event2.yaml
Another file should use only the file names found in that folder.
For example:
DefaultEvents:
- event1
- event2
Is this possible, and how can I get autocomplete with JSON Schema in such a case?
It's not about validation; I need syntax help and autocomplete while writing such files.
The only possibility I found is to add all possible values to the JSON Schema dynamically, using whatever programming language you work with.
This solution is sufficient when the JSON Schema is stored locally in your project.
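A minimal sketch of that approach in Python (assuming PyYAML is installed; the file and folder names mirror the examples above and are otherwise hypothetical): read the attribute list and scan the events folder, then emit the allowed values as enums in the generated schema.

import json
from pathlib import Path

import yaml  # PyYAML, assumed available

# Paths follow the examples above; adjust to your project layout.
attributes = yaml.safe_load(Path("file1.yaml").read_text())["Attributes"]
event_names = [p.stem for p in Path("Events").rglob("*.yaml")]

schema = {
    "type": "object",
    "properties": {
        "Influence": {
            "type": "object",
            # Only attribute names from file1.yaml are allowed as keys.
            "propertyNames": {"enum": attributes},
            "additionalProperties": {"type": "integer"},
        },
        "DefaultEvents": {
            "type": "array",
            # Only file names found under Events/ are allowed.
            "items": {"enum": event_names},
        },
    },
}

Path("generated.schema.json").write_text(json.dumps(schema, indent=2))

Re-running the script whenever the data files change keeps the editor's autocomplete in sync.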
What would be the best way of visualizing images saved in .csv format?
The following doesn't work:
using CSV, ImageView
data = CSV.read("myfile.csv");
imshow(data)
This is the error:
MethodError: no method matching pixelspacing(::DataFrames.DataFrame)
Closest candidates are:
pixelspacing(!Matched::MappedArrays.AbstractMultiMappedArray) at /Users/xxx/.julia/packages/ImageCore/yKxN6/src/traits.jl:63
pixelspacing(!Matched::MappedArrays.AbstractMappedArray) at /Users/xxx/.julia/packages/ImageCore/yKxN6/src/traits.jl:62
pixelspacing(!Matched::OffsetArrays.OffsetArray) at /Users/xxx/.julia/packages/ImageCore/yKxN6/src/traits.jl:67
...
Stacktrace:
[1] imshow(::Any, ::Reactive.Signal{GtkReactive.ZoomRegion{RoundingIntegers.RInt64}}, ::ImageView.SliceData, ::Any; name::Any, aspect::Any) at /Users/xxx/.julia/packages/ImageView/sCn9Q/src/ImageView.jl:269
[2] imshow(::Any; axes::Any, name::Any, aspect::Any) at /Users/xxx/.julia/packages/ImageView/sCn9Q/src/ImageView.jl:260
[3] imshow(::Any) at /Users/xxx/.julia/packages/ImageView/sCn9Q/src/ImageView.jl:259
[4] top-level scope at In[5]:2
[5] include_string(::Function, ::Module, ::String, ::String) at ./loading.jl:1091
Reference on GitHub.
This question was answered at https://github.com/JuliaImages/ImageView.jl/issues/241. Copying the answer here:
imshow(Matrix(data))
where data is your DataFrame. But CSV is a poor choice for images; use Netpbm if you simply must have a text-based image format, otherwise a binary format is recommended. Binary Netpbm files are especially easy to write if you have to write your own encoder (e.g., if the images are coming from some language that doesn't support other file formats); otherwise PNG is typically a good choice.
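To illustrate how simple binary Netpbm is, here is a hedged Python sketch (the pixel values and file name are made up): a grayscale PGM is just a short text header followed by raw bytes.

# Write a binary PGM (Netpbm grayscale): text header + raw pixel bytes.
width, height = 4, 2
pixels = bytes([0, 64, 128, 255,   # row 1 (made-up values)
                255, 128, 64, 0])  # row 2

with open("tiny.pgm", "wb") as f:
    f.write(b"P5\n%d %d\n255\n" % (width, height))  # magic number, size, max value
    f.write(pixels)  # one byte per pixel, row-major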
Does the CSV file have a header line of column names, or is it just a delimited file full of numeric text values?
If the CSV file is actually a matrix of values, such that the values are the bytes of a 2D image, you may use DelimitedFiles -- see the readdlm() docs. Read the file with readdlm() into a matrix and see if ImageView can display the result.
I find it very hard to use regular expressions directly in the search bar to extract fields. Another problem is that I don't have permission to share my extracted fields (extracted with the field extractor and listed under field extractions) with other people. I am now looking for another way to extract fields directly in the search bar. Is something like this possible in Splunk?
Thanks!
Regular expressions aren't so bad once you've had some practice. Think of them as another programming language to know.
There are other ways to extract fields, but most are less efficient and all are less flexible.
The spath and xpath commands will extract fields from JSON and XML, respectively.
multikv extracts fields from table-formatted data (like from top).
The extract command can be used to parse key/value pairs into fields.
The eval command can be used in combination with various functions to parse events into fields.
I want to delete/ignore some elements in the following JSON record:
{"_scroll_id":"==","timed_out":false,"_shards":{"total":5,"successful":5,"failed":0},"hits":{"total":6908915,"max_score":null,"hits":[{"_index":"abc_v1","_type":"composite_request_response_v1","_id":"123","_score":1.0,"_source":{"response":{"testResults":{"docsisResults":{"devices":[{"upstreamSection":{"upstreams":[]},"fluxSection":{"fluxInfo":[{}]}}],"events":[]},"mocaResults":{"statuses":[]}}}},"sort":[null,1.0]}]}},
I have records in the above format and wish to delete the highlighted part of the record. Can someone suggest ways to accomplish that? Is there any way to achieve it using Hive/Pig/Linux/Python?
There is a JSON SerDe in Hive; see https://cwiki.apache.org/confluence/display/Hive/Json+SerDe
So you can define only the columns you need in the table definition, put your file in the table location, and then select only the defined columns. Alternatively, you can pre-process/transform your files before loading them using Java + Jackson (a library to serialize or map Java objects to JSON and vice versa); this gives you maximum flexibility, though it is not as simple as using the JSON SerDe.
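Since you list Python as an option, here is a minimal pre-processing sketch. It assumes one JSON record per line (with an optional trailing comma, as in your sample) and that the parts to drop are top-level keys; the file names and the drop_keys set are placeholders for whichever part of the record you highlighted.

import json

# Placeholder keys: replace with the ones you actually want removed.
drop_keys = {"_scroll_id", "timed_out", "_shards"}

with open("records.json") as src, open("cleaned.json", "w") as dst:
    for line in src:
        line = line.strip().rstrip(",")  # one record per line, trailing comma allowed
        if not line:
            continue
        record = json.loads(line)
        for key in drop_keys:
            record.pop(key, None)  # ignore keys that are absent
        dst.write(json.dumps(record) + "\n")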
I have a question. I know that Logstash lets us ingest CSV/log files and filter them using separators and columns, and that it outputs to Elasticsearch so the data can be used by Kibana. However, after writing the conf file, do I need to specify the index pattern by running a command like:
curl -XPUT 'http://localhost:5601/test' -d '...'
I know that when you have a JSON file, you have to define the mapping, etc. Do I need to do this step for CSV files and other non-JSON files? Sorry for asking; I just need to clear this up.
When you insert documents into a new Elasticsearch index, a mapping is created for you. This may not be a good thing, as it's based on the initial value of each field. Imagine a field that normally contains a string, but the initial document happens to contain an integer; now your mapping is wrong. This is a good case for creating your own mapping.
If you insert documents through Logstash into an index named logstash-YYYY-MM-DD (the default), Logstash will apply its own mapping. It will use any pattern hints you gave it in grok{}, e.g.:
%{NUMBER:bytes:int}
and it will also make a "raw" (not analyzed) version of each string, which you can access as myField.raw. This may also not be what you want, but you can make your own mapping and provide it as an argument in the elasticsearch{} output stanza.
You can also make templates, which elasticsearch will apply when an index pattern matches the template definition.
So, you only need to create a mapping if you don't like the default behaviors of elasticsearch or logstash.
Hope that helps.
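For illustration, a hedged sketch of the template route in Python (assuming Elasticsearch 7.x reachable at localhost:9200, the requests library, and the legacy _template endpoint; the template name, index pattern, and field names are all made up):

import json

import requests  # assumed available

# Example template: applied automatically to any new index matching the pattern.
template = {
    "index_patterns": ["logstash-*"],  # indices this template applies to
    "mappings": {
        "properties": {
            "bytes": {"type": "integer"},  # hypothetical numeric field
            "message": {"type": "text"},   # hypothetical string field
        }
    },
}

resp = requests.put(
    "http://localhost:9200/_template/my_csv_template",  # hypothetical name
    headers={"Content-Type": "application/json"},
    data=json.dumps(template),
)
print(resp.status_code, resp.text)

With a template in place, Logstash can keep writing to logstash-* indices and each new daily index picks up the mapping automatically.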