Flatten JSON-like data in a Postgres 11 database - json

I have the following data in a Postgres table that I need to flatten out:
data
===============================================================================================
{"Exterior Lights" :["Headlights - Forward Adaptive","Headlights - Laser","Headlights - LED"]}
{"Generic" : ["Launch Control"]}
{"Mirror" :["Blind Spot Assistant","Door Mirrors - Integrated LED"]}
{"Safety" :["Tyre Pressure Monitoring", "ABS"]}
Ideally, the above data would look like this afterwards:
System          Type
=============== =============================
Exterior Lights Headlights - Forward Adaptive
Exterior Lights Headlights - Laser
Exterior Lights Headlights - LED
Generic         Launch Control
Mirror          Blind Spot Assistant
Mirror          Door Mirrors - Integrated LED
Safety          Tyre Pressure Monitoring
Safety          ABS
I've tried various combinations of JSON operators but to no avail. Can anyone help?

You first need to extract the key/value pairs as rows, which can be done using jsonb_each(). Then you can use jsonb_array_elements_text() to create a row for each array element:
select x.system, u.type
from the_table t
cross join jsonb_each(t.data) as x(system, value)
cross join jsonb_array_elements_text(x.value) as u(type);
Online example
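If the same flattening ever needs to happen in application code rather than in the database, here is a minimal Python sketch of the same two-step expansion (the sample rows are taken from the question; the standard-library json module plays the role of jsonb_each and jsonb_array_elements_text):

```python
import json

# Each database row holds one JSON object mapping a system to its types.
rows = [
    '{"Exterior Lights": ["Headlights - Forward Adaptive", "Headlights - Laser", "Headlights - LED"]}',
    '{"Generic": ["Launch Control"]}',
]

# Step 1: expand the key/value pairs (like jsonb_each);
# step 2: expand each array element (like jsonb_array_elements_text).
flattened = [
    (system, type_)
    for row in rows
    for system, types in json.loads(row).items()
    for type_ in types
]
# flattened[0] == ("Exterior Lights", "Headlights - Forward Adaptive")
```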

Related

make continuous data act as discrete data

I am trying to make my discrete data continuous. I am comparing the mass of eggs versus the treatment of the moms. The reason I want to convert the egg mass is that I did not get many decimal places in the mass, and therefore it is acting as categorical data (yet of course the model won't read it as such). I want to use glmer because I have random effects to account for. Any help would be greatly appreciated! Below is the model I am trying to run, but egg mass (e.mass) is being flagged as a non-integer.
model.1 <- glmer(e.mass ~ m.treat + (1 | year/m.id), family = poisson, na.action = na.exclude, data = data)
I tried doing "data$e.masscharactor <- as.character(data$e.mass)" but that did not work (nor did replacing 'character' with 'factor').

U-Net segmentation without having mask

I am new to deep learning and semantic segmentation.
I have a dataset of medical images (CT) in DICOM format, in which I need to segment tumours and the organs involved. I also have the organs contoured by our physician, which we call the RT structure, stored in DICOM format as well.
As far as I know, people usually use a "mask". Does that mean I need to convert all the contoured structures in the RT structure to masks, or can I use the information from the RT structure (.dcm) directly as my input?
Thanks for your help.
There is a library called pydicom that you need to install before you can decode and later visualise the DICOM images.
Now, since you want to apply semantic segmentation to segment the tumours, the solution is to create a neural network which accepts as input a pair of [image, mask], where all the locations in the mask are 0 except for the zones where the tumour is, which are marked with 1; practically, your ground truth is the mask.
Of course, for this you will have to implement your CustomDataGenerator(), which must yield at every step a batch of [image, mask] pairs as stated above.
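To connect this to the question: the contours in the RT structure do typically get rasterised into binary masks first. Here is a hedged, framework-free illustration of that step; contour_to_mask and the even-odd ray-casting fill are my own hypothetical helper (not pydicom's API), assuming each contour is a list of (x, y) vertices in pixel coordinates:

```python
import numpy as np

def contour_to_mask(contour, shape):
    """Rasterise a closed polygon contour (list of (x, y) vertices) into
    a binary mask of the given (rows, cols) shape, using even-odd ray
    casting sampled at pixel centres."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    px = xs.ravel() + 0.5  # sample at pixel centres
    py = ys.ravel() + 0.5
    inside = np.zeros(px.shape, dtype=bool)
    n = len(contour)
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        # Does a horizontal ray from each pixel centre cross this edge?
        crosses = (y1 > py) != (y2 > py)
        xcross = (x2 - x1) * (py - y1) / (y2 - y1 + 1e-12) + x1
        inside ^= crosses & (px < xcross)
    return inside.reshape(shape).astype(np.uint8)
```

The resulting 0/1 array is exactly the "mask" half of the [image, mask] pair the network trains on.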

MySQL Geometry Type for 3 coordinate values

I am seeking information on how to create a MySQL Geometry type that accepts 3 coordinate values: Northing, Easting, and Elevation.
I can create a POINT data type (with SRID) and populate it with:
ST_SRID(POINT(Northing, Easting), SRID_value_Here)
but a POINT only accepts 2 arguments (based on my understanding).
The Elevation value I need to store is significant as these points will be linked to an AutoCad drawing and stored in a MySQL database.
I guess I am unclear on which geospatial data type to use, as I don't necessarily feel this is a polygon since it is just a single point entity with 3 coordinate values, not 3 separate points that could close in and form a polygon.
While I am familiar with SQL, geospatial data is a whole new world to me, so any help and pointers are much appreciated.
Thank you.

Machine Learning for gesture recognition with Myo Armband

I'm trying to develop a model to recognize new gestures with the Myo Armband. (It's an armband that possesses 8 electrical sensors and can recognize 5 hand gestures). I'd like to record the sensors' raw data for a new gesture and feed it to a model so it can recognize it.
I'm new to machine/deep learning and I'm using CNTK. I'm wondering what would be the best way to do it.
I'm struggling to understand how to create the trainer. I'm thinking about using 20 sets of these 8 values (they're between -127 and 127), so one label is the output of 20 sets of values.
I don't really know how to do that; I've seen tutorials where images are linked with their labels, but it's not the same idea. And even after the training is done, how can I prevent the model from recognising this one gesture no matter what I do, since it's the only one it's been trained on?
An easy way to get you started would be to create 161 columns (8 columns for each of the 20 time steps + the designated label). You would rearrange the columns like
emg1_t01, emg2_t01, emg3_t01, ..., emg8_t20, gesture_id
This will give you the right 2D format to use different algorithms in sklearn as well as a feed-forward neural network in CNTK. You would use the first 160 columns to predict the 161st one.
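A hedged sketch of that rearrangement, assuming each recording arrives as a 20 × 8 array of raw sensor values (flatten_recording is a hypothetical helper for illustration, not a CNTK or Myo API):

```python
import numpy as np

def flatten_recording(recording, gesture_id):
    """Flatten one recording of shape (20 time steps, 8 sensors) into a
    single 161-value row: emg1_t01, emg2_t01, ..., emg8_t20, gesture_id."""
    rec = np.asarray(recording, dtype=float)
    assert rec.shape == (20, 8)
    # Row-major ravel emits all 8 sensors of t01, then t02, and so on,
    # matching the column order described above.
    return np.concatenate([rec.ravel(), [gesture_id]])
```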
Once you have that working you can model your data to better represent the natural time series order it contains. You would move away from a 2D shape and instead create a 3D array to represent your data.
The first axis shows the number of samples
The second axis shows the number of time steps (20)
The third axis shows the number of sensors (8)
With this shape you're all set to use a 1D convolutional model (CNN) in CNTK that traverses the time axis to learn local patterns from one step to the next.
You might also want to look into RNNs which are often used to work with time series data. However, RNNs are sometimes hard to train and a recent paper suggests that CNNs should be the natural starting point to work with sequence data.
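To make the 3D layout concrete, here is a toy, framework-free 1D convolution over the time axis; the kernel and shapes are illustrative assumptions, not CNTK code:

```python
import numpy as np

def conv1d_time(X, kernel):
    """Toy 1D convolution over the time axis of data shaped
    (samples, time_steps, sensors), with a kernel of shape
    (width, sensors); each output is one local time-window response."""
    n, t, s = X.shape
    w = kernel.shape[0]
    out = np.empty((n, t - w + 1))
    for i in range(t - w + 1):
        # Slide a width-w window along time, combining all sensors.
        out[:, i] = np.sum(X[:, i:i + w, :] * kernel, axis=(1, 2))
    return out

# A batch of 2 samples, 20 time steps, 8 sensors -> 18 window responses.
X = np.ones((2, 20, 8))
responses = conv1d_time(X, np.ones((3, 8)))  # shape (2, 18)
```

This is exactly the "traverse the time axis to learn local patterns" idea: each output value only looks at a few consecutive time steps across all 8 sensors.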

2d creature images hybridization algorithm

I would like to mutate two monsters in my game to make a simple hybrid. Each monster is a 2D image, which I could implement as composite sprites (to know more about each body part). The problem is that not all monsters are of similar types; not all of them are humanoid or any kind of animal. For example, if we have a lion with 4 legs and a spider with 8 (say the spider genes dominate), the result could be an 8-legged lion with a colour that is a hybrid of the two. But if I had a humanoid and a frog, what should the algorithm do? Any idea or any useful algorithm that could help me?
You could implement a simple genetic system. E.g. have each creature represented by an array of values instructing your script what to do in order to create the character: [add_red_human_torso, add_blue_frog_left_leg, ...]. Then you could take a random mix of the two creatures' arrays and build a new creature from that.
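A minimal sketch of that idea (the part names and the crossover helper are made up for illustration; this is plain uniform crossover over aligned body-part slots):

```python
import random

def crossover(parent_a, parent_b, rng=random):
    """Uniform crossover: for each body-part slot, pick the build
    instruction from one of the two parents at random."""
    return [rng.choice(pair) for pair in zip(parent_a, parent_b)]

lion = ["add_yellow_lion_torso", "add_lion_head", "add_lion_legs"]
frog = ["add_green_frog_torso", "add_frog_head", "add_frog_legs"]
hybrid = crossover(lion, frog)  # e.g. a frog torso with a lion head
```

Keeping the slots aligned (torso, head, legs, ...) is what lets a humanoid and a frog mix cleanly: each slot always receives a valid part from one of the two parents.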
How about giving attributes to each of the animals' limbs (strength for lion legs, jumping for frog legs, height for human torso, camouflage for spider colour, etc.) and then running a multi-objective optimization algorithm?