How to build vtkPolyData based on the information within a txt file - tcl

I have a txt file which contains a set of 3-dimensional data points, and I would like to create a vtkPolyData based on those points.
The first line of the file gives the number of points, in my case 6 x 6, and after that come the actual coordinates of each point. The content of the file is like this:
6 6
1 1 3
2 1 3.4
3 1 3.6
4 1 3.6
5 1 3.4
6 1 3
1 2 3
2 2 3.8
3 2 4.2
4 2 4.2
5 2 3.8
6 2 3
1 3 3
2 3 3
3 3 3
4 3 3
5 3 3
6 3 3
1 4 3
2 4 3
3 4 3
4 4 3
5 4 3
6 4 3
1 5 3
2 5 3.8
3 5 4.2
4 5 4.2
5 5 3.8
6 5 3
1 6 3
2 6 3.4
3 6 3.6
4 6 3.6
5 6 3.4
6 6 3
How can I build a vtkPolyData structure from a txt file with this data?

It looks to me like you have a regularly gridded series of points, right? If so, vtkImageData might be a better choice. You can always use a geometry filter afterwards to convert to polydata if you really need it that way.
1. Create a vtkImageData instance.
2. Set its dimensions to (6, 6, 1) (the third dimension is ignored).
3. Set its data type to an appropriate type (float or double, I guess).
4. Call AllocateScalars().
5. If in C++, call GetScalarPointer() and cast it to the data type set in step 3. This pointer will point to an array of size 36; you can just fill in each point as you would normally.
6. If in another language (Tcl/Python/Java), call SetScalarComponentFromFloat on the image data with the arguments (x, y, 0, 0, value). The first 0 is the third dimension and the second is for the first component.
This will give you a grid, and it'll be far more memory efficient than a polydata.
If you want to visualize only the points, use a vtkDataSetMapper and set up the actor's property with SetRepresentationToPoints() and an appropriate point size. That will do a simple job of visualization.
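For illustration, here is a minimal sketch of that recipe using the Python bindings (the question is about Tcl, but the calls map one-to-one); the filename points.txt and the VTK 6+ AllocateScalars signature are assumptions on my part:
import vtk

# First line of the file holds the grid dimensions, the rest are "x y z" triples.
with open("points.txt") as f:                  # hypothetical filename
    nx, ny = map(int, f.readline().split())
    img = vtk.vtkImageData()
    img.SetDimensions(nx, ny, 1)               # third dimension is ignored
    img.AllocateScalars(vtk.VTK_FLOAT, 1)      # VTK 6+; older versions set the scalar type first
    for line in f:
        x, y, z = line.split()
        # store the height z at grid cell (x-1, y-1); the file indices are 1-based
        img.SetScalarComponentFromFloat(int(x) - 1, int(y) - 1, 0, 0, float(z))

# If polydata is really needed, a geometry filter converts the grid afterwards.
geom = vtk.vtkImageDataGeometryFilter()
geom.SetInputData(img)
geom.Update()
polydata = geom.GetOutput()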

Are these examples useful? In particular, this one generates points and polygons, so it should be possible to adapt. The core seems to be (with lots left out):
# ...
vtkPolyData shell
vtkFloatPoints points
vtkCellArray strips
# Generate points...
loop {
    ...
    points InsertPoint $k $x0 $x1 $x2
}
shell SetPoints points
points Delete
# Generate triangles/polygons...
loop {
    strips InsertNextCell $NP2
    # ...
    strips InsertCellPoint [expr $kb + $ke]
    # ...
    strips InsertCellPoint [expr $kb + $ke]
}
shell SetStrips strips
strips Delete
# ...
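For a concrete version of the same idea: note that vtkFloatPoints is an old class name; current VTK uses vtkPoints, and the Tcl wrappers were retired in recent VTK releases. Below is a minimal sketch in Python that builds the 6x6 grid from the question as a vtkPolyData with one triangle strip per pair of adjacent rows; the filename points.txt is an assumption:
import vtk

# Read the grid: first line gives the dimensions, the rest are "x y z" points.
with open("points.txt") as f:                  # hypothetical filename
    nx, ny = map(int, f.readline().split())
    points = vtk.vtkPoints()
    for line in f:
        x, y, z = map(float, line.split())
        points.InsertNextPoint(x, y, z)        # stored row-major: index = (y-1)*nx + (x-1)

# Build one triangle strip per pair of adjacent rows.
strips = vtk.vtkCellArray()
for j in range(ny - 1):
    strips.InsertNextCell(2 * nx)              # a strip over two rows uses 2*nx points
    for i in range(nx):
        strips.InsertCellPoint((j + 1) * nx + i)   # point in the upper row
        strips.InsertCellPoint(j * nx + i)         # point in the lower row

shell = vtk.vtkPolyData()
shell.SetPoints(points)
shell.SetStrips(strips)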

Related

How to create complex JSON config maps in q?

Is there a good way in q to input somewhat large complicated nested dictionaries which represent/will be converted to json? I'm trying to control the echarts javascript library which basically just renders charts based on json config options. What I'm doing now is:
opt.title.text:"my chart"
opt.xAxis.data:til 100
opt.series.data:100?5
opt.series.type:`line
toClient[opt] /serializes and sends to browser
but is there an obvious way to get rid of the intermediate assignments? Is making a function that takes key-path/value pairs and turns them into a dictionary the way to go, or is there a better way to go about this?
Or is this something that should be avoided in q? Should I instead just write q code to set specific options manually and handle the JSON object map in the JavaScript client?
Not sure if this is really what you are looking for, but you can create the nested dictionary structure directly:
q)`title`xAxis`series!(enlist[`text]!enlist"my chart";enlist[`data]!enlist til 100;`data`type!(100?5;`line))
title | (,`text)!,"my chart"
xAxis | (,`data)!,0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 ..
series| `data`type!(0 1 1 3 3 3 2 2 4 1 3 3 1 4 0 4 4 4 2 4 3 3 4 0 4 0 0 1 0..

GNUPLOT: Joining different series of points with vectors

I have a file with data in 2 columns, X and Y. There are some blocks of data, separated by a blank line. I want to join the points (given by their coordinates x and y in the file) in each block using vectors. I'm trying to use these functions:
prev_x = NaN
prev_y = NaN
dx(x) = (x_delta = x-prev_x, prev_x = ($0 > 0 ? x : 1/0), x_delta)
dy(y) = (y_delta = y-prev_y, prev_y = ($0 > 0 ? y : 1/0), y_delta)
which I've taken from Plot lines and vector in graphical gnuplot (first answer). The command to plot would be plot for [i=0:5] 'Field_lines.txt' every :::i::i u (prev_x):(prev_y):(dx($1)):(dy($2)) with vectors.
The problem with the resulting plot is that the point (0,0) is included even though it's not in the file. I don't think I understand what the functions dx and dy do exactly, or how they are used with the option using (prev_x):(prev_y):(dx($1)):(dy($2)), so an explanation of this would help me a lot in trying to fix it.
This is the file:
#1
0 5
0 4
0 3
0.4 2
0.8 1
0.8 1
#2
2 5
2 4
2 3
2 2
2 1
2 0
#3
4 5
4.2 4
4.5 3
4.6 2
4.7 1
4.7 0
#4
7 5
7.2 4
7.5 3
7.9 2
7.9 1
7.9 0
#5
9 5
9 4
9.2 3
9.5 2
9.5 1
9.5 0
#6
11 7
12 6
13 5
13.3 4
13.5 3
13.5 2
13.6 1
14 0
Thanks!
I'm not completely sure what the real problem is, but I think you cannot rely on the columns in the using statement being evaluated from left to right, and your check $0 > 0 in dx and dy comes too late in my opinion.
I usually put all the assignments and conditionals in the first column, and that works fine also in your case:
set offsets 1,1,1,1
unset key
prev_x = prev_y = 1
plot for [i=0:5] 'Field_lines.txt' every :::i::i \
u (x_delta = prev_x-$1, prev_x=$1, y_delta=prev_y-$2, prev_y=$2, ($0 == 0 ? 1/0 : prev_x)):(prev_y):(x_delta):(y_delta) with vectors backhead
Also, to draw a vector from the j-th row to the point in the following row, you must invert the definition of x_delta and use backhead so that the vectors are drawn in the correct direction.

How to find all the triples in a graph?

The graph consists of more than three million nodes and more than 20 million edges. I'm using the igraph package on an 8 GB RAM Linux server. The code is
cliques(g,min=3,max=3)
After six days, the code is still running. Is there a better way to find all the triples in a graph?
Following @GaborCsardi's suggestion, you can see this simple example (I used the igraph dev version from http://igraph.org/nightly):
kite <- graph.famous("Krackhardt_Kite")
triangles(kite)
which yields:
[1] 4 1 2 4 1 3 4 2 5 4 6 1 4 6 3 4 6 7 4 7 2 4 7 5 6 1 3 6 7 8 7 2 5
for the (undirected) graph "Krackhardt_Kite". Each consecutive triple of vertex ids in the output is one triangle.
You can compare the results with
plot(kite)
Hope this helps
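For completeness, the same computation is available through the Python igraph bindings; a small sketch, assuming the python-igraph package is installed:
import igraph

# Same graph as in the R example; python-igraph vertex ids are 0-based,
# whereas the R output above is 1-based.
g = igraph.Graph.Famous("Krackhardt_Kite")
triangles = g.cliques(min=3, max=3)    # list of 3-tuples of vertex ids
print(triangles)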

Pandas: flattening repeating/wrapped columns in csv file

It often happens that data will be given to you with wrapped columns. Consider, for example:
CCY Decimals CCY Decimals CCY Decimals
AUD/CAD 5 EUR/CZK 4 GBP/NOK 5
AUD/CHF 5 EUR/DKK 5 GBP/NZD 5
AUD/DKK 5 EUR/GBP 5 GBP/PLN 5
AUD/JPY 3 EUR/HKD 5 GBP/SEK 5
AUD/NOK 5 EUR/HUF 3 GBP/SGD 5
...
Which should be parsed as a dataframe of two columns (CCY and Decimals), not six. My question is, what is the most idiomatic way of achieving this?
I would have wanted something like the following:
data = pd.read_csv("file.csv")
data.groupby(axis=1,by=data.columns.map(lambda s: s.replace("\..",""))).\
apply(lambda df : df.values.flatten())
When reading the csv file we end up with columns CCY, Decimals, CCY.1, Decimals.1, etc. The groupby operation returns a collection of data frames:
<pandas.core.groupby.DataFrameGroupBy object at 0x3a52b10>
Which we would then flatten using numpy functionality. So we would be converting DataFrames with repeating columns into Series, and then merging these into a result DataFrame.
However, this doesn't work. I've tried passing different key arguments to groupby, but it always complains about being unable to reindex non-unique columns.
There are a number of existing questions that deal with flattening groups of columns (e.g. "Flattening" output of group.nth in Pandas), but I can't find any that do this for repeating columns.
To use groupby, I'd do:
>>> groups = df.groupby(axis=1,by=lambda x: x.rsplit(".",1)[0])
>>> pd.DataFrame({k: v.values.flat for k,v in groups})
CCY Decimals
0 AUD/CAD 5
1 EUR/CZK 4
2 GBP/NOK 5
3 AUD/CHF 5
4 EUR/DKK 5
5 GBP/NZD 5
6 AUD/DKK 5
7 EUR/GBP 5
8 GBP/PLN 5
9 AUD/JPY 3
10 EUR/HKD 5
11 GBP/SEK 5
12 AUD/NOK 5
13 EUR/HUF 3
14 GBP/SGD 5
[15 rows x 2 columns]
and then sort.
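If the interleaved row order from flat is not what you want, a sketch of an alternative (assuming the same wrapped CCY/Decimals layout) is to stack the column blocks with pd.concat, which keeps each block contiguous:
parts = []
for i in range(0, df.shape[1], 2):         # walk the wrapped blocks two columns at a time
    block = df.iloc[:, i:i+2].copy()
    block.columns = ["CCY", "Decimals"]    # drop the .1/.2 suffixes pandas added
    parts.append(block)
result = pd.concat(parts, ignore_index=True)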

How to apply a formula for removing data noise in R?

I am working on NGSim traffic data: a text file with 18 columns and 1180598 rows. I want to smooth the position data in the column 'Local Y'. I know there are built-in functions for data smoothing in R, but none of them seem to match the formula I am required to apply. The data in the text file looks something like this:
Index VehicleID Total_Frames Local Y
1 2 5 35.381
2 2 5 39.381
3 2 5 43.381
4 2 5 47.38
5 2 5 51.381
6 4 8 504.828
7 4 8 508.325
8 4 8 512.841
9 4 8 516.338
10 4 8 520.854
11 4 8 524.592
12 4 8 528.682
13 4 8 532.901
14 5 7 39.154
15 5 7 43.153
16 5 7 47.154
17 5 7 51.154
18 5 7 55.153
19 5 7 59.154
20 5 7 63.154
The columns above are just an example taken from the original file. Here you can see 3 vehicles, with vehicle IDs 2, 4 and 5, but in fact there are 2169 vehicles with different IDs. The Total_Frames column tells us how many times each vehicle's ID is repeated; for example, in the table above vehicle ID 2 is repeated 5 times, hence the '5' in the Total_Frames column. The following is the formula I am required to apply to remove data noise (smoothing) from column 'Local Y':
Smoothed Position Value_i = ( Σ_{k=i-D..i+D} exp(-|i-k|/delta) * y_k ) / ( Σ_{k=i-D..i+D} exp(-|i-k|/delta) )
where,
y_k = the Local Y value at row k
i = index #
delta = 5
D = 15
I have tried the built-in functions I know of, but they don't smooth the data as required. My question is: is there any built-in function in R which can do the data smoothing according to the given formula, or which could take this formula as an argument? I need to apply the formula to every value in Local Y, using the 15 values before and 15 values after it (i-D to i+D) for the same vehicle ID. Can anyone give me any idea how to approach the problem? Thanks in advance.
You can place your formula in a function and then use R's apply family to apply it to the elements of the 'Local Y' column of your data frame, one vehicle ID at a time.
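To make the formula concrete, here is a minimal sketch of the computation in Python/numpy rather than R (the same windowed weighting translates directly to an R function applied per vehicle); the function name, the array name local_y, and the clipping of the window at the series boundaries are all assumptions on my part:
import numpy as np

def smooth_positions(local_y, delta=5.0, D=15):
    # Exponentially weighted moving average from the question's formula:
    # smoothed_i = sum_k exp(-|i-k|/delta) * y_k / sum_k exp(-|i-k|/delta)
    y = np.asarray(local_y, dtype=float)
    out = np.empty_like(y)
    for i in range(len(y)):
        # clip the window [i-D, i+D] at the ends of the series (the question
        # does not say how boundaries should be handled)
        lo, hi = max(0, i - D), min(len(y), i + D + 1)
        w = np.exp(-np.abs(i - np.arange(lo, hi)) / delta)
        out[i] = np.sum(w * y[lo:hi]) / np.sum(w)
    return out
This should be applied per vehicle (e.g. by splitting the data on VehicleID first) so that the window never mixes samples from different vehicles.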
