How to get the numeric ID of a vertex or edge in igraph?

I have a small network "g" and I want to know the numeric ID associated with each vertex and edge of this graph. How can I do it?
g <- graph_from_literal(A--B, B--C, E--F, G--H, D--H)
I suppose that each vertex has a numeric ID; in this case, A=1, B=2, C=3, E=4, F=5, G=6, H=7, D=8.
That is easy to work out because the network is small, but for a large network, how do I find the numeric ID of a specific vertex, edge, or pair of nodes?

Getting the vertex ID from its name:
> which(V(g)$name == "C")
[1] 3
Getting the edge ID from the endpoints of an edge:
> get.edge.ids(g, c("B", "C"))
[1] 2
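Putting both together with the graph from the question, here is a small sketch that also lists all of the IDs at once (nothing here beyond standard igraph calls):
library(igraph)
g <- graph_from_literal(A--B, B--C, E--F, G--H, D--H)
# Vertex IDs are simply 1..vcount(g), in the order the names first appear
data.frame(id = as.integer(V(g)), name = V(g)$name)
# Single vertex ID by name
which(V(g)$name == "C")        # 3
# Edge IDs are 1..ecount(g); look one up from its endpoints
get.edge.ids(g, c("B", "C"))   # 2
# Or list every edge with its ID and endpoints
cbind(id = seq_len(ecount(g)), as_edgelist(g))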

Related

TomTom OpenLR Binary Version 3, Differences and mistakes of the OpenLR Whitepaper and the TomTom Demo Tool

I am currently trying to decode OpenLR binary strings (version 3) as specified in the OpenLR Whitepaper (version 1.5, rev 2), which can be found on the OpenLR Association website.
Unfortunately, while comparing my results with the TomTom Demo Tool, I found some key differences, mistakes, and missing information.
For example, decoding this binary string with the Demo Tool (enable "show decoding steps")
Cwl3syVRnAELHgor/6YBBw0DgP61AQwnDGz94AEIGQe1/j4BCj0TSv9NAXYZJw==
shows that:
Negative byte values for relative coordinates have to be flipped and 1 added to obtain the actual value. The OpenLR Whitepaper is missing this step.
Is this step also necessary for absolute coordinates, or only for relative coordinates?
The calculation of relative coordinates is described as being the same as the calculation for absolute coordinates:
(int - sgn(int) * 0.5) * 360 / 2^resolution
(resolution = 24 for absolute, 16 for relative), but it becomes obvious that this equation does not lead to the correct value.
Using the values and formula as shown, the calculation would lead to a value of -0.49, not -0.0009. Instead, for relative coordinates, the (possibly flipped and incremented) concatenated byte value has to be divided by 10^5 to obtain -0.0009.
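For comparison, here is the whitepaper's absolute-coordinate formula as I read it, written as a small R sketch (int24 stands for the signed 24-bit integer built from the three coordinate bytes; the function name is mine):
decode_absolute <- function(int24, resolution = 24) {
  # Whitepaper formula: (int - sgn(int) * 0.5) * 360 / 2^resolution
  (int24 - sign(int24) * 0.5) * 360 / 2^resolution
}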
For Location Reference Points with 0 < n < 2 (the second LRP, and thus the first relative coordinate), the final addition of the values is somehow correct, but for n >= 2 the differences get bigger and the result is wrong. This can easily be seen in the Demo Tool's calculation: the addition of the correctly decoded byte values is simply wrong.
This leads to big differences in the final location. The resulting value of the Demo Tool is correct, as the described locations follow the streets, while using the supposedly correct sum would shift them off the street. So the equation is missing some key aspects.
Also, the OpenLR Whitepaper describes adding the relative coordinate value to the previous LRP (comparing the values used in the Demo Tool shows that the first LRP is being used instead of the previous LRP).
Which formula is the correct one? The Demo Tool generates correct values but uses wrong calculations.
Edit: for the third LRP, I found that using the previous LRP leads to the value calculated by the online tool (even though the tool shows the first LRP value being used).
For reference and comparison, some examples:
Binary string of the above example:
Cwl3syVRnAELHgor/6YBBw0DgP61AQwnDGz94AEIGQe1/j4BCj0TSv9NAXYZJw==
Differences:
Using the correct sum of the relative coordinate value and LRP 0, the first two LRPs are correct, then it gets worse (this can also be checked by verifying the Demo Tool's sum for LRPs 3-6).
The Demo Tool uses a wrong calculation, but the final values shown are correct, as they follow the street. The result seems to be mirrored along a horizontal line going through the second LRP (the first relative coordinate).
I'd be very thankful for any hints on how to solve this correctly.
Steps done:
I wrote a decoder according to the whitepaper and contacted TomTom Support who asked me to discuss this issue here. I double-checked the calculations and found mistakes in the demo tool as well as in the OpenLR white paper specification.
I solved it.
The calculation for relative coordinates:
If the first bit of the concatenated bytes (of lat or lon) is 1 (a negative value):
byteValueRel = byteValue - 2^16
In any case, divide the (possibly adjusted) value by 10^5:
byteValueRel = byteValueRel / 10^5
The resulting relative coordinate is the sum of the previous LRP value and the calculated relative value:
previousLrpValue + byteValueRel
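A minimal R sketch of this calculation, assuming the two offset bytes have already been read as unsigned values (the function and argument names are mine):
decode_relative_offset <- function(high_byte, low_byte) {
  value <- high_byte * 256 + low_byte       # concatenate the two bytes
  if (value >= 2^15) value <- value - 2^16  # first bit set -> negative (two's complement)
  value / 10^5                              # scale; add this to the previous LRP value
}
# e.g. lat_n <- lat_previous + decode_relative_offset(byte1, byte2)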

Trying to contour a CSV file in QGIS

I have rainfall data which I have imported as a csv file. It's 185 lines like this:
Name, Longitude, Latitude, Elevation, TotalPrecipitation
BURLINGTON, -72.932, 41.794, 505, M
BAKERSVILLE, -73.008, 41.842, 686, 42.40
BARKHAMSTED, -72.964, 41.921, 710, M
NORFOLK 2 SW, -73.221, 41.973, 1340, 44.22
Looking at the layer properties, the latitude and longitude are brought in as "double", but the rainfall amounts come in as "text", so I can't contour them.
How can I get beyond this point, and where do I go to do the contouring? Do I go to Vector: Contour? Will it understand that M is missing data, or will the Ms still exist if this is converted to "double"?
I'm a little confused. Thanks for the help.
I think I might have an idea that will help.
Since you have points located randomly across some area, you could do as follows:
Load the CSV into QGIS to create a point layer with an attribute table that includes your most important value, Total Precipitation. Let's call it the TEST layer.
Processing Toolbox -> TIN Interpolation -> select the TEST layer. As the interpolation attribute choose "Total Precipitation", and use the green "+" symbol to add this selection. Don't forget the Extent option, where you can define the bounds of your interpolation; preferably don't exceed the extent of the layer you are working on. Output raster size is also important - avoid a small number of rows; optionally put them at about 10 to keep the interpolation efficient.
https://www.qgistutorials.com/en/docs/3/interpolating_point_data.html
Main bar -> Raster -> Extraction -> Contour
As the input layer select TEST; the interval between contour lines can be 10 (10 mm in your case); for Attribute name put PRECIPITATION -> click Run.
Your precipitation lines are ready! Now you can Right-Click -> Properties -> Symbology (change the color) or -> Labels (provide labels based on your attribute column, Total Precipitation).
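One caveat the steps above don't cover: the "M" entries are what make QGIS read TotalPrecipitation as text, and the interpolation needs a numeric field. A possible pre-cleaning step outside QGIS, sketched here in R (the file name rainfall.csv is an assumption):
# Read the CSV, treating "M" as missing so TotalPrecipitation becomes numeric
rain <- read.csv("rainfall.csv", na.strings = "M", strip.white = TRUE)
str(rain$TotalPrecipitation)   # should now be numeric, with NA where "M" was
# Drop the missing stations and write a cleaned file to load into QGIS
write.csv(rain[!is.na(rain$TotalPrecipitation), ], "rainfall_clean.csv", row.names = FALSE)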

How does rstan store posterior samples for separate chains?

I would like to understand how the output of extract in rstan orders the posterior samples. I understand that I can view the posterior samples from each chain by using as.array,
stanfit <- sampling(
  model,
  data = stan.data)

fitarray <- as.array(stanfit)
For example, fitarray[, 2, 1] will give me the samples for the second chain of the first parameter. One way to store the posterior samples in the output of extract would be just to concatenate them. When I do,
fit <- extract(stanfit)
mean(fitarray[,2,1]) == mean(fit$ss[1001:2000])
for several chains and parameters I always get TRUE (ss is the first parameter). This makes it seem like the posterior samples are being concatenated in fit. However, when I do,
fitarray[,2,1] == fit$ss[1001:2000]
I get FALSE (I have confirmed that it is not just a precision difference). It appears that fitarray and fit are storing the iterations differently. How do I view the iterations (in order) of each chain's posterior samples separately?
As can be seen from rstan:::as.array.stanfit, the as.array method is essentially defined as
extract(x, permuted = FALSE, inc_warmup = FALSE)
Your default use of extract discards the warmup but randomly permutes the post-warmup draws, which is why the indices do not line up with the as.array output.
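To see the draws of each chain in sampling order, call extract with permuted = FALSE yourself; a small sketch (the parameter name ss is taken from the question):
# iterations x chains x parameters array, warmup excluded, no permutation
draws <- extract(stanfit, permuted = FALSE, inc_warmup = FALSE)
# Iterations of chain 2 for parameter "ss", in the order they were sampled
chain2_ss <- draws[, 2, "ss"]
# This matches the as.array view element for element
all(chain2_ss == as.array(stanfit)[, 2, "ss"])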

Rounding with Access

With Microsoft Access 2010, I have two Single fields:
A = 1.1
B = 2.1
I create a query where I have defined C=A*B
Microsoft Access says that C = 2.30999994277954,
but, in reality, C = 2.31.
How can I get the right result (2.31)?
Slightly off results from operations performed on decimal values can happen if your numeric field size is single or double rather than decimal. Single and double (or floating point) numbers are very close approximations of the "true" numbers, but should not be relied upon if accuracy in operations is required. A related stackoverflow question has more information about this issue: Access comparing floating-point numbers "incorrectly"
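A small sketch of what is going on, with R standing in for Access: round-tripping the values through 4-byte storage, the way a Single field stores them, should reproduce roughly the value Access reports.
# Store a double as a 4-byte float and read it back, mimicking a Single field
as_single <- function(x) readBin(writeBin(x, raw(), size = 4), "double", size = 4)
a <- as_single(1.1)                    # about 1.10000002384186
b <- as_single(2.1)                    # about 2.09999990463257
print(as_single(a * b), digits = 15)   # about 2.30999994277954, not 2.31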
If it's possible to modify the underlying table's design, you should change the field size property for the "A" and "B" fields from single to decimal. After changing the field size BUT BEFORE saving the table, you will also need to adjust the Scale property for "A" and "B" from 0 to whatever number of places to the right of the decimal point you might require. You will likely still have a notice about losing data, but if you adjust the field properties correctly before saving the table, this shouldn't be a problem. You should probably make a copy of the table before doing this so that you can verify that there was no data loss. After saving your table and verifying the changes did not result in data loss, your query should represent A * B accurately.

Using Google Maps API to find average speed at a location

I am trying to get the current traffic conditions at a particular location. The GTrafficOverlay object mentioned here only provides an overlay on an existing map.
Does anyone know how I can get this data from Google using their API?
It is only theoretical, but there is perhaps a way to extract this data using the Distance Matrix API.
Method
1)
Make a topological road network, with nodes and edges.
Each edge will have four attributes: [EDGE_NUMBER; EDGE_SPEED; EDGE_TIME; EDGE_LENGTH]
You can use OpenStreetMap data to create this network.
At the beginning, each edge will have the same road speed, for example 50 km/h.
You need to use only the drive links and delete the other edges. Also take into account that some roads are one-way.
2)
Randomly choose two nodes that are no closer than 5 or 10 km.
Use Dijkstra's shortest path algorithm to calculate the shortest path between these two nodes (cost = EDGE_TIME). Use your topological network to do that. The output will look like:
NODE = [NODE_23,NODE_44] PATH = [EDGE_3,EDGE_130,EDGE_49,EDGE_39]
Calculate the time needed to drive between the two nodes with the Distance Matrix API.
Preallocate a matrix A of size N x number_of_edges filled with zeros.
Preallocate a vector B of size N x 1 filled with zeros.
In the first row of matrix A, fill each column (corresponding to each edge) with the length of the edge if that edge is in the path.
[col_1,col_2,col_3,...,col_39,...,col_49,...,col_130]
[0, 0, len_3,...,len_39,...,len_49,...,len_130] %row 1
In the first entry of B, put the time calculated with the Distance Matrix API.
Then select two new nodes that were not used in the first path and repeat the operation until no nodes are left (so you will fill row 2, row 3, ...).
Now you can solve the linear equation system Ax = B, where speed = 1/x (see the sketch after step 3).
Assign the newly calculated speed to each edge.
3)
Iterate step 2) until your calculated speeds start to converge.
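A minimal R sketch of the solving step in 2), assuming the path rows and the measured travel times have already been collected (the matrix A, the times B, and the edge lengths below are made-up illustrative values):
# Each row of A holds the length (km) of every edge used by one sampled path, 0 otherwise
A <- rbind(c(2.0, 0.0, 3.5),   # path 1 uses edges 1 and 3
           c(2.0, 1.2, 0.0),   # path 2 uses edges 1 and 2
           c(0.0, 1.2, 3.5))   # path 3 uses edges 2 and 3
# Travel times in hours returned by the Distance Matrix API for the three paths
B <- c(0.10, 0.07, 0.09)
# Solve A x = B in the least-squares sense; x is 1/speed per edge
x <- qr.solve(A, B)
edge_speed <- 1 / x            # km/h estimate for each edge: 50, 40, ~58
edge_speed
With real, noisy travel times you would want many more paths than edges and a solver that keeps x positive, but the idea is the same.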
Comment
I'm not sure that the calculated speeds will converge; it would be interesting to test the method. I will try to do that if I get some time.
The Distance Matrix API doesn't provide a travel time more precise than 1 minute; that's why the distance between each pair of nodes needs to be at least 5 or 10 km, or more.
Also, this method fails to respect Google's terms of service.
Google does not make available public API for this data.
Yahoo has a feed (example) with traffic conditions -- construction, accidents, and such. A write-up on how to access it is here.
If you want actual road speeds, you will probably need to work with a commercial provider.