Random graph generator - igraph

I am interested in generating weighted, directed random graphs with node constraints. Is there a graph generator in R or Python that is customizable? The only one I am aware of is igraph's erdos.renyi.game() but I am unsure if one can customize it.
Edit: the customizations I want to make are 1) drawing a weighted graph and 2) constraining some nodes from drawing edges.

In python-igraph you can use the Graph.Erdos_Renyi() method.
As for some nodes not drawing edges, that is governed by the p value: it is the probability that each possible edge is present, so with a small p some vertices end up with no edges at all.
Erdos_Renyi(n, p, m, directed=False, loops=False) #these are the defaults
Example:
from igraph import *
g = Graph.Erdos_Renyi(10,0.1,directed=True)
plot(g)
By setting the p=0.1 you can see that some nodes do not have edges.
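If you need to guarantee that particular nodes never draw edges, rather than leaving it to the randomness of p, one minimal sketch is to delete every edge touching those vertices after generation (the blocked list is hypothetical, and Graph.incident with mode="all" assumes a reasonably recent python-igraph):
from igraph import Graph, plot

g = Graph.Erdos_Renyi(10, 0.3, directed=True)
blocked = [0, 5]  # hypothetical vertices that must stay edge-free
for v in blocked:
    g.delete_edges(g.incident(v, mode="all"))  # drop in- and out-edges
plot(g)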
For the weights you can do something like:
g.ecount() # to find the number of edges
g.es["weights"] = range(1, g.ecount())
g.es["label"] = weights
plot(g)
Result: a plot of the directed graph with the edge weights displayed as labels.

Interpretation of yolov5 output

I am making a face mask detection project and I trained my model using ultralytics/yolov5. I saved the trained model as an ONNX file; you can find the model file here: model.onnx. Now I want to use this model.onnx with OpenCV to detect face masks in real time. The input image size during training was 320×320. You can visualize this model using netron.
I have written this code to capture the image using webcam and pass it to model.onnx to predict my bounding boxes. The code is as follows:
import json

import numpy as np
import onnxruntime

def predict(img):
    # model_path points at the model.onnx file mentioned above
    session = onnxruntime.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    output_name = session.get_outputs()[0].name
    img = img.reshape((1, 3, 320, 320))  # NCHW layout expected by the model
    # round-trip through JSON, then back into a float32 array
    data = json.dumps({'data': img.tolist()})
    data = np.array(json.loads(data)['data']).astype('float32')
    result = session.run([output_name], {input_name: data})
    result = np.array(result)
    print(result.shape)
The output of result.shape is (1, 1, 3, 40, 40, 85)
Can anyone help me interpret this shape, and explain how I can use this result array to predict my class, bounding box and confidence?
I've never worked with a pure yolov5 model, but here's the output format for yolov5s. It looks like it should be similar. (Your (1, 1, 3, 40, 40, 85) looks like the raw output of a single detection layer, i.e. (batch, anchor, grid y, grid x, 85), with an extra leading dimension added by wrapping the list returned by session.run in np.array.)
Output tensor structure (yolov5s):
output_tensor[a, b, c]
a -> image index (if your input is a batch of images, this tells you which image's output you're looking at; if your input is just one image, leave this as 0)
b -> index of the proposed bounding box
c -> information about that bounding box:
0, 1 -> x and y coordinate of the bounding box center
2, 3 -> width and height of the bounding box
4 -> bounding box (objectness) confidence
5 - 84 -> individual class confidences
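Under that layout, a minimal NumPy sketch for extracting the class, box and confidence of each proposal could look like this (assuming pred has already been reshaped to (num_proposals, 85); the threshold value is an arbitrary choice, not part of the model):
import numpy as np

def decode(pred, conf_thres=0.25):
    # pred: (num_proposals, 85) array from the flattened yolov5 output
    boxes = pred[:, 0:4]                   # x_center, y_center, width, height
    obj_conf = pred[:, 4]                  # objectness score
    cls_conf = pred[:, 5:]                 # 80 per-class confidences
    scores = obj_conf[:, None] * cls_conf  # combined confidence per class
    class_ids = scores.argmax(axis=1)      # best class per proposal
    best = scores[np.arange(len(pred)), class_ids]
    keep = best > conf_thres               # drop low-confidence proposals
    return boxes[keep], class_ids[keep], best[keep]
You would still run non-maximum suppression on the surviving boxes afterwards.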

Freefem++: Solving poisson equation with numerical function

I am using Freefem++ to solve the poisson equation
∇²u(x, y, z) = -f(x, y, z)
It works well when I have an analytical expression for f, but now I have an f numerically defined (i.e. a set of data defined on a mesh) and I am wondering if I can still use Freefem++.
I.e., typical code (for a 2D problem in this case) looks like the following:
mesh Sh= square(10,10); // mesh generation of a square
fespace Vh(Sh,P1); // space of P1 Finite Elements
Vh u,v; // u and v belongs to Vh
func f=cos(x)*y; // analytical function
problem Poisson(u,v)= // Definition of the problem
int2d(Sh)(dx(u)*dx(v)+dy(u)*dy(v)) // bilinear form
-int2d(Sh)(f*v) // linear form
+on(1,2,3,4,u=0); // Dirichlet Conditions
Poisson; // Solve Poisson Equation
plot(u); // Plot the result
I am wondering if I can define f numerically, rather than analytically.
Mesh & space Definition
We define a square unit with Nx=10 mesh and Ny=10 this provides 11 nodes on x axis and the same for y axis.
int Nx=10,Ny=10;
int Lx=1,Ly=1;
mesh Sh= square(Nx,Ny,[Lx*x,Ly*y]); //this is the same as square(10,10)
fespace Vh(Sh,P1); // a space of P1 Finite Elements to use for u definition
Conditions and problem statement
We are not going to use solve; instead we will assemble and manipulate the matrices ourselves (a more flexible way of solving with FreeFem).
First we define the boundary conditions for our problem (Dirichlet ones, called CL below).
varf CL(u,psi)=on(1,2,3,4,u=0); //drop borders here according to your problem
Vh u=0;u[]=CL(0,Vh);
matrix GD=CL(Vh,Vh);
Then we define the problem. Instead of writing dx(u)*dx(v)+dy(u)*dy(v) I suggest using a macro, so we define grad as follows; but pay attention, a macro finishes with // and NOT with ;.
macro grad(u) [dx(u), dy(u)] //
So the Poisson bilinear form becomes:
varf Poisson(u,v)= int2d(Sh)(grad(u)'*grad(v));
Then we extract the stiffness matrix:
matrix K=Poisson(Vh,Vh);
matrix KD=K+GD; //we add CL defined above
We set up the solver; UMFPACK is simply one of the direct solvers available in FreeFem, nothing special here.
set(KD,solver=UMFPACK);
And here is what you need: you want to set the value of the function f at some specific nodes. The trick is the Poisson linear form.
real[int] b=Poisson(0,Vh);
You can now set the value of f at any node you want:
b[100]+=20; //for example, f equals 20 at node 100
b[50]+=50; //and 50 at node 50
We solve our system.
u[]=KD^-1*b;
Finally we get the plot.
plot(u,wait=1);
I hope this helps. Thanks to my internship supervisor Olivier, who always shares tricks with me, especially on FreeFem. I tested it and it works very well. Good luck.
The method by afaf works when the function f is a free-standing term. For terms like int2d(Sh)(f*u*v), another solution is required. I propose (actually I have read it somewhere in Hecht's manual) an approach that covers both cases. However, it works only for P1 finite elements, for which the degrees of freedom coincide with the mesh nodes.
fespace Vh(Th,P1); // Th is your mesh
Vh f;
real[int] pot(Vh.ndof); // one value per degree of freedom
for(int i=0;i<Vh.ndof;i++){
    pot[i]=something; //assign values or read them from a file
}
f[]=pot; // copy the nodal values into the FE function

Get the most probable color from a words set

Are there any existing libraries or methods that let you figure out the most probable color for a set of words? For example, given cucumber, apple, grass, it would return green. Has anyone worked in this direction before?
If I had to do that, I would try searching for images based on the words using Google Images or another image search, and detecting the most common color among the top n results.
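As a sketch of that last step, assuming the top images have already been downloaded to disk (the file paths and the Pillow dependency are my assumptions, not part of the original suggestion):
from collections import Counter
from PIL import Image

def dominant_color(paths, size=(32, 32)):
    # Downscale each image and tally coarsely quantized RGB triples;
    # the most common bucket approximates the dominant color.
    counts = Counter()
    for path in paths:
        img = Image.open(path).convert("RGB").resize(size)
        counts.update((r // 64, g // 64, b // 64) for r, g, b in img.getdata())
    return counts.most_common(1)[0][0]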
That sounds like a pretty reasonable NLP problem, and one that's very easy to handle via map-reduce.
Identify a list of words and phrases that you call colors ['blue', 'green', 'red', ...].
Go over a large corpus of sentences, and for the sentences that mention a particular color, for every other word in that sentence, note down (word, color_name) in a file. (Map Step)
Then for each word you have seen in your corpus, aggregate all the colors you have seen for it to get something like {'cucumber': {'green': 300, 'yellow': 34, 'blue': 2}, 'tomato': {'red': 900, 'green': 430}, ...} (Reduce Step)
Provided you use a large enough corpus (something like Wikipedia), and you figure out how to prune very small counts and rare words, you should be able to build a pretty comprehensive and robust dictionary mapping millions of items to their colors. A minimal sketch of the two steps follows below.
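Here is that sketch, done in memory (the sentences argument is a hypothetical corpus, pre-tokenized into lists of lowercase words; a real map-reduce job would shard this across files):
from collections import defaultdict, Counter

COLORS = {"blue", "green", "red", "yellow", "purple", "orange"}

def color_counts(sentences):
    counts = defaultdict(Counter)
    for words in sentences:
        for color in COLORS.intersection(words):  # sentence mentions a color
            for w in words:
                if w not in COLORS:
                    counts[w][color] += 1         # emit/aggregate (word, color)
    return counts

# color_counts([["the", "cucumber", "is", "green"]]) -> {'the': Counter({'green': 1}), ...}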
Another way to do it is to run a Google text search for combinations of colors and the word in question, and take the combination with the highest number of results. Here's a quick Python 2 script for that:
import urllib
import json
import itertools
def google_count(q):
    # note: the Google AJAX Search API used here has since been deprecated
    query = urllib.urlencode({'q': q})
    url = 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&%s' % query
    search_response = urllib.urlopen(url)
    search_results = search_response.read()
    results = json.loads(search_results)
    data = results['responseData']
    return int(data['cursor']['estimatedResultCount'])
colors = ['yellow', 'orange', 'red', 'purple', 'blue', 'green']
# get a list of google search counts
res = [google_count('"%s grass"' % c) for c in colors]
# pair the results with their corresponding colors
res2 = list(itertools.izip(res, colors))
# get the color with the highest score
print "%s is %s" % ('grass', sorted(res2)[-1][1])
This will print:
grass is green
Daniel's and Xi.lin's answers are very good ideas. Along the same axis, we could combine both with an approach similar to Xi.lin's but simpler: query Google Images with the word whose associated color you want to find, plus a "Color" filter (see the lower left bar), and see which color yields more results.
I would suggest using a tightly defined set of sources if possible such as Wikipedia and Wordnet.
Here, for example, is Wordnet for "panda":
S: (n) giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca
(large black-and-white herbivorous mammal of bamboo forests of China and Tibet;
in some classifications considered a member of the bear family or of a separate
family Ailuropodidae)
S: (n) lesser panda, red panda, panda, bear cat, cat bear,
Ailurus fulgens (reddish-brown Old World raccoon-like carnivore;
in some classifications considered unrelated to the giant pandas)
Because of the concise, carefully constructed language it is highly likely that any colour words will be important. Here you can see that pandas are both black-and-white and reddish-brown.
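If you want to automate that lookup, a small sketch against NLTK's WordNet interface could look like this (the COLORS set is a stub you would flesh out; NLTK and its wordnet corpus are assumed to be installed):
from nltk.corpus import wordnet as wn

COLORS = {"black", "white", "red", "green", "blue", "brown", "yellow"}

def colors_from_wordnet(word):
    # Scan every synset gloss for color words; hyphenated compounds
    # such as "black-and-white" are split so their parts can match.
    found = set()
    for syn in wn.synsets(word):
        for token in syn.definition().replace("-", " ").lower().split():
            token = token.strip("();,.")
            if token in COLORS:
                found.add(token)
    return found

print(colors_from_wordnet("panda"))  # e.g. {'black', 'white', 'brown'}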
If you identify subsections of Wikipedia (e.g. "Botanical Description") this will help to increase the relevance of your results. Also the first image in Wikipedia is very likely to be the best "definitive" one.
But, as with all statistical methods, you will get false positives (and negatives, though these are probably less of a problem).

Matplotlib/Pyplot: How to zoom subplots together?

I have plots of 3-axis accelerometer time-series data (t,x,y,z) in separate subplots I'd like to zoom together. That is, when I use the "Zoom to Rectangle" tool on one plot, when I release the mouse all 3 plots zoom together.
Previously, I simply plotted all 3 axes on a single plot using different colors. But this is useful only with small amounts of data: I have over 2 million data points, so the last axis plotted obscures the other two. Hence the need for separate subplots.
I know I can capture matplotlib/pyplot mouse events (http://matplotlib.sourceforge.net/users/event_handling.html), and I know I can catch other events (http://matplotlib.sourceforge.net/api/backend_bases_api.html#matplotlib.backend_bases.ResizeEvent), but I don't know how to tell what zoom has been requested on any one subplot, and how to replicate it on the other two subplots.
I suspect I have all the pieces, and need only that one last precious clue...
-BobC
The easiest way to do this is by using the sharex and/or sharey keywords when creating the axes:
from matplotlib import pyplot as plt
ax1 = plt.subplot(2,1,1)
ax1.plot(...)
ax2 = plt.subplot(2,1,2, sharex=ax1)
ax2.plot(...)
You can also do this with plt.subplots, if that's your style.
fig, axes = plt.subplots(3, 1, sharex=True, sharey=True)
Because the axes are shared, this also works when setting the limits programmatically:
for ax in fig.axes:
    ax.set_xlim(0, 50)
fig.canvas.draw()
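Applied to the question's accelerometer data, a minimal self-contained sketch (the signals below are placeholders, not real sensor data) would be:
import numpy as np
from matplotlib import pyplot as plt

# Placeholder signals standing in for the 3-axis accelerometer data.
t = np.linspace(0, 10, 2000)
x, y, z = np.sin(t), np.cos(t), np.sin(2 * t)

# sharex ties the time axes together, so zoom-to-rectangle on any one
# subplot rescales all three.
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True)
ax1.plot(t, x)
ax2.plot(t, y)
ax3.plot(t, z)
plt.show()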

How does this work in computing the depth map?

From this site: http://www.catalinzima.com/?page_id=14
I've always been confused about how the depth map is calculated.
The vertex shader function calculates position as follows:
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.TexCoord = input.TexCoord; //pass the texture coordinates further
    output.Normal = mul(input.Normal, World); //get normal into world space
    output.Depth.x = output.Position.z;
    output.Depth.y = output.Position.w;
    return output;
}
What are output.Position.z and output.Position.w? I'm not sure as to the maths behind this.
And in the pixel shader there is this line: output.Depth = input.Depth.x / input.Depth.y;
So output.Depth is output.Position.z / output.Position.w? Why do we do this?
Finally in the point light shader (http://www.catalinzima.com/?page_id=55) to convert this output to be a position the code is:
//read depth
float depthVal = tex2D(depthSampler,texCoord).r;
//compute screen-space position
float4 position;
position.xy = input.ScreenPosition.xy;
position.z = depthVal;
position.w = 1.0f;
//transform to world space
position = mul(position, InvertViewProjection);
position /= position.w;
Again I don't understand this. I sort of see why we use InvertViewProjection, as we multiplied by the view projection previously, but setting z to the depth value and w to 1, then dividing the whole position by w, confuses me quite a bit.
To understand this completely, you'll need to understand how the algebra that underpins 3D transforms works. SO does not really help (or I don't know how to use it) with matrix math, so it'll have to be without fancy formulas. Here is some high-level explanation though:
If you look closely, you'll notice that all the transformations that happen to a vertex position (from model to world to view to clip coordinates) use 4D vectors. That's right. 4D. Why, when we live in a 3D world? Because in that 4D representation, all the transformations we usually want to apply to vertices are expressible as a matrix multiplication. This is not the case if we stay in a 3D representation. And matrix multiplications are what a GPU is good at.
What does a vertex in 3D correspond to in 4D? This is where it gets interesting. The point (x, y, z) corresponds to the line (a·x, a·y, a·z, a). We can grab any point on this line to do the math we need, and we usually pick the easiest one, a=1 (that way, we don't have to do any multiplication, we just set w=1).
So that answers pretty much all the math you're looking at. To project a 3D point into 4D we set w=1; to get back a component from a 4D vector that we want to compare against our standard sizes in 3D, we divide that component by w.
This coordinate system, if you want to dive deeper, is called homogeneous coordinates.
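To make the round trip concrete, here is a small NumPy sketch of the lift to 4D, the perspective divide, and the reconstruction step from the point-light shader (the identity matrix is a placeholder, not a real camera; swap in your own View * Projection):
import numpy as np

view_projection = np.eye(4)       # placeholder for View * Projection
point_3d = np.array([1.0, 2.0, 3.0])

p = np.append(point_3d, 1.0)      # lift to 4D: set w = 1
clip = view_projection @ p        # what the vertex shader outputs
depth = clip[2] / clip[3]         # z / w, the value stored in the depth map

# Reconstruction, as in the point-light shader: rebuild a clip-space
# position from (x/w, y/w, depth, 1), invert, then divide by w again.
ndc = clip / clip[3]
inv_vp = np.linalg.inv(view_projection)
rebuilt = inv_vp @ np.array([ndc[0], ndc[1], depth, 1.0])
world = rebuilt[:3] / rebuilt[3]  # recovered 3D position
print(world)                      # [1. 2. 3.] with the identity placeholder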