Matplotlib/Pyplot: How to zoom subplots together?

I have plots of 3-axis accelerometer time-series data (t, x, y, z) in separate subplots that I'd like to zoom together. That is, when I use the "Zoom to Rectangle" tool on one plot and release the mouse, all 3 plots should zoom together.
Previously, I simply plotted all 3 axes on a single plot using different colors. But this is useful only with small amounts of data: I have over 2 million data points, so the last axis plotted obscures the other two. Hence the need for separate subplots.
I know I can capture matplotlib/pyplot mouse events (http://matplotlib.sourceforge.net/users/event_handling.html), and I know I can catch other events (http://matplotlib.sourceforge.net/api/backend_bases_api.html#matplotlib.backend_bases.ResizeEvent), but I don't know how to tell what zoom has been requested on any one subplot, and how to replicate it on the other two subplots.
I suspect I have all the pieces, and need only that one last precious clue...
-BobC

The easiest way to do this is by using the sharex and/or sharey keywords when creating the axes:
from matplotlib import pyplot as plt
ax1 = plt.subplot(2,1,1)
ax1.plot(...)
ax2 = plt.subplot(2,1,2, sharex=ax1)
ax2.plot(...)

You can also do this with plt.subplots, if that's your style.
fig, ax = plt.subplots(3, 1, sharex=True, sharey=True)
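For the 3-axis accelerometer case in the question, a minimal end-to-end sketch might look like this (the t, x, y, z arrays are made-up stand-ins for the real data); using the "Zoom to Rectangle" tool on any subplot then zooms all three:
import numpy as np
from matplotlib import pyplot as plt

# Made-up stand-ins, at roughly the scale mentioned in the question.
t = np.linspace(0, 100, 2_000_000)
x, y, z = np.sin(t), np.cos(t), np.sin(2 * t)

fig, (ax_x, ax_y, ax_z) = plt.subplots(3, 1, sharex=True, sharey=True)
ax_x.plot(t, x)
ax_y.plot(t, y)
ax_z.plot(t, z)
plt.show()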

You can also drive separate (non-shared) axes programmatically, even in an interactive session:
for ax in fig.axes:
    ax.set_xlim(0, 50)
fig.canvas.draw()
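If the subplots already exist and sharing wasn't requested at creation time, one alternative (a sketch, not the only way) is to listen for matplotlib's 'xlim_changed' axes callback and copy the new limits onto the other subplots yourself:
import numpy as np
from matplotlib import pyplot as plt

fig, axes = plt.subplots(3, 1)  # deliberately created without sharex
for ax in axes:
    ax.plot(np.random.rand(100))

def sync_xlim(changed_ax):
    # Copy the new x-limits onto the other axes; emit=False keeps the
    # callback from re-triggering itself in an endless loop.
    for other in axes:
        if other is not changed_ax:
            other.set_xlim(changed_ax.get_xlim(), emit=False)
    changed_ax.figure.canvas.draw_idle()

for ax in axes:
    ax.callbacks.connect('xlim_changed', sync_xlim)

plt.show()
This is essentially what the sharex machinery does for you, which is why the keyword is the easier route.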

Related

Create a DataSet with multiple labels and unknown number of classes in deeplearning4j

What DataSetIterator should I use to create a DataSet object that contains multiple features and labels? I have only seen examples similar to the 'Iris example', where there is only one label and it is known how many different labels there are. In my problem there are four labels (position X, position Y, width and height of a shape) and many features (pixel values), and it's impossible to calculate how many different label values there could be.
I want something like this
RecordReader recordReader = new CSVRecordReader(0, ';'); // semicolon delimiter, matching the data below
recordReader.initialize(new FileSplit(new File(fileName)));
DataSetIterator iterator = new CustomDataSetIterator(recordReader, numRows, numFeatures, numLabels);
DataSet allData = iterator.next();
Using data that looks like this
feature0;feature1;feature2;feature3;label0;label1;
I know this question seems very basic, and it is, but I really had a hard time finding any information about this topic in the official tutorials or documentation.
It seems like you are looking for object-detection-style data, with bounding boxes and multiple possible objects in your picture.
Take a look at this example for that: https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/convolution/objectdetection/HouseNumberDetection.java
In general, there is a MultiDataSet class that can take multiple inputs and can have multiple outputs.

Cut ultrasound signal between specific values using Octave

I have an ultrasound wave (graph axes: volts vs. microseconds) and need to cut the signal/wave between two specific values to further analyze this clipping. My idea is to cut the signal at 0.2 V (y-axis). The wave is sine shaped, as shown in the figure, with the desired cutoff points in red.
In my current code, I'm cutting the signal between 1900 to 4000 ms (x-axis) (Aa = A(1900:4000);) and then I want to make the aforementioned clipping and proceed with the code.
Does anyone know how I could do this y-axis clipping?
Thanks!! :)
clear
clf
pkg load signal

for k = 1:2
  w = 1;
  filename = strcat("PCB 2.1 (", sprintf("%01d", k), ").mat");
  load(filename)
  Lthisrun = length(A);
  Pico(k, 1:Lthisrun) = A;
  Aa = A(1900:4000);        % horizontal window
  Ah = abs(hilbert(Aa));    % envelope of the windowed signal
  step = 100;
  hold on
  i = 1;
  Ac = 0;
  for index = 1:step:3601
    Ac(i+1) = Ac(i) + Ah(i);
    i = i + 1;
    r(k) = trapz(Ac);
  end
end
OK, you want to look only at values 'above the noise' in your data, or, in this case, 'clip out' everything below 0.2 V. The easiest way to do this is with logical indexing: you can take an array and create a subarray that eliminates everything that doesn't meet a certain logical condition. See this example:
f = @(x) sin(x)./x;     % example signal
x = -100:.1:100;
y = f(x);
plot(x, y);
figure;
x_trim = x(y > 0.2);    % keep only the samples where y exceeds 0.2
y_trim = y(y > 0.2);
plot(x_trim, y_trim);
From your question it looks like you want to do the clipping after applying the horizontal windowing from 1900 to 4000 (you say those are milliseconds, but your image shows the pulse arriving much sooner than 1900 ms). In any case, something like
Ab = Aa(Aa > 0.2);
will create another array Ab that will only contain the portions of Aa with values above 0.2. You may need to do something similar (see the example) for the horizontal axis if your x-data is not just the element index.

Custom mesh jittering in Mujoco environment in OpenAI gym

I've tried modifying the FetchPickAndPlace-v1 OpenAI environment to replace the cube with a pair of scissors. Everything works perfectly except for the fact that my custom mesh seems to jitter a few millimeters in and out of the table every few time steps. I've included a picture mid-jitter below:
As you can see, the scissors are caught mid-way through the surface of the table. How can I prevent this? All I've done is switch out the code for the cube in pick_and_place.xml with the asset related to the scissor mesh. Here's the code of interest:
<body name="object0" pos="0.0 0.0 0.0">
<joint name="object0:joint" type="free" damping="0.01"></joint>
<geom size="0.025 0.025 0.025" mesh="tool0:scissors" condim="3" name="object0" material="tool_mat" class="tool0:matte" mass="2"></geom>
<site name="object0" pos="0 0 0" size="0.02 0.02 0.02" rgba="1 0 0 1" type="sphere"></site>
</body>
I've tried playing around with the coordinates of the position and geometry but to no avail. Any tips? Replacing mesh="tool0:scissors" with type="box" gets rid of the problem entirely but I'm back to square one.
As suggested by Emo Todorov in the MuJoCo forums:
Replace the ground box with a plane and use MuJoCo 2.0. The latest version of the collision detector generates multiple contacts between a mesh and a plane, which results in more stable simulation. But this only works for plane-mesh, not for box-mesh.
The better solution is to break the mesh into several meshes, and include them as multiple geoms in the same body. Then MuJoCo will construct the convex hull of each sub-mesh, resulting in multiple contact points (even without the special plane mechanism mentioned above) and furthermore it will be a better approximation to the actual object geometry.
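As a rough illustration of that suggestion (an assumption, not part of the original environment): the sub-meshes could be produced offline with a convex-decomposition tool, e.g. the trimesh package with a VHACD backend and the hypothetical scissors.stl path below. Each exported part would then be declared as its own mesh asset and its own <geom> inside the object0 body.
import trimesh

mesh = trimesh.load("scissors.stl")         # hypothetical path to the mesh
parts = mesh.convex_decomposition()         # requires a VHACD backend
if not isinstance(parts, list):             # a single convex result may come back alone
    parts = [parts]
for i, part in enumerate(parts):
    part.export("scissors_part%d.stl" % i)  # one mesh asset / <geom> per part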

Random graph generator

I am interested in generating weighted, directed random graphs with node constraints. Is there a graph generator in R or Python that is customizable? The only one I am aware of is igraph's erdos.renyi.game() but I am unsure if one can customize it.
Edit: the customizations I want to make are 1) drawing a weighted graph and 2) constraining some nodes from drawing edges.
In python-igraph, you can use the Graph.Erdos_Renyi() generator.
As for keeping some nodes free of edges, that is only indirectly influenced by the p value: with a small edge probability, some nodes end up isolated by chance.
Erdos_Renyi(n, p, m, directed=False, loops=False) #these are the defaults
Example:
from igraph import *
g = Graph.Erdos_Renyi(10,0.1,directed=True)
plot(g)
With p=0.1 you can see that some nodes end up with no edges.
For the weights you can do something like:
g.ecount() # to find the number of edges
weights = list(range(1, g.ecount() + 1))  # one weight per edge
g.es["weight"] = weights
g.es["label"] = weights
plot(g)
Result: (plot image not shown)
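If "constraining some nodes from drawing edges" means that specific nodes must stay isolated, p alone won't guarantee it. One option (a sketch, assuming python-igraph; the blocked set is made up) is to generate the graph and then delete every edge touching the constrained nodes:
import random
from igraph import Graph, plot

g = Graph.Erdos_Renyi(n=10, p=0.3, directed=True)

# Hypothetical constraint: nodes 0 and 1 may not have any edges.
blocked = {0, 1}
g.delete_edges([e.index for e in g.es
                if e.source in blocked or e.target in blocked])

# Random weights, one per remaining edge.
g.es["weight"] = [random.randint(1, 10) for _ in range(g.ecount())]
g.es["label"] = g.es["weight"]
plot(g)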

Get the most probable color from a words set

Are there any existing libraries or methods that let you figure out the most probable color for a set of words? For example, for cucumber, apple, grass, it gives me green. Has anyone worked in this direction before?
If I had to do this, I would try searching for images based on the words using Google Images or similar, and then recognize the most common color among the top n results.
That sounds like a pretty reasonable NLP problem, and one that's very easy to handle via map-reduce.
Identify a list of words and phrases that you call colors: ['blue', 'green', 'red', ...].
Go over a large corpus of sentences, and for the sentences that mention a particular color, note down (word, color_name) in a file for every other word in that sentence. (Map step)
Then, for each word you have seen in your corpus, aggregate all the colors you have seen for it to get something like {'cucumber': {'green': 300, 'yellow': 34, 'blue': 2}, 'tomato': {'red': 900, 'green': 430}, ...}. (Reduce step)
Provided you use a large enough corpus (something like Wikipedia), and you figure out how to prune very small counts and rare words, you should be able to build a pretty comprehensive and robust dictionary mapping millions of items to their colors.
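A minimal sketch of those two steps in plain Python (the toy corpus and color list are made up; a real job would run the same logic, distributed, over something Wikipedia-sized):
from collections import Counter, defaultdict

COLORS = {"blue", "green", "red", "yellow", "black", "white", "brown"}

def color_counts(sentences):
    # Map step: emit a (word, color) pair for every other word in a
    # sentence mentioning a color. Reduce step: aggregate counts per word.
    counts = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for color in COLORS.intersection(words):
            for word in words:
                if word not in COLORS:
                    counts[word][color] += 1
    return counts

corpus = ["the cucumber is green", "green grass everywhere", "a ripe red tomato"]
print(color_counts(corpus)["cucumber"].most_common(1))   # -> [('green', 1)]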
Another way is to do a Google text search for combinations of each color and the word in question, and take the combination with the highest number of results. Here's a quick Python script for that:
import urllib.parse
import urllib.request
import json

def google_count(q):
    # queries the (long-retired) Google AJAX Search API for an estimated hit count
    query = urllib.parse.urlencode({'q': q})
    url = 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&%s' % query
    search_response = urllib.request.urlopen(url)
    search_results = search_response.read()
    results = json.loads(search_results)
    data = results['responseData']
    return int(data['cursor']['estimatedResultCount'])

colors = ['yellow', 'orange', 'red', 'purple', 'blue', 'green']
# get a list of google search counts
res = [google_count('"%s grass"' % c) for c in colors]
# pair the results with their corresponding colors
res2 = list(zip(res, colors))
# get the color with the highest count
print("%s is %s" % ('grass', sorted(res2)[-1][1]))
This will print:
grass is green
Daniel's and Xi.lin's answers are very good ideas. Along the same axis, we could combine both with an approach similar to Xi.lin's but simpler: query Google Images with the word whose associated color you want to find plus a "Color" filter (see the lower left bar), and see which color yields the most results.
I would suggest using a tightly defined set of sources if possible, such as Wikipedia and WordNet.
Here, for example, is Wordnet for "panda":
S: (n) giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca
(large black-and-white herbivorous mammal of bamboo forests of China and Tibet;
in some classifications considered a member of the bear family or of a separate
family Ailuropodidae)
S: (n) lesser panda, red panda, panda, bear cat, cat bear,
Ailurus fulgens (reddish-brown Old World raccoon-like carnivore;
in some classifications considered unrelated to the giant pandas)
Because of the concise, carefully constructed language, it is highly likely that any colour words will be important. Here you can see that pandas are both black-and-white and reddish-brown.
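As a quick sketch of that idea, one could scan WordNet glosses for colour terms via NLTK (assuming nltk and its WordNet corpus are installed; the colour list is only illustrative):
from collections import Counter
from nltk.corpus import wordnet as wn   # needs nltk.download('wordnet') once

COLORS = ["black", "white", "red", "green", "blue", "yellow",
          "brown", "grey", "orange", "purple", "pink"]

def colors_for(word):
    # Count colour terms appearing in the WordNet glosses of a word's synsets.
    counts = Counter()
    for synset in wn.synsets(word):
        gloss = synset.definition().lower()
        for color in COLORS:
            if color in gloss:
                counts[color] += 1
    return counts

print(colors_for("panda"))   # e.g. Counter({'black': 1, 'white': 1, 'red': 1, ...})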
If you identify subsections of Wikipedia (e.g. "Botanical Description"), this will help increase the relevance of your results. Also, the first image in a Wikipedia article is very likely to be the best "definitive" one.
But, as with all statistical methods, you will get false positives (and negatives, though these are probably less of a problem).
