Can't plot a vertical line in a higher timeframe based on a condition in a lower timeframe (Pine Script v4)

As shown in the screenshot I uploaded, I can't correctly plot on the M15 timeframe the vertical line that appears on the M5 timeframe.
I already backtested to see what happens when price meets the condition on M5.
It does appear correctly on M15 when the M5 condition is true, but when I switch between timeframes or just reload the page, it disappears.
I'm uploading my code here; please help me fix this. Many thanks!
//@version=4
// A big bar can be used as a support/resistance level.
study("bar_size 191022", shorttitle="Moment detect", overlay=true)
body_limit = input(15, minval=1, title="Body Limit")
// Pull the 5-minute open/close regardless of the chart timeframe.
open0 = security(syminfo.tickerid, "5", open)
close0 = security(syminfo.tickerid, "5", close)
body = abs(open0 - close0)
con_momen1 = body > body_limit
con_up1 = open0 < close0
con_buy1 = con_momen1 and con_up1
bgcolor(con_buy1 ? color.silver : na, transp=0)


Interpretation of yolov5 output

I am making a face mask detection project, and I trained my model using ultralytics/yolov5. I saved the trained model as an ONNX file; you can find the model file here: model.onnx. Now I want to use this model.onnx with OpenCV to detect face masks in real time. The input image size during training was 320*320. You can visualize this model using Netron.
I have written this code to capture the image using the webcam and pass it to model.onnx to predict my bounding boxes. The code is as follows:
import json

import numpy as np
import onnxruntime

def predict(img):
    session = onnxruntime.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    output_name = session.get_outputs()[0].name
    # NCHW layout expected by the model.
    img = img.reshape((1, 3, 320, 320))
    data = json.dumps({'data': img.tolist()})
    data = np.array(json.loads(data)['data']).astype('float32')
    result = session.run([output_name], {input_name: data})
    result = np.array(result)
    print(result.shape)
The output of result.shape is (1, 1, 3, 40, 40, 85).
Can anyone help me interpret this shape, and how can I use this result array to predict my class, bounding box, and confidence?
I've never worked with a pure yolov5 model, but here's the output format for yolov5s; it should be similar (a parsing sketch follows the list below).
Output tensor structure (yolov5s):
output_tensor[a, b, c, d]
a -> image index (if your input is a batch of images, this tells you which image's output you're looking at; if your input is just one image, leave this as 0)
b -> index of image in batch
c -> information about the bounding box:
    0, 1 -> x and y coordinates of the bounding box center
    2, 3 -> width and height of the bounding box
    4 -> bounding box confidence
    5 - 84 -> individual class confidences (80 classes)
d -> index of the proposed bounding boxes
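A minimal parsing sketch along those lines, assuming already-decoded values (the raw head of a YOLOv5 model would additionally need sigmoid activation and anchor/grid decoding, which is omitted here; the random array stands in for the session output from the question):
import numpy as np

# Stand-in for the session output from the question, shape (1, 1, 3, 40, 40, 85).
result = np.random.rand(1, 1, 3, 40, 40, 85).astype('float32')

# One row of 85 values per proposed box:
# x, y, w, h, objectness, then 80 class scores.
preds = result.reshape(-1, 85)

keep = preds[:, 4] > 0.5                    # filter by box confidence
boxes = preds[keep, :4]                     # center x, center y, width, height
scores = preds[keep, 4]                     # objectness confidence
class_ids = preds[keep, 5:].argmax(axis=1)  # most likely class per box

print(boxes.shape, scores.shape, class_ids.shape)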

Cut ultrasound signal between specific values using Octave

I have an ultrasound wave (graph axes: volts vs. microseconds) and need to cut the signal/wave between two specific values to further analyze this clipping. My idea is to cut the signal at 0.2 V (y-axis). The wave is sine-shaped, as shown in the figure, with the desired cutoff points in red.
In my current code, I'm cutting the signal between 1900 and 4000 ms (x-axis) (Aa = A(1900:4000);), and then I want to make the aforementioned clipping and proceed with the code.
Does anyone know how I could do this y-axis clipping?
Thanks!! :)
clear
clf
pkg load signal

for k = 1:2
  w = 1
  filename = strcat("PCB 2.1 (", sprintf("%01d", k), ").mat")
  load(filename)
  Lthisrun = length(A);
  Pico(k, 1:Lthisrun) = A;
  Aa = A(1900:4000);        % horizontal window of the signal
  Ah = abs(hilbert(Aa));    % envelope via the Hilbert transform
  step = 100;
  hold on
  i = 1;
  Ac = 0;
  for index = 1:step:3601
    Ac(i+1) = Ac(i) + Ah(i);
    i = i + 1
    r(k) = trapz(Ac)
  end
end
OK, you want to look only at values 'above the noise' in your data, or in this case 'clip out' everything below 0.2 V. The easiest way to do this is with logical indexing: you can take an array and create a subarray that eliminates everything not meeting a certain logical condition. See this example:
f = @(x) sin(x)./x;   % example signal: a sinc pulse
x = -100:.1:100;
y = f(x);
plot(x, y);
figure;
x_trim = x(y > 0.2);  % logical indexing: keep samples where y > 0.2
y_trim = y(y > 0.2);
plot(x_trim, y_trim);
From your question it looks like you want to do the clipping after applying the horizontal window from 1900 to 4000. (You say that is in milliseconds, but your image shows the pulse arriving much sooner than 1900 ms.) In any case, something like
Ab = Aa(Aa > 0.2);
will create another array Ab that contains only the portions of Aa with values above 0.2. You may need to do something similar (see the example) for the horizontal axis if your x-data is not just the element index.
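For comparison, the same boolean masking in Python/NumPy (purely illustrative; the array here stands in for the windowed signal Aa from the question):
import numpy as np

# Stand-in for the windowed signal Aa.
Aa = np.sin(np.linspace(0, 20, 2101))

Ab = Aa[Aa > 0.2]   # keep only the samples above 0.2 V
print(Ab.shape)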

Tesseract didn't get the little labels

I've installed Tesseract in my Linux environment.
It works when I execute something like
# tesseract myPic.jpg /output
But my picture has some little labels, and Tesseract doesn't see them.
Is there an option available to set a pitch or something like that?
Example of text labels:
With this picture, Tesseract doesn't recognize any value...
But with this picture:
I get the following output:
J8
J7A-J7B P7 \
2
40 50 0 180 190
200
P1 P2 7
110 110
\ l
For example, in this case, the 90 (top left) is not seen by Tesseract...
I think it's just an option to define or something like that, no?
Thanks
In order to get accurate results from Tesseract (or any OCR engine), you will need to follow some guidelines, as can be seen in my answer on this post:
Junk results when using Tesseract OCR and tess-two
Here is the gist of it:
Use a high-resolution image (if needed); 300 DPI is the minimum
Make sure there are no shadows or bends in the image
If there is any skew, you will need to fix the image in code prior to OCR
Use a dictionary to help get good results
Adjust the text size (12 pt font is ideal)
Binarize the image and use image-processing algorithms to remove noise
It is also recommended to spend some time training the OCR engine to get better results, as described in this link: Training Tesseract
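As a rough illustration of those preprocessing guidelines, here is a Python sketch assuming OpenCV and pytesseract are installed (these tools and the file name are my assumption, not part of the original answer; the LEADTOOLS version follows below):
import cv2
import pytesseract

# Load the scan, upscale it (to approximate the 300 DPI guideline),
# and binarize with Otsu's threshold to suppress background noise.
img = cv2.imread("myPic.jpg", cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
_, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

print(pytesseract.image_to_string(img))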
I took the two images that you shared and ran some image processing on them using the LEADTOOLS SDK (disclaimer: I am an employee of this company) and was able to get better results than you were getting with the processed images, but since the original images aren't the greatest, it still was not 100%. Here is the code I used to try and fix the images:
//initialize the codecs class
using (RasterCodecs codecs = new RasterCodecs())
{
    //load the file
    using (RasterImage img = codecs.Load(filename))
    {
        //run the image processing sequence, starting by resizing the image to 300 DPI
        double newWidth = (img.Width / (double)img.XResolution) * 300;
        double newHeight = (img.Height / (double)img.YResolution) * 300;
        SizeCommand sizeCommand = new SizeCommand((int)newWidth, (int)newHeight, RasterSizeFlags.Resample);
        sizeCommand.Run(img);

        //binarize the image
        AutoBinarizeCommand autoBinarize = new AutoBinarizeCommand();
        autoBinarize.Run(img);

        //change it to 1BPP
        ColorResolutionCommand colorResolution = new ColorResolutionCommand();
        colorResolution.BitsPerPixel = 1;
        colorResolution.Run(img);

        //save the image as PNG
        codecs.Save(img, outputFile, RasterImageFormat.Png, 0);
    }
}
Here are the output images from this process:

How to avoid Sikuli creating a PNG file when using "with Region"

I have the following code in SikuliX (version 2015-01-06):
...
t = wait("total_power.png")
area = Region(t.x+t.w, t.y, 80, 31)
with Region(area):
    wait("num_1.png")
...
I find that "with Region" creates a PNG file in the same directory as the Python file, and the PNG file is the region that I want.
How can I avoid it?
What is it that you are trying to do here?
Is it that you would like to wait until a window appears, and then look inside that window for another picture to appear?
In that case you have already defined the region when you found "t".
"t" is the location of the picture "total_power.png".
For example:
# Wait until the window appears.
p1 = wait("image1.png")
# Find another picture inside the window.
p2 = p1.wait("image2.png")
Edit:
You should have a look here: Link
I think you could use .right(); if you leave the () empty, you take everything.
If you fill in a value, you take only a part of the screen.
I use .highlight() when programming to show me what region I am looking at.
You can also use region1.union(region2) to merge two regions into a new one.
An example:
Image1 = ("image1.png")

class Blue():
    def __init__(self):
        # Find an image.
        LocImage1 = find(Image1)
        # To show the user the region we selected, highlight it for 5 seconds.
        LocImage1.highlight(5)
        # Grab the region to the right of this image.
        LocImage1RightSide = LocImage1.right()
        # Highlight the region again.
        LocImage1RightSide.highlight(5)

# Run the class.
Blue()
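The union mentioned above could be sketched like this (the image names are illustrative):
# Merge the regions of two found images into one search area.
regionA = find("image1.png")
regionB = find("image2.png")
merged = regionA.union(regionB)  # the smallest region containing both
merged.highlight(3)              # visualize the merged region for 3 seconds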

Matplotlib/Pyplot: How to zoom subplots together?

I have plots of 3-axis accelerometer time-series data (t, x, y, z) in separate subplots that I'd like to zoom together. That is, when I use the "Zoom to Rectangle" tool on one plot and release the mouse, all 3 plots should zoom together.
Previously, I simply plotted all 3 axes on a single plot using different colors. But this is useful only with small amounts of data: I have over 2 million data points, so the last axis plotted obscures the other two. Hence the need for separate subplots.
I know I can capture matplotlib/pyplot mouse events (http://matplotlib.sourceforge.net/users/event_handling.html), and I know I can catch other events (http://matplotlib.sourceforge.net/api/backend_bases_api.html#matplotlib.backend_bases.ResizeEvent), but I don't know how to tell what zoom has been requested on any one subplot, and how to replicate it on the other two subplots.
I suspect I have all the pieces, and need only that one last precious clue...
-BobC
The easiest way to do this is by using the sharex and/or sharey keywords when creating the axes:
from matplotlib import pyplot as plt
ax1 = plt.subplot(2,1,1)
ax1.plot(...)
ax2 = plt.subplot(2,1,2, sharex=ax1)
ax2.plot(...)
You can also do this with plt.subplots, if that's your style.
fig, ax = plt.subplots(3, 1, sharex=True, sharey=True)
This also works non-interactively; setting the limits on the separate axes from code updates every shared subplot:
for ax in fig.axes:
    ax.set_xlim(0, 50)
fig.canvas.draw()
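For completeness, a self-contained sketch with synthetic data (the signals and names here are illustrative, not from the question):
import numpy as np
from matplotlib import pyplot as plt

# Synthetic 3-axis trace standing in for the accelerometer data.
t = np.linspace(0, 10, 1000)
x, y, z = np.sin(t), np.cos(t), np.sin(2 * t)

# sharex links the time axis: zooming or panning one subplot
# updates all three.
fig, axes = plt.subplots(3, 1, sharex=True)
for ax, signal, label in zip(axes, (x, y, z), "xyz"):
    ax.plot(t, signal)
    ax.set_ylabel(label)
plt.show()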