Using dxfwrite or ezdxf to create DXF text in the z direction

I would like to use either dxfwrite or ezdxf to create text running along the (WCS) y direction, with its height in the (WCS) z direction.
In AutoCAD I have done this by setting a UCS and entering the text.
How can I do this in dxfwrite or ezdxf (or any other Python-friendly library)?
dxf.ucs('textucs',xaxis=(0.,1.,0),yaxis=(0.,0.,1.))
lab = dxf.mtext('hello',np.array([0.,0.,.5]),layer='mylay',height=0.3)
doesn't work, presumably because I have only created the UCS and am not actually using it.

Defining a UCS does nothing; dxfwrite/ezdxf are not CAD applications.
This example uses ezdxf to write a text in the YZ-plane:
import ezdxf
dwg = ezdxf.new('ac1015')
modelspace = dwg.modelspace()
modelspace.add_mtext("This is a text in the YZ-plane",
                     dxfattribs={
                         'width': 12,                  # reference rectangle width
                         'text_direction': (0, 1, 0),  # write in y direction
                         'extrusion': (1, 0, 0),       # normal vector of the text plane
                     })
dwg.saveas('mtext_in_yz_plane.dxf')
mtext in dxfwrite is just a bunch of TEXT entities, because the real MTEXT entity requires DXF R13 or later.
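If you want to double-check that the placement attributes actually landed in the file, a small sketch (assuming a reasonably recent ezdxf; attribute access may differ slightly between versions) is to read the drawing back and print them:

import ezdxf

doc = ezdxf.readfile('mtext_in_yz_plane.dxf')
for mtext in doc.modelspace().query('MTEXT'):
    # text_direction and extrusion are the attributes set in the example above
    print(mtext.dxf.text_direction, mtext.dxf.extrusion)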

Related

About input_shape in keras.layers from tensorflow

I am a beginner with TensorFlow. I just tried to fit a simple LeNet-5 to the MNIST data.
My training and test data start out as NumPy arrays of shape (60000, 28, 28). Then I set up my model as below.
from tensorflow.keras import Sequential, layers

model_LeNet5 = Sequential([
    layers.Conv2D(6, kernel_size=3, strides=1, input_shape=(28, 28, 1)),
    layers.MaxPooling2D(pool_size=2, strides=2),
    layers.ReLU(),
    layers.Conv2D(16, kernel_size=3, strides=1),
    layers.MaxPooling2D(pool_size=2, strides=2),
    layers.ReLU(),
    layers.Flatten(),
    layers.Dense(120, activation='relu'),
    layers.Dense(84, activation='relu'),
    layers.Dense(10)
])
I can understand why it works when I set input_shape to (28, 28) or train_images.shape[1:], but I cannot understand why input_shape=(28, 28, 1) also works (as shown in the code above).
There seems to be an inconsistency between the shape of the data and the declared input size (i.e., (60000, 28, 28) vs (28, 28, 1)), and broadcasting rules would not link (60000, 28, 28) with (28, 28, 1) either.
Thanks to anyone who can explain the mechanism of input_shape.
A single grayscale image can be represented using a two-dimensional (2D) NumPy array or a tensor. Since there is only one channel in a grayscale image, we don’t need an extra dimension to represent the color channel. The two dimensions represent the height and width of the image.
A batch of 3 grayscale images can be represented using a three-dimensional (3D) NumPy array or a tensor. Here, we need an extra dimension to represent the number of images.
For more information, check out this article on towardsdatascience.
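The key point is that input_shape describes a single sample, never the batch axis, so the leading 60000 is not part of it; Conv2D additionally expects a channel axis, which is what the trailing 1 declares. As a minimal sketch (reusing model_LeNet5 as defined in the question and the standard keras.datasets.mnist loader), the data is reshaped to add that channel axis before fitting:

import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import mnist

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# input_shape=(28, 28, 1) describes ONE sample: 28x28 pixels, 1 channel.
# The batch dimension (60000) is never part of input_shape.
train_images = np.expand_dims(train_images, axis=-1) / 255.0   # (60000, 28, 28) -> (60000, 28, 28, 1)
test_images = np.expand_dims(test_images, axis=-1) / 255.0     # (10000, 28, 28) -> (10000, 28, 28, 1)

model_LeNet5.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),  # Dense(10) outputs raw logits
    metrics=['accuracy'])
model_LeNet5.fit(train_images, train_labels, epochs=1, batch_size=128)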

Interpretation of yolov5 output

I am making a face mask detection project and I trained my model using ultralytics/yolov5. I saved the trained model as an ONNX file; you can find the model file here: model.onnx. Now I want to use this model.onnx with OpenCV to detect face masks in real time. The input image size during training was 320x320. You can visualize the model using netron.
I have written this code to capture an image from the webcam and pass it to model.onnx to predict the bounding boxes. The code is as follows:
def predict(img):
    session = onnxruntime.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    output_name = session.get_outputs()[0].name
    img = img.reshape((1, 3, 320, 320))
    data = json.dumps({'data': img.tolist()})
    data = np.array(json.loads(data)['data']).astype('float32')
    result = session.run([output_name], {input_name: data})
    result = np.array(result)
    print(result.shape)
The output of result.shape is (1, 1, 3, 40, 40, 85).
Can anyone help me interpret this shape, and how can I use this result array to predict my class, bounding box and confidence?
I've never worked with a pure yolov5 model, but here's the output format for yolov5s. It looks like it should be similar.
Output tensor structure (yolov5s):
output_tensor[a, b, c, d]
a -> image index (if your input is a batch of images, this tells you which image's output you're looking at; if your input is just one image, leave this as 0)
b -> index of image in batch
c -> information about the bounding box
    0, 1 -> x and y coordinates of the bounding box center
    2, 3 -> width and height of the bounding box
    4 -> bounding box (objectness) confidence
    5 - 84 -> per-class confidences
d -> index of the proposed bounding boxes
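Based on that layout, here is a minimal decoding sketch (not the official yolov5 post-processing): it flattens every grid cell/anchor into one candidate row of 85 values, keeps those above an objectness threshold, and picks the best class per box. Depending on how the model was exported, the raw grid outputs may still need sigmoid activation and anchor/stride scaling, so treat this as an assumption to verify against your model:

import numpy as np

def parse_detections(result, conf_threshold=0.5):
    # result shape: (1, 1, 3, 40, 40, 85) -> (4800, 85), one row per candidate box
    preds = result.reshape(-1, result.shape[-1])
    boxes, scores, class_ids = [], [], []
    for pred in preds:
        objectness = pred[4]
        if objectness < conf_threshold:
            continue
        class_scores = pred[5:]
        class_id = int(np.argmax(class_scores))
        score = float(objectness * class_scores[class_id])
        cx, cy, w, h = pred[:4]
        # convert center/size to a top-left (x, y, w, h) rectangle
        boxes.append([cx - w / 2.0, cy - h / 2.0, w, h])
        scores.append(score)
        class_ids.append(class_id)
    return boxes, scores, class_ids

The surviving boxes and scores can then be passed through non-maximum suppression (e.g. cv2.dnn.NMSBoxes) to drop overlapping detections.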

Bounding boxes around characters for tesseract 4.0.0-beta.1

I am trying to do number plate recognition using tesseract 4.0.0-beta.1. The tesseract documentation says to create box files in the form . I tried using the "makebox" function, but it does not detect every character properly. Then I read somewhere that this function is for version 3.x.
I later tried the "wordstrbox" function, but the box file created this way is empty. Can someone tell me how to create box files for tesseract 4.0.0-beta.1?
Use pytesseract.image_to_data()
import pytesseract
import cv2
from pytesseract import Output

img = cv2.imread('image.jpg')
d = pytesseract.image_to_data(img, output_type=Output.DICT)
n_boxes = len(d['level'])
for i in range(n_boxes):
    (text, x, y, w, h) = (d['text'][i], d['left'][i], d['top'][i], d['width'][i], d['height'][i])
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imshow('img', img)
cv2.waitKey(0)
Among the data returned by pytesseract.image_to_data():
left is the distance from the upper-left corner of the bounding box to the left border of the image.
top is the distance from the upper-left corner of the bounding box to the top border of the image.
width and height are the width and height of the bounding box.
conf is the model's confidence for the prediction of the word within that bounding box. If conf is -1, the corresponding bounding box contains a block of text rather than just a single word.
The bounding boxes returned by pytesseract.image_to_boxes() enclose letters so I believe pytesseract.image_to_data() is what you're looking for.
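If you only want boxes around recognized words, a small variant of the loop above (continuing from the same d and img) skips the entries whose conf is -1, i.e. the block/paragraph/line records:

for i in range(n_boxes):
    if float(d['conf'][i]) < 0:   # conf == -1 marks block/line entries, not single words
        continue
    x, y, w, h = d['left'][i], d['top'][i], d['width'][i], d['height'][i]
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)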
I've found AlfyFaisy's answer very helpful and just wanted to share the code to view the bounding boxes of single characters. The difference is in the keys of the dictionary returned by the image_to_boxes method:
import pytesseract
import cv2
from pytesseract import Output

img = cv2.imread('image.png')
height = img.shape[0]
width = img.shape[1]
d = pytesseract.image_to_boxes(img, output_type=Output.DICT)
n_boxes = len(d['char'])
for i in range(n_boxes):
    (text, x1, y2, x2, y1) = (d['char'][i], d['left'][i], d['top'][i], d['right'][i], d['bottom'][i])
    cv2.rectangle(img, (x1, height - y1), (x2, height - y2), (0, 255, 0), 2)
cv2.imshow('img', img)
cv2.waitKey(0)
At least on my machine (Python 3.6.8, cv2 4.1.0) the cv2 method is waitKey(0) with a capital K.
This is the output I got:

Random graph generator

I am interested in generating weighted, directed random graphs with node constraints. Is there a graph generator in R or Python that is customizable? The only one I am aware of is igraph's erdos.renyi.game() but I am unsure if one can customize it.
Edit: the customizations I want to make are 1) drawing a weighted graph and 2) constraining some nodes from drawing edges.
In python-igraph, you can use the Graph.Erdos_Renyi generator.
For constraining some nodes from drawing edges, this is controlled by the p value.
Erdos_Renyi(n, p, m, directed=False, loops=False)  # these are the defaults
Example:
from igraph import *
g = Graph.Erdos_Renyi(10,0.1,directed=True)
plot(g)
By setting the p=0.1 you can see that some nodes do not have edges.
For the weights you can do something like:
g.ecount()  # to find the number of edges
weights = list(range(1, g.ecount() + 1))  # one weight per edge
g.es["weights"] = weights
g.es["label"] = weights
plot(g)
Result:
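If "constraining some nodes from drawing edges" means that certain nodes must stay isolated (rather than just relying on a low p), one option is to delete their incident edges after generation and then assign the weights. A sketch, where constrained_nodes is a hypothetical list of vertex IDs you want to block (string modes like "all" assume a reasonably recent python-igraph):

import random
from igraph import Graph, plot

g = Graph.Erdos_Renyi(10, 0.3, directed=True)

constrained_nodes = [0, 3]                     # hypothetical nodes that must not draw edges
for v in constrained_nodes:
    g.delete_edges(g.incident(v, mode="all"))  # remove every edge touching v

weights = [random.uniform(0.1, 1.0) for _ in range(g.ecount())]
g.es["weights"] = weights
g.es["label"] = ["%.2f" % w for w in weights]
plot(g)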

Matplotlib/Pyplot: How to zoom subplots together?

I have plots of 3-axis accelerometer time-series data (t,x,y,z) in separate subplots I'd like to zoom together. That is, when I use the "Zoom to Rectangle" tool on one plot, when I release the mouse all 3 plots zoom together.
Previously, I simply plotted all 3 axes on a single plot using different colors. But this is useful only with small amounts of data: I have over 2 million data points, so the last axis plotted obscures the other two. Hence the need for separate subplots.
I know I can capture matplotlib/pyplot mouse events (http://matplotlib.sourceforge.net/users/event_handling.html), and I know I can catch other events (http://matplotlib.sourceforge.net/api/backend_bases_api.html#matplotlib.backend_bases.ResizeEvent), but I don't know how to tell what zoom has been requested on any one subplot, and how to replicate it on the other two subplots.
I suspect I have all the pieces, and need only that one last precious clue...
-BobC
The easiest way to do this is by using the sharex and/or sharey keywords when creating the axes:
from matplotlib import pyplot as plt
ax1 = plt.subplot(2,1,1)
ax1.plot(...)
ax2 = plt.subplot(2,1,2, sharex=ax1)
ax2.plot(...)
You can also do this with plt.subplots, if that's your style.
fig, ax = plt.subplots(3, 1, sharex=True, sharey=True)
This keeps the separate axes in sync when zooming interactively, and you can also set the limits programmatically:
for ax in fig.axes:
    ax.set_xlim(0, 50)
fig.canvas.draw()
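Putting it together for the accelerometer case, a minimal self-contained sketch (with synthetic t/x/y/z data standing in for your real samples) that makes all three subplots zoom together via sharex:

import numpy as np
from matplotlib import pyplot as plt

# Synthetic stand-ins for the real accelerometer samples
t = np.linspace(0, 100, 10000)
x = np.sin(t)
y = np.cos(t)
z = np.sin(0.5 * t)

fig, (ax_x, ax_y, ax_z) = plt.subplots(3, 1, sharex=True)
ax_x.plot(t, x); ax_x.set_ylabel('x')
ax_y.plot(t, y); ax_y.set_ylabel('y')
ax_z.plot(t, z); ax_z.set_ylabel('z')
ax_z.set_xlabel('t')

# Zooming with the toolbar on any one subplot now rescales all three,
# and programmatic limits propagate the same way:
ax_x.set_xlim(20, 40)
plt.show()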