Image merging in plotly with transparency - plotly-dash

I'm struggling to come up with a Plotly solution that reproduces the following matplotlib code:
import matplotlib.pyplot as plt
import numpy as np
plt.figure(figsize=(10, 5), dpi=100)
image = np.random.random([300, 5000])
image2 = np.ones([300, 5000])
plt.imshow(image, vmin=0, vmax=1, cmap="gray_r", aspect="auto", alpha=0.5)
plt.imshow(image2, vmin=0, vmax=1, cmap="seismic", aspect="auto", alpha=0.5)
plt.show()
My goal is to merge two arrays, using a transparency parameter for each: one array is displayed in grey, the other in red (the seismic colormap). Plotly's imshow does not have an alpha parameter, so I'm unsure how to implement this simple code in Plotly. I need this for my Dash app.
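One workaround (a sketch, not a built-in Plotly feature) is to apply the colormaps yourself with matplotlib, alpha-blend the resulting RGB arrays in NumPy, and hand the blended image to px.imshow:
import numpy as np
import plotly.express as px
from matplotlib import cm

image = np.random.random([300, 5000])
image2 = np.ones([300, 5000])

# map each array through its colormap to RGBA, keep RGB, blend with alpha 0.5 each
rgb1 = cm.gray_r(image)[..., :3]
rgb2 = cm.seismic(image2)[..., :3]
blended = 0.5 * rgb1 + 0.5 * rgb2

# px.imshow accepts an RGB array; convert to uint8 to be explicit about the range
fig = px.imshow((blended * 255).astype(np.uint8), aspect="auto")
fig.show()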

Related

How to determine the difference in two images for a particular land use type

I am working with 2 images: image-1 is an xarray DataArray, image-2 is a raster .tif file. I want to overlay the two datasets to see which land use types (image-2) fall within a particular value range in the xarray (image-1). Below is my code:
import xarray as xr
import rioxarray

# import the ERA-5 dataset and select total precipitation
era_5 = xr.open_dataset(r'F:\2ND_ARTICLE_II\ERA-5\ERA-5_All_Nigin.nc')
era_5 = era_5['tp']

# import the land-use GeoTIFF
lulc1 = rioxarray.open_rasterio(r'F:\2ND_ARTICLE_II\LULC\lulc_clp_Nig.tif', masked=True)
Now my question is how to determine the image difference that corresponds to a particular land use type between the two images.
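A possible approach (a sketch: the CRS, the dimension names, and the land-use code below are assumptions, not values from the question) is to resample the land-use raster onto the ERA-5 grid with rioxarray's reproject_match, then mask the DataArray by land-use class:
# tell rioxarray which dims are spatial (names assumed) and set a CRS (assumed lat/lon)
era_5 = era_5.rio.set_spatial_dims(x_dim="longitude", y_dim="latitude")
era_5 = era_5.rio.write_crs("EPSG:4326")

# resample the land-use raster onto the ERA-5 grid
lulc_matched = lulc1.rio.reproject_match(era_5)

# keep precipitation only where the (hypothetical) land-use code equals 4
cropland_code = 4
tp_cropland = era_5.where(lulc_matched.squeeze() == cropland_code)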

About input_shape in keras.layers from tensorflow

I am a beginner with TensorFlow. I have just tried to fit a simple LeNet-5 on the MNIST data.
My training and test data start out as NumPy arrays of shape (60000, 28, 28). I then set up my model as below.
from tensorflow.keras import Sequential, layers

model_LeNet5 = Sequential([
    layers.Conv2D(6, kernel_size=3, strides=1, input_shape=(28, 28, 1)),
    layers.MaxPooling2D(pool_size=2, strides=2),
    layers.ReLU(),
    layers.Conv2D(16, kernel_size=3, strides=1),
    layers.MaxPooling2D(pool_size=2, strides=2),
    layers.ReLU(),
    layers.Flatten(),
    layers.Dense(120, activation='relu'),
    layers.Dense(84, activation='relu'),
    layers.Dense(10)
])
I can understand why it succeeds when I set input_shape to (28, 28) or train_images.shape[1:], but I cannot understand why input_shape=(28, 28, 1) also works (as shown in the code above).
There seems to be an inconsistency between the shape of the data and the declared input size (i.e., (60000, 28, 28) vs. (28, 28, 1)), and the broadcasting rules would not link (60000, 28, 28) with (28, 28, 1) either.
Thanks to anyone who can explain the mechanism of input_shape.
A single grayscale image can be represented using a two-dimensional (2D) NumPy array or a tensor: since there is only one channel in a grayscale image, we don't need an extra dimension for the color channel, and the two dimensions are the height and width of the image.
A batch of 3 grayscale images can be represented using a three-dimensional (3D) NumPy array or tensor; here, we need an extra dimension to represent the number of images.
The key point for input_shape is that it describes a single sample and never includes the batch dimension, so the leading 60000 drops out. Conv2D additionally expects each sample to carry an explicit channel axis, which is why input_shape=(28, 28, 1) works: it is the per-sample shape height × width × channels.
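As a concrete sketch (using a random stand-in for the real MNIST arrays), the usual way to make the data match input_shape=(28, 28, 1) is to add the channel axis explicitly:
import numpy as np

train_images = np.random.random((60000, 28, 28))  # stand-in for the MNIST data

# input_shape=(28, 28, 1) is the per-sample shape height x width x channels,
# so give the data an explicit channel axis
train_images = train_images[..., np.newaxis]
print(train_images.shape)  # (60000, 28, 28, 1)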
For more information, check out this article on towardsdatascience.

Bounding boxes around characters for tesseract 4.0.0-beta.1

I am trying to do number plate recognition using tesseract 4.0.0-beta.1. The tesseract documentation says to create box files in the form . I tried using the "makebox" function, but it does not detect every character properly. Then I read somewhere that this function is for version 3.x.
I later tried the "wordstrbox" function, but the box file created this way is empty. Can someone tell me how to create box files for tesseract 4.0.0-beta.1?
Use pytesseract.image_to_data()
import pytesseract
import cv2
from pytesseract import Output

img = cv2.imread('image.jpg')
d = pytesseract.image_to_data(img, output_type=Output.DICT)

n_boxes = len(d['level'])
for i in range(n_boxes):
    (text, x, y, w, h) = (d['text'][i], d['left'][i], d['top'][i], d['width'][i], d['height'][i])
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imshow('img', img)
cv2.waitKey(0)
Among the data returned by pytesseract.image_to_data():
left is the distance from the upper-left corner of the bounding box to the left border of the image.
top is the distance from the upper-left corner of the bounding box to the top border of the image.
width and height are the width and height of the bounding box.
conf is the model's confidence for the prediction of the word within that bounding box. If conf is -1, the corresponding bounding box contains a block of text rather than just a single word.
The bounding boxes returned by pytesseract.image_to_boxes() enclose letters so I believe pytesseract.image_to_data() is what you're looking for.
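For example (a hypothetical refinement with an arbitrary threshold), the conf field lets you skip block-level entries (conf == -1) and low-confidence words when drawing boxes:
for i in range(len(d['level'])):
    # conf may be returned as a string depending on the pytesseract version
    if float(d['conf'][i]) > 60:  # 60 is an arbitrary threshold; -1 marks text blocks
        (x, y, w, h) = (d['left'][i], d['top'][i], d['width'][i], d['height'][i])
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)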
I've found AlfyFaisy's answer very helpful and just wanted to share the code to view the bounding boxes of single characters. The difference lies in the keys of the dictionary output by the image_to_boxes method:
import pytesseract
import cv2
from pytesseract import Output

img = cv2.imread('image.png')
height = img.shape[0]
width = img.shape[1]

d = pytesseract.image_to_boxes(img, output_type=Output.DICT)

n_boxes = len(d['char'])
for i in range(n_boxes):
    (text, x1, y2, x2, y1) = (d['char'][i], d['left'][i], d['top'][i], d['right'][i], d['bottom'][i])
    # image_to_boxes measures y from the bottom of the image, OpenCV from the top,
    # hence the height - y flips
    cv2.rectangle(img, (x1, height - y1), (x2, height - y2), (0, 255, 0), 2)

cv2.imshow('img', img)
cv2.waitKey(0)
At least on my machine (Python 3.6.8, cv2 4.1.0) the cv2 method is waitKey(0) with a capital K.

Random graph generator

I am interested in generating weighted, directed random graphs with node constraints. Is there a customizable graph generator in R or Python? The only one I am aware of is igraph's erdos.renyi.game(), but I am unsure whether it can be customized.
Edit: the customizations I want are 1) drawing a weighted graph and 2) constraining some nodes from forming edges.
In python-igraph, you can use the Graph.Erdos_Renyi class.
Whether some nodes end up without edges is governed by the p value: p is the probability that each possible edge is present, so with a small p some nodes are left isolated.
Erdos_Renyi(n, p, m, directed=False, loops=False) #these are the defaults
Example:
from igraph import *
g = Graph.Erdos_Renyi(10,0.1,directed=True)
plot(g)
By setting p=0.1 you can see that some nodes do not have edges.
For the weights you can do something like:
g.ecount()  # the number of edges
g.es["weight"] = range(1, g.ecount() + 1)  # one weight per edge
g.es["label"] = g.es["weight"]  # show the weights as edge labels
plot(g)
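If the goal is to hard-guarantee that particular vertices stay isolated (rather than relying on a small p), one option (a sketch; the vertex IDs are hypothetical) is to delete their incident edges after generating the graph:
forbidden = [0, 3]  # hypothetical vertices that must not have edges
to_drop = set()
for v in forbidden:
    to_drop.update(g.incident(v, mode="all"))  # edge IDs touching v
g.delete_edges(list(to_drop))
plot(g)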

Matplotlib/Pyplot: How to zoom subplots together?

I have plots of 3-axis accelerometer time-series data (t,x,y,z) in separate subplots I'd like to zoom together. That is, when I use the "Zoom to Rectangle" tool on one plot, when I release the mouse all 3 plots zoom together.
Previously, I simply plotted all 3 axes on a single plot using different colors. But this is useful only with small amounts of data: I have over 2 million data points, so the last axis plotted obscures the other two. Hence the need for separate subplots.
I know I can capture matplotlib/pyplot mouse events (http://matplotlib.sourceforge.net/users/event_handling.html), and I know I can catch other events (http://matplotlib.sourceforge.net/api/backend_bases_api.html#matplotlib.backend_bases.ResizeEvent), but I don't know how to tell what zoom has been requested on any one subplot, and how to replicate it on the other two subplots.
I suspect I have all the pieces, and need only that one last precious clue...
-BobC
The easiest way to do this is by using the sharex and/or sharey keywords when creating the axes:
from matplotlib import pyplot as plt
ax1 = plt.subplot(2,1,1)
ax1.plot(...)
ax2 = plt.subplot(2,1,2, sharex=ax1)
ax2.plot(...)
You can also do this with plt.subplots, if that's your style.
fig, ax = plt.subplots(3, 1, sharex=True, sharey=True)
The limits stay linked when set programmatically as well; to set them on every axis in a loop and redraw:
for ax in fig.axes:
    ax.set_xlim(0, 50)
fig.canvas.draw()
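Putting this together for the three accelerometer channels in the question (with random stand-in data), a minimal sketch:
import numpy as np
from matplotlib import pyplot as plt

t = np.linspace(0, 100, 2000)          # stand-in time base
x, y, z = np.random.randn(3, t.size)   # stand-in accelerometer channels

# sharex/sharey link the axes, so zooming one subplot zooms all three
fig, axes = plt.subplots(3, 1, sharex=True, sharey=True)
for ax, data, label in zip(axes, (x, y, z), "xyz"):
    ax.plot(t, data)
    ax.set_ylabel(label)

plt.show()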