So I am working on an Octave script (I am relatively inexperienced with the language), and I am trying to open two CSV files whose names I pass to my script as command-line arguments. Here is my script:
#!/usr/bin/env octave
function plotregs(fig, regs)
  figure(fig);
  title('Foo');
  xlabel('Value');
  ylabel('Cycle #');
  grid on;
  plot(rows(regs(:, 1)), regs(:, 1),
       rows(regs(:, 2)), regs(:, 2),
       rows(regs(:, 3)), regs(:, 3),
       rows(regs(:, 4)), regs(:, 4),
       rows(regs(:, 5)), regs(:, 5),
       rows(regs(:, 6)), regs(:, 6),
       rows(regs(:, 7)), regs(:, 7),
       rows(regs(:, 8)), regs(:, 8));
  legend('A', 'B', 'C', 'D', 'E', 'F', 'H', 'L');
endfunction
args = argv ();
filename = strcat(cellstr(args(1)));
typeinfo filename
regs = csvread(filename);
graphics_toolkit("gnuplot");
plotregs(1, regs);
filename = strcat(cellstr(args(2)));
regs = csvread(filename);
plotregs(2, regs);
pause
And here is the output I get when I run the script:
ans = sq_string
error: dlmread: FILE argument must be a string or file id
error: called from:
error: /usr/share/octave/3.4.3/m/io/csvread.m at line 34, column 5
error: /home/tnecniv/Code/Octave/regigraph/regigraph.m at line 25, column 6
Any advice would be appreciated.
The problem is that you have created an executable Octave script that expects arguments, yet you are not providing any arguments when you run it.
First of all, I would start the file with
#!/usr/bin/octave -qf
Then one could run the script as
$ ./myscript.sh datafile1.csv datafile2.csv
That said, in my opinion argv() behaves a bit strangely: when no arguments are given to, say, myscript.sh, it returns the filename of the executing script, but when one or more arguments are given it contains only the arguments.
You can refer to Section 2.6 of the documentation for "Executable Octave Programs".
Related
I am trying to use Deep3DFaceRecon_pytorch, and the first step is to run your image through MTCNN in order to get the landmarks for the face. I use the general demo code from MTCNN.
It works great and I get what I expected, but I also need to save the JSON results to a txt file.
from mtcnn.mtcnn import MTCNN
import cv2
image = cv2.imread('figure.jpg')
cv2.imshow('',image)
cv2.waitKey(0)
cv2.destroyAllWindows()
detector = MTCNN()
faces = detector.detect_faces(image)
for face in faces:
    print(face)
{'box': [142, 109, 237, 289], 'confidence': 0.9997594952583313, 'keypoints': {'left_eye': (212, 221), 'right_eye': (323, 223), 'nose': (265, 280), 'mouth_left': (209, 322), 'mouth_right': (319, 327)}}
def create_bbox(image):
    faces = detector.detect_faces(image)
    bounding_box = faces[0]['box']
    keypoints = faces[0]['keypoints']
    cv2.rectangle(image,
                  (bounding_box[0], bounding_box[1]),
                  (bounding_box[0] + bounding_box[2], bounding_box[1] + bounding_box[3]),
                  (0, 155, 255),
                  2)
    cv2.circle(image, (keypoints['left_eye']), 2, (0, 155, 255), 2)
    cv2.circle(image, (keypoints['right_eye']), 2, (0, 155, 255), 2)
    cv2.circle(image, (keypoints['nose']), 2, (0, 155, 255), 2)
    cv2.circle(image, (keypoints['mouth_left']), 2, (0, 155, 255), 2)
    cv2.circle(image, (keypoints['mouth_right']), 2, (0, 155, 255), 2)
    return image

marked_image = create_bbox(image)
cv2.imshow('', marked_image)
cv2.waitKey(0)
and I get this JSON-like result:
{'box': [142, 109, 237, 289], 'confidence': 0.9997594952583313, 'keypoints': {'left_eye': (212, 221), 'right_eye': (323, 223), 'nose': (265, 280), 'mouth_left': (209, 322), 'mouth_right': (319, 327)}}
It works perfectly, but I need to save these values to a txt file.
How do I do that?
You can convert the list of JSON-like dictionaries into a single JSON object and write it to a file. The code below shows how you can achieve that.
import json

json_result = {}
with open("result.txt", "w") as result_file:
    # Collect every detected face under a numeric string key ("0", "1", ...)
    for n, face in enumerate(faces):
        json_result[str(n)] = face
    # Serialize the whole dictionary once and write it out
    json_string = json.dumps(json_result, indent=4)
    result_file.write(json_string)
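If you later need to read the detections back (for example to feed the landmarks into Deep3DFaceRecon_pytorch), a minimal sketch along the same lines, assuming the result.txt written above:

import json

# Load the saved detections back into a dictionary keyed by "0", "1", ...
with open("result.txt") as result_file:
    saved_faces = json.load(result_file)

# Keypoints that were tuples are read back as plain lists
print(saved_faces["0"]["keypoints"]["left_eye"])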
I'm new to TensorFlow and I've deployed the saved model to Google AI Platform models. However, I am having issues with the format of the sample input data. Can you please guide me on how I should format the data input based on the requested format below? Thanks in advance.
To request an online prediction, the data instances must be provided as a JSON object, as follows:
{
  "instances": [
    <value>|<simple/nested list>|<object>,
    ...
  ]
}
Below is part of the output from $saved_model_cli --dir path /home/.. --all.
In summary, I have 12 data inputs as string values. How should I put them together in the above request format so that the model can return a prediction? Thanks!
Defined Functions:
  Function Name: '__call__'
    Option #1
      Callable with:
        Argument #1
          DType: list
          Value: [TensorSpec(shape=(None, 3), dtype=tf.float32, name='a_xf'), TensorSpec(shape=(None, 2), dtype=tf.float32, name='b_xf'), TensorSpec(shape=(None, 8), dtype=tf.float32, name='c_xf'), TensorSpec(shape=(None, 12), dtype=tf.float32, name='d_xf'), TensorSpec(shape=(None, 4), dtype=tf.float32, name='e_xf'), TensorSpec(shape=(None, 16), dtype=tf.float32, name='f_xf'), TensorSpec(shape=(None, 26), dtype=tf.float32, name='g_xf'), TensorSpec(shape=(None, 4), dtype=tf.float32, name='h_xf'), TensorSpec(shape=(None, 2), dtype=tf.float32, name='i_xf'), TensorSpec(shape=(None, 11), dtype=tf.float32, name='j_xf'), TensorSpec(shape=(None, 6), dtype=tf.float32, name='k_xf'), TensorSpec(shape=(None, 2), dtype=tf.float32, name='l_xf')]
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
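From the documentation, my understanding is that with multiple named inputs each instance should be a JSON object keyed by the input names shown above. A rough sketch of what I have been trying, with made-up placeholder values, is below; I am not sure whether this mapping is correct:

# Placeholder request body; the keys are the input names reported by
# saved_model_cli above, and the numbers are made up for illustration only.
request_body = {
    "instances": [
        {
            "a_xf": [0.1, 0.2, 0.3],          # 3 values, matching shape (None, 3)
            "b_xf": [0.0, 1.0],               # 2 values, matching shape (None, 2)
            "e_xf": [0.5, 0.5, 0.0, 0.0],     # 4 values, matching shape (None, 4)
            # ... one entry per remaining named input (c_xf, d_xf, f_xf, g_xf, ...)
        }
    ]
}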
I am trying to run this custom loss function in Keras and I always run into the error below. It is a pairwise constraint loss.
def loss(y_true, y_pred):
    pw = pairwise_distances(y_true, squared=False)
    n, d = y_pred.get_shape()
    # generate constraint data points
    c1 = y_pred[pw[:, 0], :]
    c2 = y_pred[pw[:, 1], :]
    loss = np.zeros(dtype=np.float32, shape=(pw.shape[0], d * 2))
    loss[:, :d] = np.abs(c1 - c2)
    loss[:, d:] = (c1 + c2) / 2
    return loss
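For context, the loss is attached at compile time, roughly as in the sketch below; the tiny model and the dummy data are only placeholders standing in for my real network:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Tiny stand-in model; my real network is larger but the loss is wired up the same way.
model = Sequential([Dense(16, input_shape=(8,), activation='relu'),
                    Dense(2)])

# In my setup the error shows up as soon as the custom loss is attached here.
model.compile(optimizer='adam', loss=loss)

x_train = np.random.rand(32, 8).astype(np.float32)
y_train = np.random.rand(32, 2).astype(np.float32)
model.fit(x_train, y_train, epochs=1)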
Below is the error I get when I try to use this loss function:
File "C:\Users\Benji\Anaconda2\envs\ben\lib\site-packages\keras\engine\training.py", line 692, in _prepare_total_loss
y_true, y_pred, sample_weight=sample_weight)
File "C:\Users\Benji\Anaconda2\envs\ben\lib\site-packages\keras\losses.py", line 71, in __call__
losses = self.call(y_true, y_pred)
File "C:\Users\Benji\Anaconda2\envs\ben\lib\site-packages\keras\losses.py", line 132, in call
return self.fn(y_true, y_pred, **self._fn_kwargs)
File "C:/Users/Benji/PycharmProjects/Code/NEWWORK6.py", line 73, in loss
c1 = y_pred[pw[:, 0], :]
File "C:\Users\Benji\Anaconda2\envs\ben\lib\site-packages\tensorflow_core\python\ops\array_ops.py", line 766, in _slice_helper
_check_index(s)
File "C:\Users\Benji\Anaconda2\envs\ben\lib\site-packages\tensorflow_core\python\ops\array_ops.py", line 655, in _check_index
raise TypeError(_SLICE_TYPE_ERROR + ", got {!r}".format(idx))
TypeError: Only integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices, got <tf.Tensor 'loss/activation_6_loss/loss/strided_slice:0' shape=(?,) dtype=float32>
Process finished with exit code 1
I was trying out Luigi's multiprocessing capability by using the luigi.build method, but I'm getting a library error while executing:
for next in self._add(item, is_complete):
File "/home/manoj/anaconda2/lib/python2.7/site-packages/luigi/worker.py", line 604, in _add
self._validate_dependency(d)
File "/home/manoj/anaconda2/lib/python2.7/site-packages/luigi/worker.py", line 622, in _validate_dependency
raise Exception('requires() must return Task objects')
Here is the piece of code I used to try to achieve my objective:
import luigi

class TaskOne(luigi.Task):
    custid = luigi.Parameter()

    def requires(self):
        pass

    def output(self):
        return luigi.LocalTarget("logs/" + str(self.custid) + "_success")

    def run(self):
        with self.output().open('w') as f:
            f.write("%s\n" % '')

class TaskTwo(luigi.Task):
    def requires(self):
        customersList = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
        yield luigi.build([TaskOne(custid=cust_id) for cust_id in customersList], workers=2)

    def output(self):
        return luigi.LocalTarget("logs/overall_success.txt")

    def run(self):
        with self.output().open('w') as f:
            f.write("%s\n" % "success")

if __name__ == '__main__':
    luigi.run()
========================================================================
Why do you think you need to call luigi.build() inside requires()?
class TaskTwo(luigi.Task):
    def requires(self):
        customersList = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
        # Just return the required tasks; Luigi schedules and runs them itself.
        return [TaskOne(custid=cust_id) for cust_id in customersList]
If you want multiple workers, you can specify this at the command line when you start your pipeline.
luigi --module your_module TaskTwo --workers 2
requires() must return a luigi.Task object, or a list of luigi.Task objects. However, luigi.build() doesn't return anything. You don't need to call luigi.build to explicitly run the tasks, because Luigi handles running requirements on its own. The example task outlined in https://luigi.readthedocs.io/en/stable/tasks.html shows the basic paradigm of how it's supposed to work.
Also, you should omit requires() from TaskOne. If it has no dependencies, then there is no need to define it.
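For example, TaskOne from the question can be trimmed down to the sketch below (same behaviour, just without the empty requires()):

import luigi

class TaskOne(luigi.Task):
    custid = luigi.Parameter()

    # No requires() at all: a task with no dependencies simply omits the method.

    def output(self):
        return luigi.LocalTarget("logs/" + str(self.custid) + "_success")

    def run(self):
        with self.output().open('w') as f:
            f.write("%s\n" % '')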
I have 3 very large files (100+ MB each), file_hash, cert_hash, and url_data, each with one string per line. The problem is that the files do not contain the same amount of data. I have used the izip_longest function to read all these files at once (I can't load them into memory), but I want to iterate over the longest file (file_hash is the longest), and once all the data from cert_hash has been read it should start taking values from the beginning of cert_hash again; similarly, if url_data runs out it should also start reading from its beginning. I have tried using the fillvalue parameter, but it takes only one value, and I want to give a different value for cert_hash and url_data when they run out.
You should cycle cert_hash and url_data if you want them to restart. For example:
>>> from itertools import cycle, izip
>>> for t in izip("abcdef", cycle("ghi"), cycle("jklm")):
...     print t
...
('a', 'g', 'j')
('b', 'h', 'k')
('c', 'i', 'l')
('d', 'g', 'm')
('e', 'h', 'j')
('f', 'i', 'k')
Note that you no longer use izip_longest, as cycle is infinite.
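Applied to the files from the question, note that cycle() keeps every element it has seen in memory, which may be a problem for 100+ MB files. A variation on the same idea that avoids this is to re-open the smaller files whenever they are exhausted; a minimal sketch using the file names from the question:

from itertools import izip

def repeat_lines(path):
    # Re-open the file each time it runs out, so nothing is cached in memory.
    while True:
        with open(path) as f:
            for line in f:
                yield line

# file_hash is the longest file, so it drives the iteration; izip stops when it ends.
with open("file_hash") as longest:
    for fh, ch, ud in izip(longest, repeat_lines("cert_hash"), repeat_lines("url_data")):
        print fh.strip(), ch.strip(), ud.strip()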
If you want the iteration to reverse direction at the end rather than restart from the beginning, here is a tweak of the cycle-equivalent implementation that achieves that:
>>> def zigzag(iterable):
...     """zigzag('ABCD') --> A B C D C B A B C D ..."""
...     forward = []
...     for element in iterable:
...         yield element
...         forward.append(element)
...     backward = forward[-2:0:-1]
...     while True:
...         for element in backward:
...             yield element
...         for element in forward:
...             yield element
...
>>> z = zigzag("ABCD")
>>> for _ in range(10):
...     print next(z)
...
A
B
C
D
C
B
A
B
C
D