Can YOLOv5 output string data?

I'm working on a school project about object detection, but we're stuck on how to make our model output string data rather than an image. Is it even possible to make a model that outputs string data using YOLOv5? By the way, we tried other YOLO versions before, but we couldn't make them work; we can only use YOLOv5. Is there a way for us?

Sure, it's possible from the command line: detect.py accepts a --save-txt argument that saves the inference results to a .txt file.
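If you need the detections as string data inside your own Python code rather than via detect.py, a minimal sketch using YOLOv5's documented PyTorch Hub interface could look like this (the model variant and image path are placeholders):

import torch

# Load a pretrained YOLOv5 model from PyTorch Hub (downloaded on first use).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Run inference on an image; the path is a placeholder.
results = model('image.jpg')

# One row per detection: xmin, ymin, xmax, ymax, confidence, class, name.
detections = results.pandas().xyxy[0]

print(detections.to_string())                 # plain-text table
print(detections.to_json(orient='records'))   # JSON string

Either way you end up with text describing the detections rather than a rendered image.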

Related

Autodesk Design Automation Revit, text file as input

The Revit API application I developed takes a text file as input.
The text file looks like this:
1.002, 20.502, 21.706
12.502, 5.502, 7.706
21.002, 15.502, 14.706
...
(The values are imaginary; I am just showing what my text file looks like.)
I am basically reading the text data as input.
Now, if I want to convert the same application to a Design Automation API, I guess I will not be able to use a text file as input.
My question is: what should the input file type be, if the file consists of 3D point coordinates as described above?
Should it be JSON? If it needs to be JSON, how should I write it for point coordinates? Any other suggestion for the file type would also be a big help.
Any example code would be a big help.
The list of supported input file formats does not include txt files.
If I write a JSON file, please give me some clue how I should arrange it and how to read the file for Revit.
Many thanks in advance.
Thank you for your query.
The slightly more complex question is how to generate multiple output files.
That is answered by the article
on How to generate dynamic number of output with Design Automation for Revit V3.
In passing, it also mentions multiple input files, saying:
"... For the zipped input file, it's well documented at https://forge.autodesk.com/en/docs/design-automation/v3/tutorials/revit/step6-post-workitem/, but for the output zipped result, it's not so clear..."
Trying to follow that link, I note that it is out of date.
The updated link is:
https://forge.autodesk.com/en/docs/design-automation/v3/tutorials/revit/step7-post-workitem/
Looking at the additional notes on input arguments, I see the instructions on how to pass JSON input data directly in the workitem itself.
I would assume that you can also use a different prefix instead of data:application/json, such as data:application/text, to pass in the data in its current form.
Please try that out and let us know how it works for you.
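For reference, here is a hedged sketch of what such a workitem payload might look like, loosely based on the CountIt sample in the tutorial linked above; the activity id, argument names, and URLs are all placeholders, and the embedded JSON is your own data:

{
  "activityId": "YourNickname.YourActivity+test",
  "arguments": {
    "rvtFile": { "url": "https://example.com/input.rvt" },
    "inputJson": { "url": "data:application/json,{\"points\": [[1.002, 20.502, 21.706]]}" },
    "result": { "url": "https://example.com/result.txt" }
  }
}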
Alternatively, you can just stay on the safe side and convert your text data to JSON format.
There are innumerable ways of doing so.
The most minimalistic and simple approach would look like this:
[1.002, 20.502, 21.706,
12.502, 5.502, 7.706,
21.002, 15.502, 14.706,
...]
That represents one single array of doubles.
A slightly more structured approach might be to pass in an array of triples of doubles, like this:
[[1.002, 20.502, 21.706],
[12.502, 5.502, 7.706],
[21.002, 15.502, 14.706],
...]
As you see, it is not hard.
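If it helps, here is a minimal sketch of the text-to-JSON conversion in Python; the file names are placeholders:

import json

# Read the comma-separated coordinate lines and collect an array of triples.
triples = []
with open('points.txt') as f:            # placeholder input name
    for line in f:
        line = line.strip()
        if not line or line.startswith('.'):
            continue                     # skip blank lines and '.....' filler
        x, y, z = (float(v) for v in line.split(','))
        triples.append([x, y, z])

with open('points.json', 'w') as f:      # placeholder output name
    json.dump(triples, f)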
I hope this helps.

Edit a large JSON file

How can I edit a large JSON file manually?
I have a large JSON file, about 100 MB. I'd like to manually inspect some attributes and then add more attributes to some of the objects.
I'd start off by looking at a subset of the file, say the first 100 objects, and gradually scale up to maybe 250, then a thousand, etc.
Can someone suggest a language or software (I'm running Windows) that excels at this task?
Some previous suggestions that aren't working or can't work:
Sublime - Could never load the file. Loading bar forever. Had to kill.
NotePad++ - Could never load. Froze. Had to kill.
Anything online - The data is confidential.
More Python and Jupyter information.
import json

# path points at the large JSON file; load it all, then print just the first three items.
with open(path, 'r') as f:
    data = json.load(f)

for i, (k, v) in enumerate(data.items()):
    print(i, k, v)
    if i == 2:
        break
This causes an error. I think it has to do with Jupyter, but I'm not sure:
IOPub data rate exceeded.
The notebook server will temporarily stop sending output
to the client in order to avoid crashing it.
To change this limit, set the config variable
`--NotebookApp.iopub_data_rate_limit`.
Current values:
NotebookApp.iopub_data_rate_limit=1000000.0 (bytes/sec)
NotebookApp.rate_limit_window=3.0 (secs)
That makes me wonder if going about it this way is just dumb.
Possible Solutions
Build a custom app using TKinter
Just don't use a Jupyter Notebook
What you can do is write a simple GUI program. Use Tkinter to create a window with a text area to show the JSON, a text box where you input how many objects you want to see, a button named Next or something to see the next batch, and one more button to save. The functionality for each of the items would be as follows; a sketch comes after this list.
First, read the complete JSON in Python and turn it into a dict.
Next button: keeps iterating based on the value in the text box. You could write a custom generator that yields batches of the required number of values.
Save button: saves the current JSON into a new JSON file, or, if you can, write a function to update the current JSON directly.
Text area: takes the dictionary batch from the Next button's generator, converts it to JSON, and shows the output.
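Here is a minimal sketch of that idea; the file name, window layout, and default batch size are assumptions:

import json
import tkinter as tk
from tkinter.scrolledtext import ScrolledText

PATH = 'data.json'   # placeholder: your large JSON file

with open(PATH) as f:
    data = json.load(f)   # assumes the top level is a JSON object

def batches(d):
    # Yield successive dicts of n items, where n is read from the entry box.
    items = list(d.items())
    pos = 0
    while pos < len(items):
        n = int(count_var.get() or 1)
        yield dict(items[pos:pos + n])
        pos += n

root = tk.Tk()
root.title('JSON browser')

count_var = tk.StringVar(value='100')
tk.Entry(root, textvariable=count_var).pack()

text = ScrolledText(root, width=100, height=40)
text.pack(fill='both', expand=True)

gen = batches(data)

def show_next():
    text.delete('1.0', 'end')
    try:
        text.insert('1.0', json.dumps(next(gen), indent=2))
    except StopIteration:
        text.insert('1.0', '-- end of file --')

def save():
    # Write the dict to a new file, leaving the original intact.
    with open(PATH + '.new', 'w') as f:
        json.dump(data, f, indent=2)

tk.Button(root, text='Next', command=show_next).pack()
tk.Button(root, text='Save', command=save).pack()

root.mainloop()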
If you are using Linux (or have an opportunity to transfer the file to *nix), you might wish to check the number of lines in the file via
wc -l myfile.json
Let's say, for simplicity, that your file has 2,530,000 lines and you wish to split it into chunks of 100k lines each; you can use any of the commands available in your distro (for example, split -l 100000 myfile.json) to cut the file into the desired chunks and then edit them one by one.
If you are comfortable with going the "linux way", check out some of the hints given in other topics, e.g.
edit multi-GB file when vi editor doesn't work
I hope it helps!
The only viewer I have used that works on large files (I had files up to 250 MB) is Dadroit. It is fast and comes with search.
Now, to edit, I use vi: I search for the location and make local edits. Vim or another simple editor should also work on Windows. Have you tried VS Code? 100 MB shouldn't be too large for it.
The other awesome terminal tool for viewing and editing data is VisiData. I have had mixed luck with it on JSON files.
Not the best answer, but the problem with reading the JSON seems limited to Jupyter Notebooks (or even the limitations of my laptop).
Working in Spyder or running from the command line circumvents the Jupyter error mentioned in the original question.
It'd be great if someone knew how to tweak Jupyter to avoid this problem (sorry, I'm not sure how yet).
For an editor, try Notepad++.
For a language, try Python.
Since you haven't given your data structure, I can't give a more specific answer.

AWS Glue Crawler classifies JSON file as UNKNOWN

I'm working on an ETL job that will ingest JSON files into a RDS staging table. The crawler I've configured classifies JSON files without issue as long as they are under 1MB in size. If I minify a file (instead of pretty print) it will classify the file without issue if the result is under 1MB.
I'm having trouble coming up with a workaround. I tried converting the JSON to BSON and gzipping the JSON file, but it is still classified as UNKNOWN.
Has anyone else run into this issue? Is there a better way to do this?
I have two JSON files, 42 MB and 16 MB, partitioned on S3 under these paths:
s3://bucket/stg/year/month/_0.json
s3://bucket/stg/year/month/_1.json
I had the same problem as you: crawler classification as UNKNOWN.
I was able to solve it:
You must create a custom classifier with the JSON path "$[*]", then create a new crawler that uses the classifier.
Run the new crawler over the data on S3 and the proper schema will be created.
DO NOT update your current crawler with the classifier, as it won't apply the change. I don't know why; maybe because of the classifier versioning AWS mentions in their documentation. Creating a new crawler makes it work.
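If you prefer to script it, here is a sketch using boto3; the classifier, crawler, role, database, and bucket names are placeholders:

import boto3

glue = boto3.client('glue')

# Custom JSON classifier with the $[*] JSON path.
glue.create_classifier(
    JsonClassifier={'Name': 'json-array-classifier', 'JsonPath': '$[*]'}
)

# Brand-new crawler that uses the classifier (do not reuse the old one).
glue.create_crawler(
    Name='stg-json-crawler',
    Role='AWSGlueServiceRole-stg',
    DatabaseName='stg',
    Classifiers=['json-array-classifier'],
    Targets={'S3Targets': [{'Path': 's3://bucket/stg/'}]},
)

glue.start_crawler(Name='stg-json-crawler')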
As mentioned in
https://docs.aws.amazon.com/glue/latest/dg/custom-classifier.html#custom-classifier-json
When you run a crawler using the built-in JSON classifier, the entire file is used to define the schema. Because you don’t specify a JSON path, the crawler treats the data as one object, that is, just an array.
That is something which Dung also pointed out in his answer.
Please also note that file encoding can lead to JSON being classified as UNKNOWN. Please try and re-encode the file as UTF-8.

Move Gherkin's (i.e. Cucumber and Behave) Data Table to JSON

I was using Behave and Selenium to test something that uses a large amount of data. The data tables were becoming too big and making the Gherkin documentation unreadable.
I would like to move most of the data from the data tables to an external file such as JSON, but I couldn't find any examples online.
I would create the JSON file as needed, reference it in a Given or Background step, and then load the values in the respective decorated step method. A rough sketch of the idea follows.
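A minimal sketch with Behave, where the step wording and file path are hypothetical:

# features/steps/data_steps.py
import json
from behave import given

# Matches a step such as:
#   Given the test data in "features/data/users.json"
@given('the test data in "{filename}"')
def step_load_test_data(context, filename):
    with open(filename) as f:
        context.test_data = json.load(f)

Subsequent steps can then read context.test_data instead of pulling rows from a Gherkin data table.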

MFC :: Modify the values in a JSON file

I'm able to parse JSON files in MFC but am having a hard time modifying the values. Is there an easier way to write new values, other than converting to native types, modifying the contents, and converting back to JSON again?
I thought it would be as easy as changing values in an XML file, where you just look for the tag and change its value.
thanks...
You can use the JSON Spirit library. It traverses the JSON file through its keys and values, which are treated as "pairs". All you have to do is loop through the objects and search for the pair you want to replace. That's it...
The details aren't shown here, but this pretty much gives you the basics: http://www.codeproject.com/KB/recipes/JSON_Spirit.aspx. It has a bunch of methods you can use for whatever operation you want.
:)