Autodesk Design Automation for Revit, text file as input - autodesk-forge

The Revit API add-in I developed takes a text file as input.
The text file looks like this:
1.002, 20.502, 21.706
12.502, 5.502, 7.706
21.002, 15.502, 14.706
.....................
.....................
(The values are not real, just imaginary; I am only showing what my text file looks like.)
I am basically reading the text data as input.
Now, if I want to convert the same add-in into a Design Automation API, I guess I will not be able to use a text file as input.
My question is: what should the file type of the input file be, if it consists of 3D point coordinates as described above?
Should it be JSON? If it needs to be JSON, how should I write it for point coordinates? Any other suggestion for the file type would also be a big help.
If there is any example code, that would be a big help too.
In the list of supported input file formats, txt files are not included.
If I write a JSON file, please give me some clue how I should arrange it and how to read the file in Revit.
Many thanks in advance.
T

Thank you for your query.
The slightly more complex question is how to generate multiple output files.
That is answered by the article
on How to generate dynamic number of output with Design Automation for Revit V3.
In passing, it also mentions multiple input files, saying:
"... For the zipped input file, it's well documented at https://forge.autodesk.com/en/docs/design-automation/v3/tutorials/revit/step6-post-workitem/, but for the output zipped result, it's not so clear..."
Trying to follow that link, I note that it is out of date.
The updated link is:
https://forge.autodesk.com/en/docs/design-automation/v3/tutorials/revit/step7-post-workitem/
Looking at the additional notes on input arguments, I see the instructions on how to pass JSON input data directly in the workitem itself.
I would assume that you can also use a different prefix instead of data:application/json, such as data:application/text, to pass in the data in its current form.
Please try that out and let us know how it works for you.
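Purely as an illustration, and not something I have tested here, a workitem submission with the JSON points embedded inline might look roughly like this Python sketch; the activity id, argument names, URLs and token are all placeholders that have to match your own activity definition:

import json
import requests

points = [[1.002, 20.502, 21.706], [12.502, 5.502, 7.706]]

workitem = {
    "activityId": "YourNickname.YourActivity+prod",  # placeholder activity
    "arguments": {
        "rvtFile": {"url": "https://example.com/input.rvt"},  # placeholder input model
        "inputJson": {"url": "data:application/json," + json.dumps(points)},
        "result": {"verb": "put", "url": "https://example.com/signed-upload-url"},  # placeholder output
    },
}

response = requests.post(
    "https://developer.api.autodesk.com/da/us-east/v3/workitems",
    headers={"Authorization": "Bearer <access_token>"},  # placeholder token
    json=workitem,
)
print(response.status_code, response.text)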
Alternatively, you can just stay on the safe side and convert your text data to JSON format.
There are innumerable ways of doing so.
The most minimalistic and simple would look like this:
[1.002, 20.502, 21.706,
12.502, 5.502, 7.706,
21.002, 15.502, 14.706,
...]
That represents one single array of doubles.
A slightly more structured approach might be to pass in an array of triples of doubles like this:
[[1.002, 20.502, 21.706],
[12.502, 5.502, 7.706],
[21.002, 15.502, 14.706],
...]
As you see, it is not hard.
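Just to illustrate how the second variant could be consumed, here is a minimal Python sketch; in the Design Automation add-in itself, you would do the equivalent with the .NET JSON deserialiser of your choice and create Revit XYZ points from each triple. The file name is a placeholder:

import json

# Read the array-of-triples JSON shown above; 'points.json' is a placeholder name.
with open("points.json") as f:
    triples = json.load(f)

for x, y, z in triples:
    # In the Revit add-in, this is where you would create an XYZ(x, y, z).
    print(x, y, z)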
I hope this helps.

Related

Extract JSON Data From ThingSpeak API

So I want to get the value from one of my fields in my ThingSpeak channel. I'm able to extract data from my channels, but I want to get only one specific field.
I read the documentation, and the API link looks like this:
https://api.thingspeak.com/channels/<channel_id>/feeds.json?results=1
When I opened the link, it showed this:
{"channel":{"id":1688112,"name":"ESP8266 - Web Controlled LED","latitude":"0.0","longitude":"0.0","field1":"Command","field2":"Red LED","field3":"Green LED","field4":"Blue Led","created_at":"2022-03-29T00:36:06Z","updated_at":"2022-04-06T03:12:36Z","last_entry_id":443},"feeds":[{"created_at":"2022-04-10T07:06:01Z","entry_id":443,"field1":null,"field2":"0","field3":"0","field4":"0"}]}
So my question is: how do I extract the data, for example from my field2 where "field2":"0"?
I want to use it in my HTML project, where it can later drive some functions in my content.
Thanks!
It really depends on the program or language you use.
But usually you can find a JSON library to install in your environment.
With it, you can extract any field from the JSON response.
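For example, in Python with the requests library, a minimal sketch might look like this (the channel id is a placeholder); in a browser-based HTML project, you would do the same with a fetch call in JavaScript and read the parsed object's fields:

import requests

# Placeholder channel id; results=1 returns only the latest feed entry.
url = "https://api.thingspeak.com/channels/<channel_id>/feeds.json?results=1"
data = requests.get(url).json()

latest = data["feeds"][0]   # the single entry returned by results=1
print(latest["field2"])     # e.g. "0" in the sample response above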

Trying to parse a JSON file but it seems the format is different or something is wrong with the JSON file

Hi, I'm trying to parse any of the files from the link below. I've tried reaching out to the owner of the data dumps, but nothing works when trying to parse the files as proper JSON. No program we use (Power BI, Jupyter, Excel, anything really) wants to recognise the files as JSON, and we can't figure out why. I was wondering if anyone could help figure out what the issue is, as this dataset is very interesting to me and my fellow students. I hope I'm using the word 'parsing' correctly.
The link to the data dumps is below:
https://files.pushshift.io/reddit/comments/
The file I downloaded (I just tried one at random) was handled just fine by jq, my preferred command-line tool for processing JSON files.
jq accepts an input consisting of a sequence of JSON objects, which is what I found when I decompressed the test file. This format is commonly known as JSON lines, and many tools can handle it. The Wikipedia article on JSON streaming contains more information and a (possibly outdated) list of tools.
If your tools aren't capable of handling more than one JSON object in an input, you could turn the files into something which you can handle by adding a comma to the end of every line except the last one (since each JSON object is a single line) and then surrounding the whole input inside a pair of brackets to turn the sequence into a JSON list. Since JSON does not actually care about newlines, it would be sufficient to add a line containing [ at the beginning and a line containing ] at the end. I don't know what command-line tools you have available and are comfortable with, but the task shouldn't be too difficult.
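If it helps, the same idea can also be handled in a few lines of Python without any conversion, reading the decompressed dump one JSON object per line; the file name is a placeholder:

import json

# Placeholder file name for the decompressed dump, one JSON object per line.
with open("comments.jsonl") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        obj = json.loads(line)
        # Work with one comment object at a time, e.g. peek at its keys.
        print(list(obj)[:5])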

Edit a large JSON file

How can I edit a large JSON manually?
I have a large JSON file, about 100 MB. I'd like to manually inspect some attributes, and then add more attributes to some of the objects.
I'd start off by looking at a subset of the file, say the first 100 objects, and gradually scale up to maybe 250, then a thousand, etc.
Can someone suggest a language or software (I'm running Windows) that excels at this task?
Some previous suggestions that aren't working or can't work:
Sublime - Could never load the file. Loading bar forever. Had to kill.
NotePad++ - Could never load. Froze. Had to kill.
Anything online - The data is confidential.
More Python and Jupyter information:
import json

with open(path, 'r') as f:
    data = json.load(f)

for i, (k, v) in enumerate(data.items()):
    print(i, k, v)
    if i == 2:
        break
Causes an error. I think it has to do with Jupyter, but I'm not sure.
IOPub data rate exceeded.
The notebook server will temporarily stop sending output
to the client in order to avoid crashing it.
To change this limit, set the config variable
`--NotebookApp.iopub_data_rate_limit`.
Current values:
NotebookApp.iopub_data_rate_limit=1000000.0 (bytes/sec)
NotebookApp.rate_limit_window=3.0 (secs)
That makes me wonder if going about it this way is just dumb.
Possible Solutions
Build a custom app using TKinter
Just don't use a Jupyter Notebook
What you can do is write a simple GUI program: use Tkinter to create a window with a text area to show the JSON, a text box where you input how many objects you want to see, a button named Next (or similar) to show the next chunk, and one more button to save. The functionality of each item would be as follows.
First, you read the complete JSON in Python and turn it into a dict.
Next button - this keeps iterating based on the value in the text box. You could write a custom generator that yields the required number of values at a time.
Save button - this saves the current JSON into a new JSON file or, if you can, you could write a function to update the current JSON directly.
Text area - take the dictionary chunk produced by the Next button's generator, convert it to JSON and show it as the output.
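A rough, untested Tkinter sketch of that idea might look like the following; the file names are placeholders, and wiring manual edits from the text area back into the dict is left out:

import itertools
import json
import tkinter as tk

PATH = "big.json"            # placeholder input file

with open(PATH) as f:
    data = json.load(f)      # assumes the top level is a dict

items = iter(data.items())   # iterator over (key, value) pairs

root = tk.Tk()
root.title("JSON pager")

count_var = tk.StringVar(value="100")
tk.Entry(root, textvariable=count_var).pack()

text = tk.Text(root, width=100, height=40)
text.pack()

def show_next():
    # Show the next chunk of objects; the chunk size comes from the text box.
    n = int(count_var.get() or "0")
    chunk = dict(itertools.islice(items, n))
    text.delete("1.0", tk.END)
    text.insert(tk.END, json.dumps(chunk, indent=2))

def save():
    # Write the (possibly programmatically modified) dict to a new file.
    with open("edited.json", "w") as out:   # placeholder output file
        json.dump(data, out, indent=2)

tk.Button(root, text="Next", command=show_next).pack()
tk.Button(root, text="Save", command=save).pack()

root.mainloop()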
If you are using Linux (or have an opportunity to transfer the file to *nix), you might wish to check the number of lines in the file via
wc -l myfile.json
Let's say, for the purpose of simplicity, that your file has 2530000 lines and you wish to split it into chunks of 100k lines each; you can use any of the commands available in your distro to split the file into the desired chunks and then edit them one by one.
If you are comfortable with going the "Linux way", check out some of the hints given on other topics, e.g.
edit multi-GB file when vi editor doesn't work
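If you would rather stay on Windows, the same line-based splitting can be sketched in a few lines of Python; note that the resulting chunks are plain text pieces and not necessarily valid JSON on their own (file names are placeholders):

# Split a huge text file into 100k-line chunks for editing one by one.
chunk_size = 100_000
part = 0
buffer = []

with open("myfile.json") as src:
    for line in src:
        buffer.append(line)
        if len(buffer) == chunk_size:
            with open(f"myfile.part{part:03d}.txt", "w") as out:
                out.writelines(buffer)
            buffer = []
            part += 1

if buffer:  # write the final, shorter chunk
    with open(f"myfile.part{part:03d}.txt", "w") as out:
        out.writelines(buffer)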
I hope it helps!
The only viewer I have used that works on large files (I had files up to 250 MB) is Dadroit. It is fast to view and comes with search.
Now, to edit, I use vi: I search for the location and make local edits. Vim or another simple editor should work on Windows. Have you tried VS Code? 100 MB shouldn't be too large for it.
The other awesome terminal tool for viewing and editing data is Visidata. I have had mixed luck with it working on json files.
Not the best answer, but the problem with reading the JSON seems limited to Jupyter Notebooks (or even the limitations of my laptop).
Working in Spyder or running from the command line circumvents the Jupyter error mentioned in the original question.
It'd be great if someone knew how to tweak Jupyter to avoid this problem (sorry, I'm not sure how yet).
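For what it's worth, the error message itself names the setting to change; a commonly suggested workaround (which I have not verified) is to start the server with a higher limit, for example:
jupyter notebook --NotebookApp.iopub_data_rate_limit=1000000000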
For an editor, try Notepad++.
For a language, try Python.
Since you haven't given your data structure, I can't give a more detailed answer.

Angular 5: How to integrate HTML data (which is formatted text) in a .docx file?

I'm still a bit of a newbie in the code game, and I would like some advice from senpai.
Context:
I'm making an Angular 5 app which has a form that also uses QuillJS, a rich text editor, for just one question (the previous questions are simple input fields for strings or numbers). My goal is to allow my users to download the form and the QuillJS text they completed as a .docx (Word) file. And of course I'm doing this because I want to keep the formatted text from QuillJS; otherwise I would have just grabbed a good ol' string.
Issue:
The point is, I'm already building a docx file for the first questions of the form, and the only method I've found so far to put my HTML string from QuillJS into a Word-readable data type is to use the html-docx-js library.
This post even explains how. But, BUT, I don't want to use the saveAs function (see the post), which creates a file and puts the content in it. I want to put the content into the docx file I'm already creating.
So here is my question: how would you, senpai, do it?
The thing is that I've got a Blob file (cf. the post), but I don't know how to put it into my docx file. I tried to see if the FileReader function could do the job, but well... I don't get how to integrate this special Blob file type (which is: application/vnd.openxmlformats-officedocument.wordprocessingml.document) into the docx file.
Maybe there is another way; I'm open to any suggestions and don't mind at all changing my way of doing it.
Thank you. Save the internet, give me a tip.
The official documentation for html-docx-js does not mention any options other than the asBlob method. I suggest two options:
Decoding the DOCX:
The Blob file type is not special; the blob is just a binary representation of the docx. I found in an SE question that the docx is in fact a zipped XML document. You could unzip it using JSZip or another JS solution, then read it using FileReader and try to deal with it in a DOM manner. I'm not qualified to go into details of how that could work.
Adding HTML to the user input first and then outputting it as a whole:
This changes the way you want to do it. In this approach, I would first create formatted HTML with the data you collected in the other parts of the questionnaire. Then you append the rich data from the rich editor. Finally, you take this HTML data and save it into a single file using the asBlob function.
The second solution may strip some customization from your original approach, but it seems much faster to implement.

Load a CSV template and write data to it via Java

I have a CSV template file, say, having 10 columns.
I would like to load this CSV template and then write data to the relevant cells (say, only to 5 of the 10 cells) through a Java program.
I went through JSAPAR, SuperCSV etc., but am not sure whether these libraries have exactly the "stuff" that I need.
Is there any framework supporting this kind of operations?
Check out FreeMarker: http://freemarker.org/
Open your text file.
Enter FreeMarker parameters for the required cells.
Your template file may look something like this:
"Templatetext1","text2","text4","${myVal4}","${myVal5}","text6","${myVal7}","${myVal8}","${myVal9}","textInCell10"
Pass in the values, and you have your CSV from the template.
If you want to write multiple rows, you can use other elements like <#list> etc.
OpenCSV is generally considered the best CSV toolkit for Java. It's a very lightweight library that makes working with CSV dead simple. I would recommend looking at it, since it's not in the list of things you've tried yet.