Just a quick question. JSON is so useful in web development, so I'd like to ask: which app or software can convert IFC to JSON?
IFC is a schema built on top of the STEP (ISO 10303) format, so that's what would need to be converted, not the IFC aspect. It would be quite trivial to convert STEP syntax to JSON; in fact, there is something called ifcXML (https://en.wikipedia.org/wiki/Industry_Foundation_Classes#IFC/ifcXML_specifications, ISO 10303-28), which is IFC using XML syntax instead of STEP syntax anyway (JSON and XML being somewhat similar in structure, there are many XML-to-JSON converters out there).
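To show how mechanical the syntax side is, here is a minimal Python sketch (purely illustrative, not a real converter; the sample record and the naive comma split are assumptions, and real STEP parsing needs a proper grammar for nested lists and quoted strings):

```python
import json
import re

# Matches a simple single-line IFC-SPF record like: #38=IFCWALL('x',#12,$);
RECORD = re.compile(r"#(\d+)\s*=\s*(\w+)\s*\((.*)\);\s*$")

def spf_record_to_json(line):
    m = RECORD.match(line.strip())
    if not m:
        raise ValueError("not a simple SPF record: %r" % line)
    entity_id, entity_type, raw_args = m.groups()
    # Naive split: breaks on commas inside nested lists or quoted strings,
    # which is exactly why a real converter needs a proper STEP parser.
    args = [a.strip() for a in raw_args.split(",")]
    return json.dumps({"id": int(entity_id), "type": entity_type,
                       "attributes": args})

print(spf_record_to_json("#38=IFCWALL('3vB2YO$MX4x',#12,'Wall-01',$);"))
```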
However, there would be no point, as this would not change the fact that the IFC still needs to be processed (not just read) by a client with an understanding of the schema. In this regard it makes no difference what syntax is used, as the power of IFC comes from the higher-level hierarchy of the schema, not the syntax.
It would probably have the same problems as XML for IFC, i.e. huge files in comparison to STEP, so it probably wouldn't be popular, even if there were tools that could process the IFC entities in JSON format.
OK. I've now found that there is a new data format named ifcJSON that can deal with this.
It is a new schema, simpler than IFC-SPF and ifcXML.
So maybe we don't need a converter; a new format is also a good choice.
Related
I'm aware that there are Python and PowerShell methods to convert plain text files, CSVs, etc. into JSON format for upload into NoSQL DBs such as CouchDB.
But according to the CouchDB definitive guide, it seems like there is a native, built-in way of doing this kind of conversion, without the need for a 3rd-party tool.
This older thread appears to hint at this:
Filter and update functions in CouchDB?
This part in particular:
There are other design document functions that are being introduced at the time of this writing, including _update and _filter that we aren't covering in depth here. Filter functions are covered in Chapter 20, Change Notifications. Imagine a web service that POSTs an XML blob at a URL of your choosing when particular events occur. PayPal's instant payment notification is one of these. With an _update handler, you can POST these directly in CouchDB and it can parse the XML into a JSON document and save it. The same goes for CSV, multi-part form, or any other format.
But when I dig deeper I don't find anything concrete.
The supporting wiki link is not clear to me (I'm a beginner with JSON/NoSQL/curl stuff): http://wiki.apache.org/couchdb/Document_Update_Handlers
Hopefully this is a simple yes/no. Any links that explain this better than the one above would also be appreciated. Thank you!
CouchDB supports transforming its internal documents/views into many other formats through the use of show and list functions. It's not a "native" transformation: you define the transformation yourself; it's not magical.
That being said, there is no similar mechanism for the reverse (i.e. converting some arbitrary format into JSON documents), but you're much better off scripting that with a full-featured language and using the bulk docs API to do your imports in batches.
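For instance, a minimal Python sketch of that batch approach against CouchDB's _bulk_docs endpoint (the localhost URL, the database name "mydb", and the file "data.csv" are assumptions for illustration; adjust credentials and field handling for your setup):

```python
import csv
import json
import urllib.request

COUCH_URL = "http://localhost:5984/mydb/_bulk_docs"  # assumes db "mydb" exists

# Each CSV row (under a header line) becomes one JSON document.
with open("data.csv", newline="") as f:
    docs = list(csv.DictReader(f))

payload = json.dumps({"docs": docs}).encode("utf-8")
req = urllib.request.Request(COUCH_URL, data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    # CouchDB returns a per-document result array as JSON.
    print(resp.status, resp.read()[:200])
```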
JSON is better than XML for sure; I was wondering if there is any case where we should use XML instead of JSON.
If speaking in terms of REST, neither is better. Plain XML or plain JSON does not say anything about the data transferred in either format. Though if you use well-known formats like:
application/atom+xml
application/vnd.collection+json
the comparison will boil down to which format suits your needs better.
If you compare XML to JSON from a programming-language perspective, then yes, XML adds an extra layer between code and data, though nothing special. Oh, and XML is a little verbose and larger in terms of bytes.
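As a small illustration of that extra layer (Python, with made-up sample data): the JSON deserializes straight into native types, while the XML tree still has to be walked and converted by hand:

```python
import json
import xml.etree.ElementTree as ET

json_src = '{"user": {"name": "Ann", "age": 31}}'
xml_src = "<user><name>Ann</name><age>31</age></user>"

data = json.loads(json_src)
print(data["user"]["age"] + 1)         # already an int: prints 32

root = ET.fromstring(xml_src)
print(int(root.find("age").text) + 1)  # a text node, converted by hand: prints 32
```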
XML has been around for a long time, and there are a lot of tools in place for it that JSON does not yet have, or whose JSON counterparts are not commonplace or ubiquitous.
XML has XML Schema, RelaxNG, and DTD. JSON does have an equivalent (JSON Schema), but it's not as commonplace.
XML has namespacing, which is great for mixing different document types. JSON does have some ideas on how to do namespacing (such as JSON-LD), but doing this correctly tends to take away exactly what people tend to enjoy JSON over XML for: its simplicity.
Namespacing in XML is everywhere, which gives you a very standard framework to re-use existing XML schemas for integration.
So I don't want to say "you should use XML" or "you should use JSON"; I would rather say that if you need to integrate with existing XML systems, or your needs would strongly benefit from features such as namespacing, schemas, linking, re-use of existing XML documents, XSLT, etc., then XML might be the better choice.
I would love to use protocol buffers, but I am not sure if they fit my use case. Here it is:
I have a Quiz app. This requires a bunch of data, like categories, questions, a list of answers (and which ones are correct). I do not want to be responsible for entering this data - I would prefer to pass it off to a non-programmer to serialize all this data for me, in either XML or JSON. Then my app would just read in the data file.
Does Google's Protocol Buffers fit my use case? Or should I stick to a more traditional format like XML or JSON?
I think not: Protobuf is a binary format, so you would then need to support both Protobuf and a text format like XML or JSON for the hand-entered data.
Also, it does not seem that you would benefit from Protobuf's better performance at all.
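For what it's worth, here is a sketch of how simple the JSON route could be; the file layout and the names ("quiz.json", "categories", "correct") are invented for illustration, not a prescribed schema:

```python
import json

# quiz.json, maintained by a non-programmer, might look like:
# {
#   "categories": [
#     {"name": "Geography",
#      "questions": [
#        {"text": "Capital of France?",
#         "answers": ["Paris", "Rome", "Madrid"],
#         "correct": [0]}
#      ]}
#   ]
# }

with open("quiz.json") as f:
    quiz = json.load(f)

for category in quiz["categories"]:
    for q in category["questions"]:
        correct = [q["answers"][i] for i in q["correct"]]
        print(category["name"], "|", q["text"], "->", ", ".join(correct))
```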
This is a two part question.
1) Is there any way to get a CSV file of all the entity data, including XData, for an AutoCAD DWG, either using AutoCAD or some other method?
2) Is there an easy way to parse an AutoCAD DXF file to get the entity data into a CSV file?
Unfortunately, neither approach provides an easy method, but it is possible with a little effort.
With a DWG file, the file itself is binary, so your best bet would be to write a plugin or script for AutoCAD using .NET or ObjectARX, but this may be a troublesome approach. AutoLISP would be easier, but I don't think you could output to a file.
Getting the entity data out of a DXF would be significantly easier, since DXF is primarily a text format. This would be possible with any programming language, but since there are many possible entities it would take some effort to handle all of the cases. The DXF reference is available on the Autodesk website. XData is also included in the DXF in text form, so that shouldn't be a problem.
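To give a feel for the format, here is a rough Python sketch (an illustration, not a robust converter; it assumes an ASCII DXF named "drawing.dxf" and flattens the group-code/value pairs of the ENTITIES section into a CSV):

```python
import csv

def dxf_pairs(path):
    # An ASCII DXF is a sequence of two-line tags: a group code, then a value.
    with open(path) as f:
        while True:
            code = f.readline()
            value = f.readline()
            if not value:          # end of file
                return
            yield int(code), value.strip()

with open("entities.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["entity", "group_code", "value"])
    current = None
    in_entities = False
    for code, value in dxf_pairs("drawing.dxf"):
        if code == 2 and value == "ENTITIES":
            in_entities = True
        elif code == 0 and value == "ENDSEC":
            in_entities = False
        elif in_entities:
            if code == 0:
                current = value    # a new entity starts (LINE, TEXT, ...)
            elif current:
                writer.writerow([current, code, value])
```

A real converter would still need the DXF reference to interpret each group code per entity type, which is where the effort mentioned above goes.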
You can write output to a file using AutoLISP, even binary output with some sleight of hand. However, writing DXF data to a CSV file, with or without XData, by either reading the data directly (in situ) or by parsing a DXF file, is completely impractical, given the nature of DXF group codes and their associated data. Perhaps the OP can identify what he wants to achieve, rather than specifying what appears to me to be an inappropriate format for the data.
Michael.
Scenario: I'm working on a rails app that will take data entry in the form of uploaded text-based files. I need to parse these files before importing the data. I can choose the file type uploaded to the app; the software (Microsoft Access) used by those uploading has several export options regarding file type.
While it may be insignificant, I was wondering if there is a specific file type that is most efficiently parsed. This question can be viewed as language-independent, I believe.
(While XML is commonly parsed, it is not a feasible file type for sake of this project.)
If it is something exported by Access, the easiest would be CSV, particularly since Ruby contains a CSV parser in the standard library. You will have to do some work determining the dialect of the CSV (what it uses for a delimiter, how it handles quotes); I don't know how robust the Ruby parser is with those issues, but you should also have some control over them from Microsoft Access.
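As a Python illustration of the dialect problem (the Ruby CSV library offers similar options; the file name "export.csv" is hypothetical), the standard library can guess the delimiter and quoting from a sample:

```python
import csv

with open("export.csv", newline="") as f:
    sample = f.read(4096)
    f.seek(0)
    # Sniffer guesses the delimiter/quoting, so a ";"- or tab-delimited
    # Access export can still be read correctly.
    dialect = csv.Sniffer().sniff(sample)
    for row in csv.DictReader(f, dialect=dialect):
        print(row)  # each row as a dict keyed by the header line
```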
You might want to take a look at JSON. It's a lightweight format, and in contrast to XML it's really easy and clean to parse without requiring a huge library on the backend.
It can represent types like strings, numbers, associative arrays (objects), and lists of these.
I would suggest n-SV (where n is some character) for data that does not include n. That will make lexing the files a matter of a split.
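For example, a tiny Python sketch of the n-SV idea, assuming "|" as the separator and a hypothetical records.psv file:

```python
# With a separator that never occurs in the data, lexing really is one split.
with open("records.psv") as f:
    for line in f:
        print(line.rstrip("\n").split("|"))
```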
If you have more flexible data, I would suggest JSON.
If you HAVE to roll your own parser, I would suggest CSV or some other form of delimiter-separated format.
If you are able to use other libraries, there are plenty of options. JSON looks quite fascinating.