Question on the Ethereum ERC-1400 standard

According to the definition of ERC-1400 here
https://github.com/ethereum/eips/issues/1411
it includes ERC-1643: Document Management Standard.
Now my question is: how are these documents/files stored?
Are they stored in the contract as a hash?
Are they uploaded somewhere? If so, where?
Also, while we're at it, where can I find sample code/resources for the various ERC standards? I don't seem to find any sample code for these ERC standards anywhere.

Note that both standard proposals (ERC-1400 and ERC-1643) are currently (April 2021) still in the draft phase, and have been for over 2 years since they were created. This means they haven't been approved by the core team, and not many developers are going to follow an unapproved standard (or publish code samples for it).
How are these documents/files stored?
The draft of ERC-1643 only defines an interface (function names, argument datatypes, ...), not the actual implementation. How to store the data is up to each developer and their use case.
The string _uri that the standard defines as one of the arguments can point to (a small sketch follows this list):
IPFS (decentralized storage; my guess is that this is going to be the most common use case)
an off-chain file sharing service such as Google Drive
or even a URL accessible only on some private network
basically any valid URI (so even an ftp://, skype: or tel: link)
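Since the draft only pins down the interface, one plausible pattern is to keep the file itself off-chain (for example on IPFS) and record only its URI plus a keccak256 integrity hash on-chain. Below is a minimal, untested sketch using web3.py; the contract address, account, file name and IPFS CID are placeholders, and the ABI fragment simply mirrors the draft's setDocument(bytes32 _name, string _uri, bytes32 _documentHash). Nothing here is mandated by the standard itself.

```python
# Sketch only: keep the document off-chain, store its URI and hash on-chain.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # local dev node

# ABI fragment for the draft ERC-1643 setDocument function.
ERC1643_ABI = [{
    "name": "setDocument",
    "type": "function",
    "stateMutability": "nonpayable",
    "inputs": [
        {"name": "_name", "type": "bytes32"},
        {"name": "_uri", "type": "string"},
        {"name": "_documentHash", "type": "bytes32"},
    ],
    "outputs": [],
}]

# Replace with your deployed security token's address.
TOKEN_ADDRESS = "0x0000000000000000000000000000000000000000"
token = w3.eth.contract(address=TOKEN_ADDRESS, abi=ERC1643_ABI)

with open("prospectus.pdf", "rb") as f:   # placeholder file
    content = f.read()

doc_hash = Web3.keccak(content)           # 32-byte integrity hash of the file
doc_name = Web3.keccak(text="prospectus") # bytes32 key used to look the doc up
uri = "ipfs://QmYourCidHere"              # file pinned to IPFS beforehand (placeholder CID)

tx_hash = token.functions.setDocument(doc_name, uri, doc_hash).transact(
    {"from": w3.eth.accounts[0]}
)
```

Anyone retrieving the document later can fetch the file from the URI and recompute its keccak256 hash to verify it matches what the contract recorded.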


Populating a Maximo field using a db function: Why is this a bad practice?

In a separate SO question, I asked how to populate a Maximo field using a db function:
Take value from FieldA, send to a db function, and return value to FieldB
A Stack Overflow community member was kind enough to answer the question and provided this advice:
And all that said, you should just use the automation script to do what you have the database function doing, if at all possible. To be more blunt, what you are wanting to do is not considered good practice. So, make sure to include in your script's comments your justification for not following good practice.
If we assume that there aren't any out-of-the-box methods for doing what I want (Spatial Query), then why would referencing a database function from Maximo be a bad practice?
(Bear in mind that I'm new to the IT industry. I would benefit from layman's terms.)
I can be a little verbose, so I'll apologize up front for that. And it may seem like I'm wandering, but I'll try to bring it back together at the end.
As I said in my answer to your first question, Take value from FieldA, send to db function, return value to FieldB, calling a stored procedure (or stored function or whatever) from an automation script is not "good" practice. That isn't to say, dogmatically, that it shouldn't ever be done, but to say that, as a rule, it should be avoided. When making an exception to the good practice rule is the best way to solve a particular problem, your code should document why you chose (or were forced) to make an exception. And I stand by that answer to your first question, which made no mention of a special circumstance.
If there are no out-of-the-box configuration options for doing what you want, such as crossovers or relationships or domains or etc. in Maximo, then your next option should be in-product customization options (also known as small "c" customizations), if they exist. It so happens that in the case of Maximo you have "automation scripting" or "autoscripting" in Python or JavaScript, with all (Java) classes in the JVM's / server's classpath at your disposal (possibly including Maximo Spatial's Java class methods), as an in-product customization option. Using examples from Maximo 76 Scripting Features, you can even figure out how to call RESTful APIs, like those exposed by ESRI's ArcGIS, over HTTP or HTTPS.
If in-product (small "c") customizations don't work or don't work well enough (such as causing performance problems), then it is generally acceptable, though not supportable, to customize the product itself (aka a big "C" customization). (Generally acceptable, as many companies would accept that rationale for developing a big "C" customization, but not supportable, as the vendor will ask you to remove your Customization and reproduce your problem if a problem is found and if it is at all conceivable that your Customization could be contributing to the problem in any way.) In the case of Maximo, writing your own Java classes or stored procedures is generally considered a big "C" customization.
In the case of Maximo, and you could probably generalize that to any COTS product, updating Maximo data from a stored procedure is considered exceptionally bad practice. This is because such updates are not subjected to Maximo's business rules and logic, which can lead to data integrity problems, support problems, and more. In particular, triggers often assume that Maximo has made database updates in a particular order (parent data being inserted before child data, for example) when its documentation explicitly disclaims commitment to such order. (If it doesn't anymore, it used to.)
All that in mind, if out of the box Maximo doesn't provide a configuration for doing what you need, and if you can't use autoscripting to do what you want, even with access to all of Maximo's and Java's libraries (in that order of preference), then it would be acceptable to use an automation script to call a database function to calculate a value for you to store via Maximo. In fact, in that scenario, calling a function from your script would be far better than having a trigger set the value, because, assuming you update Maximo via its API, such as mbo.setValue("attribute","value"), your script will still leave the auditing, security, validation, data integrity, and other business rules in operation. As a bonus, any professional Maximo consultants (like me) you bring on to help with projects will waste less time (read: your money) trying to figure out what you are doing and why, so they don't break it.
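To make the shape of that concrete, here is a rough, untested Jython sketch of the kind of automation script described above, assuming an attribute launch point on a hypothetical FIELDA. The attribute names and the call_spatial_function helper are placeholders; how you actually invoke the database function (and whether you should at all) depends on your Maximo version and on the justification you document for the exception.

```python
# Rough sketch only (hypothetical attribute names, hypothetical helper).
# The implicit 'mbo' variable is supplied by the automation script launch point.

def call_spatial_function(value):
    # Placeholder: invoke the database function however you decide to
    # (e.g. something like "SELECT myschema.spatial_calc(?) FROM dual")
    # and return its result. Details depend entirely on your environment.
    raise NotImplementedError

field_a = mbo.getString("FIELDA")        # read the source attribute
result = call_spatial_function(field_a)  # compute the derived value

# Writing the result back through the Mbo API (rather than a database
# trigger) keeps validation, security, auditing and other business rules
# in play, as described above.
mbo.setValue("FIELDB", result)
```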
I hope that helps.

Why does this model fail?

Here is the data set:
https://gist.github.com/kirkstrobeck/d8b768867890807f9dc9
When using the Google Prediction API, the job stays in RUNNING for about 30 minutes and then fails with ERROR: INTERNAL ERROR.
Why does it fail? It seems to be a standard, consumable regression-model data set.
When attempting to answer this question, I looked at the API you speak of as well as its requirements. These requirements lie in the file format and how the text in said file is formatted. The first thing I will point out is that the Google Prediction API expects training data that "is uploaded to Google Cloud Storage as a CSV (comma-separated values) file." Your file is a TXT (at least on GitHub), but it appears to have the correct structure of a CSV. However, when you look at the standards for this file type, almost everyone has a different way they want it done. In the case of Google, they have very strict requirements on the file format (they also have some good examples here: cloud.google.com/prediction/docs/developer-guide#examples). Long story short, you shouldn't have spaces after the commas between your columns; that may cause an error in processing, since it doesn't match either the Wikipedia description of CSV or Google's requirements.
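If the extra spaces are indeed the problem, a quick way to normalize the file before uploading is something like the sketch below; the file names are placeholders, and it simply strips spaces after the delimiter and drops blank lines.

```python
# Rewrite a loosely formatted delimited file as a strict CSV:
# no spaces after commas, no blank lines. File names are placeholders.
import csv

with open("dataset.txt", newline="") as src, \
     open("dataset.csv", "w", newline="") as dst:
    reader = csv.reader(src, skipinitialspace=True)  # tolerate ", value"
    writer = csv.writer(dst)
    for row in reader:
        if row:  # skip empty lines
            writer.writerow([cell.strip() for cell in row])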

What's the point of oEmbed API endpoints and URL schemes vs. link tags?

The oEmbed specification mentions two different ways of finding the oEmbed content of a URL:
Knowing the API endpoint of the website and passing it, through a GET parameter, the URL you want info about, provided it matches the URL pattern the site declared.
Discovering the URL of the oEmbed version via a <link rel="alternate" type="application/json+oembed" ... /> (or text/xml+oembed) element in the HTML head (illustrated below).
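For concreteness, here is a minimal sketch of mechanism 2 (discovery) in Python with the requests library; the page URL is a placeholder.

```python
# Minimal oEmbed discovery sketch: fetch the page, find the alternate
# <link> tag, then fetch the oEmbed document it points to.
from html.parser import HTMLParser
import requests

class OEmbedLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.href = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type") in ("application/json+oembed",
                                      "text/xml+oembed")):
            self.href = a.get("href")

page = requests.get("https://example.com/videos/123")  # placeholder URL
finder = OEmbedLinkFinder()
finder.feed(page.text)

if finder.href:
    oembed = requests.get(finder.href).json()  # title, html, provider, ...
```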
The 2nd way seems more generic, as you don't have to store and maintain a whole list of providers. Moreover, lists of providers are a sign of a centralized internet, where only a few actors exist. That approach is hardly scalable.
I can see a use for the 1st approach, though, for websites that can parse resources made available by someone else. For example, I can provide an oEmbed version of video pages from website Foo. However, for several reasons, mainly security-related, I wouldn't trust someone who says "I can parse resource X for you" unless X's author is OK with that, which brings us back to approach 2.
So my question is: what did I miss here? What's the use of the 1st method of dealing with oEmbed? For instance, why store (and maintain up-to-date) a whole list of endpoints and patterns like oohEmbed does if you have a generic way of discovering it on-the-fly and for virtually any resource on the internet?
As a very closely related question, which I think may be asked at the same time (please correct me if I'm wrong): what happens if one doesn't provide a central endpoint for oEmbed contents, but rather, say, expects a '?version=oembed' parameter on each URL, which returns the oEmbed version instead of the standard one?
If I recall correctly, supporting both mechanisms was a compromise that we figured would help drive adoption. It's much easier to persuade large web properties to add a single endpoint vs. adding markup (that's irrelevant to most clients) to every response body. It was a pragmatic choice.
Longer term we planned to leverage some of the work Eran Hammer-Lahav was doing around discovery rather than re-inventing it (poorly, again). Unfortunately, his ideas still haven't gotten much traction and the web still lacks a good, standardized way to do this sort of thing.
I was hoping to find an answer here, but it looks like everyone else is as confused as we are. The advantage of using option 1, in my opinion, is that it only needs one JSON request instead of a potentially expensive HTML request followed by the JSON request. You can always use option 2 as a fallback in case you can't match a pattern in your pre-baked list of oEmbed providers.
oEmbed discovery is a major security concern. WordPress, for example, has a whitelist of supported oEmbed providers.
Suppose that any random URL on the internet could trigger an oEmbed embed. That would mean anyone could hack your site.
Steps:
Create a new site and add oEmbed discovery to it.
Post that URL to a form on your site. Now your site performs the oEmbed request on my behalf.
Exploit:
by denial of service (DoS): e.g. redirect the URL to a tarpit or feed it a 1 GB JSON response.
by cross-site scripting (XSS): inject arbitrary HTML into pages that other people can see.
by stealing the admin's session cookie via XSS: now the attacker can log in to your CMS, upload files, and exploit even more.
It's XSS to the max, with little to stop it. The only sane thing to do is whitelist proper endpoints. That's why oEmbed endpoints are explicitly listed.
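A tiny sketch of that whitelisting idea in Python; the allowed hosts are just examples:

```python
# Only run oEmbed discovery/requests against hosts you explicitly trust.
from urllib.parse import urlparse

OEMBED_ALLOWLIST = {"www.youtube.com", "vimeo.com", "www.flickr.com"}  # examples

def is_allowed_oembed_target(url):
    host = (urlparse(url).hostname or "").lower()
    return host in OEMBED_ALLOWLIST

print(is_allowed_oembed_target("https://vimeo.com/12345"))       # True
print(is_allowed_oembed_target("https://evil.example/payload"))  # False
```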
If you want something more scalable, you might like www.noembed.com and www.embedly.com. They provide oEmbed support for various sites that don't do oEmbed themselves.

Publishing an application in public domain [closed]

During my last internship, I took an open source tool and enhanced it as a part of my project. Because of my growing interest in that tool, I took it home and added some more functionalities to make it more useful for others outside and then thought of publishing it.
The original source code is available in public domain without any restrictions, but since I worked on this tool during my internship, I wanted to know whether I have to take permission from my employer before publishing it.
Although I want to publish it, my mind tells me NO, as the code is now the property of my employer.
Edit 1:
This is what the original tool writer says about the code:
"This code is released into the public domain without restriction"
Yes, if you changed it as part of your work for a company, then it is that company that owns the copyright for what you do. So you cannot publish your changes without permission from your employer. If you were modifying a freely available tool, though, you may want to ask your employer for permission to publish the code; many employers will allow this if it wouldn't significantly affect them to do so.
You say that the original source code was in the public domain. That's fairly rare; most of the time the original source code is still copyrighted, but available under a free license (and sometimes, code is posted online without any license listed, which actually means that it is copyrighted and no permission is given for you to make any copies of it or modify it in any way). So, be careful that you are not accidentally violating anyone's copyright by modifying and distributing the code, or that you are complying with any license conditions on it if there are any.
It really depends on what kind of contract you signed with the company you interned at. Most contracts would make the IP you added to the project the company's IP, hence legally you are not allowed to publish it as your own.
You also need to keep in mind that most open-source license agreements necessitate that you release any derivatives under the same licensing agreement. Hence, you wouldn't always be able to commercially publish something that had an open-source component, unless you released your code (or part of your code) as open source.
Usually software written at work is the property of the workplace. You should ask for permission, and then you can release it as an open source project.
As for the open source license, see the various licenses available.
IANAL, but if you're ABSOLUTELY and CLEARLY sure that the code is in the public domain, you can do whatever you like with it. Any entity, corporate, non-corporate, commercial, non-commercial, individual or group, that releases ANYTHING into the public domain has waived their right to claim copyright on whatever they release. Since it does say in the license file that the code is in the public domain (and with emphasis on 'no restrictions'), it is your legal right and entitlement to do whatever you like with it.

Tools to help reverse engineer binary file formats

What tools are available to aid in decoding unknown binary data formats?
I know Hex Workshop and 010 Editor both support structures. These are okay to a limited extent for a known fixed format but get difficult to use with anything more complicated, especially for unknown formats. I guess I'm looking at a module for a scripting language or a scriptable GUI tool.
For example, I'd like to be able to find a structure within a block of data from limited known information, perhaps a magic number. Once I've found a structure, then follow known length and offset words to find other structures. Then repeat this recursively and iteratively where it makes sense.
In my dreams, perhaps even automatically identify possible offsets and lengths based on what I've already told the system!
Here are some tips that come to mind:
From my experience, interactive scripting languages (I use Python) can be a great help. You can write a simple framework to deal with binary streams and some simple algorithms. Then you can write scripts that will take your binary and check various things. For example:
Do some statistical analysis on various parts. Random data, for example, will tell you that a part is probably compressed/encrypted. Zeros may mean padding between parts. Scattered zeros may mean integer values or Unicode strings, and so on. Try to spot various offsets. Try to convert parts of the binary into 2- or 4-byte integers or into floats, print them, and see if they make sense. Write some functions that will search for repeating or very similar parts in the data; this way you can easily spot headers.
Try to find as many strings as possible, and try different encodings (C strings, Pascal strings, UTF-8/16, etc.). There are some good tools for that (I think Hex Workshop has such a tool). Strings can tell you a lot.
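As a rough illustration of the kind of throwaway script meant here (the file name and the offset are arbitrary placeholders):

```python
# Throwaway-analysis sketch: pull printable ASCII strings out of a blob
# and try interpreting bytes at a given offset as integers and floats.
import re
import struct

def find_strings(data, min_len=4):
    # Runs of printable ASCII at least min_len bytes long.
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

def peek(data, offset, count=8):
    # Print a few candidate interpretations of the bytes at 'offset'.
    for i in range(count):
        chunk = data[offset + i * 4: offset + i * 4 + 4]
        if len(chunk) < 4:
            break
        (u32,) = struct.unpack("<I", chunk)   # little-endian unsigned int
        (f32,) = struct.unpack("<f", chunk)   # little-endian float
        print(f"+{i*4:04x}  u32={u32:<12}  f32={f32:g}")

with open("mystery.bin", "rb") as f:
    data = f.read()

print(find_strings(data)[:20])  # the first few strings found
peek(data, 0x40)                # guess an offset and see if the values make sense
```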
Good luck!
For Mac OS X, there's a great tool that's even better than my iBored: Synalyze It!
(http://www.synalysis.net/)
Compared to iBored, it is better suited for non-blocked files, while also giving full control over structures, including scriptability (with Lua). And it visualizes structures better, too.
Tupni: to my knowledge not directly available from Microsoft Research, but there is a paper about this tool which may be of interest to someone wanting to write a similar program (perhaps open source):
Tupni: Automatic Reverse Engineering of Input Formats (ACM Digital Library)
Abstract
Recent work has established the importance of automatic reverse engineering of protocol or file format specifications. However, the formats reverse engineered by previous tools have missed important information that is critical for security applications. In this paper, we present Tupni, a tool that can reverse engineer an input format with a rich set of information, including record sequences, record types, and input constraints. Tupni can generalize the format specification over multiple inputs. We have implemented a prototype of Tupni and evaluated it on 10 different formats: five file formats (WMF, BMP, JPG, PNG and TIF) and five network protocols (DNS, RPC, TFTP, HTTP and FTP). Tupni identified all record sequences in the test inputs. We also show that, by aggregating over multiple WMF files, Tupni can derive a more complete format specification for WMF. Furthermore, we demonstrate the utility of Tupni by using the rich information it provides for zero-day vulnerability signature generation, which was not possible with previous reverse engineering tools.
My own tool "iBored", which I released just recently, can do parts of this. I wrote the tool to visualize and debug file system formats (UDF, HFS, ISO9660, FAT, etc.), and implemented search, copy and later even structure and template support. The structure support is pretty straightforward, and the templates are a way to identify structures dynamically.
The entire thing is programmable in a Visual BASIC dialect, allowing you to test values, read specific blocks, and all.
The tool is free and works on all platforms (Windows, Mac, Linux), but as it's a personal tool that I just released to the public to share it, it's not well documented.
However, if you want to give it a try, and like to give feedback, I might add more useful features.
I'd even open source it, but as it's written in REALbasic, I doubt many people will join such a project.
Link: iBored home page
I still occasionally use an old hex editor called A.X.E., Advanced Hex Editor. It seems to have largely disappeared from the Internet now, though Google should still be able to find it for you. The last version I know of was version 3.4, but I've really only used the free-for-personal-use version 2.1.
Its most interesting feature, and the one I've had the most use for deciphering various game and graphics formats, is its graphical view mode. That basically just shows you the file with each byte turned into a color-coded pixel. And as simple as that sounds, it has made my reverse-engineering attempts a lot easier at times.
I suppose doing it by eye is quite the opposite of doing automatic analysis, though, and the graphical mode won't be much use for finding and following offsets...
The later version has some features that sound like they could fit your needs (scripts, regularity finder, grammar generator), but I have no idea how good they are.
There is Hachoir, which is a Python library for parsing any binary format into fields that you can then browse. It has lots of parsers for common formats, but you can also write your own parsers for your files (e.g. when working with code that reads or writes binary files, I usually write a Hachoir parser first to have a debugging aid). It looks like the project is pretty much inactive by now, though.
Kaitai is an open-source language for describing binary structures in data streams. It comes with a translator that can output parsing code for many programming languages, for inclusion in your own program code.
My project icebuddha.com supports this, using Python to describe the format in the browser.
A cut'n'paste of my answer to a similar question:
One tool is WinOLS, which is designed for interpreting and editing vehicle engine management computer binary images (mostly the numeric data in their lookup tables). It has support for various endian formats (though not PDP, I think) and for viewing data at various widths and offsets, defining array areas (maps) and visualizing them in 2D or 3D with all kinds of scaling and offset options. It also has a heuristic/statistical automatic map finder, which might work for you.
It's a commercial tool, but the free demo will let you do everything but save changes to the binary and use engine management features you don't need.