I have a database component that I'm trying to make as general as possible. Is it possible to accomplish this:
Take in a custom class that I don't have the definition for
Recreate that class locally from the foreign instance
Basically, I can't include the definition of the objects that will be stored in the database, but I want the database to process the raw data of whatever class is passed in, store it, and be able to provide it again as an object.
Ideally I could cast it back to its custom class when the object comes back from the database.
It sounds like what you are asking for is serialization.
Serialization in AS3 is possible through a few different methods. I recommend you refer to this article, as it describes the method quite clearly.
To elaborate: once you serialize your object, you send it to the server and pair it with a key in a database. Later, you can deserialize it back into the original object by downloading it from the server again.
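In AS3 the usual tools for this are ByteArray.writeObject()/readObject() (with registerClassAlias() so typed objects survive the round trip). The overall flow, sketched here in Python with pickle standing in for the AS3 serializer and a dict standing in for the database, looks like this:

    import pickle

    # Stand-in for the "custom class" whose definition the database never sees.
    class Project:
        def __init__(self, name):
            self.name = name

    store = {}  # stand-in for the database table: key -> raw bytes

    # Serialize: the object becomes an opaque byte blob the store can keep.
    store["project:1"] = pickle.dumps(Project("cinema"))

    # Deserialize: the blob becomes a full instance of the original class again.
    restored = pickle.loads(store["project:1"])
    print(restored.name)  # -> cinema

Note that pickle, like AS3's readObject(), still needs the class definition available on the reading side, which is exactly the pitfall the next answer raises.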
I think you're going to find that there are a lot of pitfalls with what you want to do. I suspect that you'll find over the long haul that you can solve the problem in other ways, since someone, somewhere needs a definition of the Class you're instantiating (you also need to think about what happens if you have two instances with conflicting definitions).
Probably a better way to go is to make your app more data-driven, where every object can be built from the data that describes it. If you need to be able to swap out implementations, consider storing the Class definitions in external swfs and downloading those based on paths or other information stored in the database. Again, you need to consider what will happen if the implementations collide between multiple swfs.
Can you expand on what you're trying to do? It's easier to give you clearer instructions with more information.
I want to build an application that uses data from several endpoints.
Let's say I have:
JSON API for getting cinema data
XML Export for getting data about ???
Another JSON API for something else
A CSV file for some more data ...
In my application I want to bring all this data together and build views for it and so on ...
My idea was to set up a database by creating schemas for all these data sources, so I can write some kind of "import scripts" that I can run whenever I want to pull in the latest data.
I thought of schemas because I want to be able to easily adapt to a new API with any kind of schema.
Please enlighten me about the possibilities and best practices out there (theory and practice if possible :P)
You are totally right to build a database. But the real problem is probably not going to be how to store your data. It's going to be how to make it fit together logically and semantically.
I suggest you first take a good look at what your endpoints can provide. Get several samples from every source and analyze them if you can. How will you know which data is new? How can you match it against existing data and against data from other sources? If existing data changes or gets deleted, how will you detect and handle that? What if sources disagree on something? How and when should you run the synchronization? What will you do if one of your sources goes down? Etc.
It is extremely difficult to make data consistent if your data sources are not. As a rule, if the sources are different, they are not consistent. Thus the proverb "garbage in, garbage out". We humans have no problem dealing with small inconsistencies, but algorithms cannot work correctly if there are discrepancies. Even if everything fits together on paper, one usually forgets that data can change over time...
At least that's my experience in such cases.
I'm not sure whether you want to display all the data in the same view or create different views for each source. If you want to display the data in the same view, like a grid, I would recommend using inheritance or an interface, depending on your data and needs. I would also recommend setting this structure up in the database: use different tables for the different sources, plus a parent table related to all of them that has a type associated with it (there's a sketch of this after the links below).
Here's a good thread with discussion about choosing an interface or inheritance.
Inheritance vs. interface in C#
And here are some examples of representing inheritance in a database.
How can you represent inheritance in a database?
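To make the parent-table idea concrete, here is a minimal sketch using joined-table inheritance in SQLAlchemy (Python rather than the C# of the linked thread; all table and column names are invented for illustration):

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    # Parent table: one row per item, with a discriminator column per source.
    class SourceItem(Base):
        __tablename__ = "source_item"
        id = Column(Integer, primary_key=True)
        source_type = Column(String(30))  # e.g. "cinema_json", "xml_export"
        __mapper_args__ = {"polymorphic_on": source_type,
                           "polymorphic_identity": "item"}

    # One child table per source, joined to the parent by foreign key.
    class CinemaItem(SourceItem):
        __tablename__ = "cinema_item"
        id = Column(Integer, ForeignKey("source_item.id"), primary_key=True)
        film_title = Column(String(200))
        __mapper_args__ = {"polymorphic_identity": "cinema_json"}

Querying SourceItem then yields items from every source in one grid-friendly result set, while each subclass keeps its own columns.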
I have a very large JSON object that I want to put in a NoSQL database.
I would like to know:
First, how do I generate the database schema based on that JSON object?
Second, is there a way to put this object into the database automatically, without manually specifying which value (in the JSON object) goes in which column (in the database)?
I hope I was clear enough. Thanks!
Since you haven't specified which NoSQL database you're using in particular, for convenience, I'll assume you're using MongoDB when I talk about things that are implementation specific.
First off, you should know that NoSQL databases are by nature "schema-less". You could still implement your own schema (in your app, not the db), but that's optional, and mostly done just for validation purposes or to let future developers understand the planned structure of your data better. Read the Dynamic Schemas section in this article to know more. Here is an SO answer explaining how you would do that in mongoose, and here is the official guide/doc for it.
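That link covers mongoose on the JavaScript side. As a rough Python analog (names invented, and assuming MongoDB 3.6+ for $jsonSchema support), the server itself can optionally validate documents when a collection is created:

    from pymongo import MongoClient

    db = MongoClient()["mydb"]  # connection details are an example

    # Optional: have the server validate documents against a JSON Schema.
    db.create_collection("movies", validator={
        "$jsonSchema": {
            "bsonType": "object",
            "required": ["title"],
            "properties": {"title": {"bsonType": "string"}},
        }
    })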
Second, NoSQL databases don't work in terms of columns or rows. Rather, you need to think in terms of collections and documents. So to answer your question: yes, when you have a JSON object, you shove it in directly (after applying any required formatting, if you've implemented a schema as above). You don't enter data value by value (unless you've intentionally set it up to do so).
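To make that concrete, a minimal pymongo sketch (database and collection names invented):

    import json
    from pymongo import MongoClient

    collection = MongoClient()["mydb"]["things"]

    # The parsed JSON object goes in as one document: no columns, no mapping.
    doc = json.loads('{"name": "example", "tags": ["a", "b"], "size": {"h": 10}}')
    result = collection.insert_one(doc)
    print(result.inserted_id)  # MongoDB assigns an _id automatically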
It sounds to me like you need to strengthen your fundamental understanding of how NoSQL works, as you seem to be confusing yourself with concepts that belong to other DBMSs. Here is a neat slideshow to get you started, and the article I linked above also gives you a decent introduction.
After you're done, consider installing MongoDB or something similar and just playing around with the command-line interface to get the hang of it.
I have a task: I need to create a data access layer which can work with multiple data sources (JSON files, XML files, SQL Server). But I just have no idea how it should be done.
I have tried creating my own context by inheriting from the DbContext class (something like JsonContext), which contains the paths to the JSON files and does the I/O operations, but now I think it looks kinda stupid :).
Maybe I can create a basic repository interface and implement it for each data source? Or maybe there are patterns or practices that can help me?
It's not a bad idea to take the DbContext that EntityFramework generates for you, and use that as your common base class for all of the different data sources (JsonContext inherits from DbContext). However, the problem I see with this approach is that when you instantiate a JsonContext, it will call the constructors of the base class, DbContext, and try to connect to SQL Server, which is not what you want.
I don't know if there is an accepted pattern for doing what you're trying to do, so I think you're probably just going to have to invent your own common interface or base class that all the concrete data sources will have to implement.
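As a rough sketch of that "invent your own interface" route (in Python rather than C#, with invented names, and sqlite3 standing in for SQL Server to keep it self-contained), each data source gets its own implementation behind one contract:

    import json
    import sqlite3  # stand-in for SQL Server, to keep the sketch self-contained
    from abc import ABC, abstractmethod

    class ProjectRepository(ABC):
        """The common contract every data source has to satisfy."""
        @abstractmethod
        def get_all(self) -> list[dict]: ...

    class JsonProjectRepository(ProjectRepository):
        def __init__(self, path: str):
            self._path = path
        def get_all(self) -> list[dict]:
            with open(self._path) as f:
                return json.load(f)

    class SqlProjectRepository(ProjectRepository):
        def __init__(self, conn: sqlite3.Connection):
            self._conn = conn
        def get_all(self) -> list[dict]:
            rows = self._conn.execute("SELECT id, name FROM projects")
            return [{"id": r[0], "name": r[1]} for r in rows]

    # Calling code depends only on ProjectRepository, never on a source type.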
For a rather complex unit-testing environment, we want to dynamically change the tables contained in the metadata. Removing tables from it is supported using .remove(table) or even .clear(). But how can such a table be re-added later?
There is an _add_table(name, schema) method on MetaData, but this doesn't seem to be the official way. Also, Table._set_parent(metadata) seems more appropriate if one has to go the "use internal methods" route.
There is also Table.tometadata(metadata), which creates a new table instance attached to the new metadata. So I could create a completely new MetaData and attach all the "now needed" tables to it. But that would mean all the remaining code would need to know about the new table instances, connected to the new metadata. I don't want to go this route.
UPDATE: We're now considering fork/multiprocessing to load the tables only in a subprocess (isolated environment) so that only that subprocess is "tainted" and the next tests won't be affected. I am noting this here for completeness; it's not strictly related to the main question, but might help others who find it.
Mutation of a MetaData object in a non-additive way is barely supported, and overall you shouldn't build use cases on top of it. Using new MetaData objects that contain the schema you're looking for in a particular scenario will work best.
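A minimal sketch of that suggestion, with invented table names: build a fresh MetaData per scenario and copy across only the tables that scenario needs.

    from sqlalchemy import Column, Integer, MetaData, Table

    base_meta = MetaData()
    users = Table("users", base_meta, Column("id", Integer, primary_key=True))
    orders = Table("orders", base_meta, Column("id", Integer, primary_key=True))

    # Per scenario: a fresh MetaData holding copies of just the needed tables.
    test_meta = MetaData()
    users_copy = users.to_metadata(test_meta)  # tometadata() before SQLAlchemy 1.4

    assert "users" in test_meta.tables
    assert "orders" not in test_meta.tables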
I'm writing a web application using PHP/Symfony2/Doctrine2 and am just finishing up the design of the database. We have to import objects (e.g. Projects, Vendors) into our database that come from different customers with a variety of fields. Some customers have 2 fields in the project object and some have 20, so I was thinking about storing them in MongoDB, since it seems like a good use case for it.
Symfony2 supports both ORM and ODM, so that shouldn't be a problem. Now my question is how to ensure the integrity of the data in both databases, because objects in my MySQL db need to somehow be linked to the objects in MongoDB for integrity reasons.
Are there any better solutions out there? Any help/thoughts would be appreciated.
Bulat implemented a Doctrine extension while we were at OpenSky for handling references between MongoDB documents and MySQL records, which is currently sitting in their (admittedly outdated) fork of the DoctrineExtensions project. You'll want to look at either the orm2odm_references or openskyfork branches. For this to be usable in your project, you'll probably want to port it over to a fresh fork of DoctrineExtensions, or simply incorporate the code into your application. Unfortunately, there is no documentation apart from the code itself.
Thankfully, there is also a cookbook article on the Doctrine website that describes how to implement this from scratch. Basically, you rely on an event listener to replace your property with a reference (i.e. an uninitialized Proxy object) from the other object manager, and the natural behavior of Proxy objects, which lazily load themselves, takes care of the rest. Provided the event listener is a service, you can easily inject both the ORM and ODM object managers into it.
The only integrity guaranteed by this model is that you'll receive exceptions when trying to hydrate a bad reference, which is probably more than you'd get by simply storing an ID from the other database and querying manually.
So the way we solved this problem was by moving to Postgres. Postgres has a datatype called hstore that acts like a NoSQL column. Works pretty sweet
UPDATE
Now that I'm looking back, go with jsonb instead of json or hstore, as it allows you to store a real nested data structure rather than a flat key-value store.
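For illustration, a hedged psycopg2 sketch (table and field names invented) of why jsonb is the more flexible choice: documents with different field counts share one column, and nested values stay queryable.

    import psycopg2
    from psycopg2.extras import Json

    conn = psycopg2.connect("dbname=mydb")  # connection string is an example
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS projects (id serial PRIMARY KEY, data jsonb)")

    # Customers with 2 fields and customers with 20 share the same column.
    cur.execute("INSERT INTO projects (data) VALUES (%s)",
                [Json({"name": "Acme", "contact": {"email": "info@acme.test"}})])

    # Unlike hstore's flat text key-value pairs, jsonb is queryable at any depth.
    cur.execute("SELECT data->'contact'->>'email' FROM projects")
    print(cur.fetchone())
    conn.commit()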