Alias in Django ORM - MySQL

I recently started working on a long-running MySQL-backed Django website. The code is mostly clean and well-written except for one patch in the database that seems to have been hurriedly applied.
Specifically, one of the Models was named tbl_Badge_data. As per the naming conventions applied to the rest of the website, this Model should be named Badge.
Unfortunately, there has already been too much built upon the wrong name, and it is infeasible to change the existing scripts (these include Django queryset operations as well as SQL statements). Replacing all instances of the wrong name is not possible, as not all code is owned by us; there are other users relying on this data.
Is there any way to alias Badge to tbl_Badge_data so that all future development uses the correct name? If so, how will this impact the name of the underlying table? If not, what is the best way to handle something like this without introducing a performance hit (e.g. a "proxy table" or "proxy model" that is a one-to-one map)? As you can imagine, my team doesn't want to invest time in this non-issue and (probably justifiably) wants me to ignore it.

A proxy model is not a one-to-one map; it has no representation in the database, and does exactly what you want.
Even simpler though is just to define an alias in your models; after the definition of tbl_Badge_data, you can just do:
Badge = tbl_Badge_data
and import Badge wherever you need it.
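For illustration, a minimal sketch of the proxy-model variant (assuming tbl_Badge_data lives in the same models.py; the field shown is made up):

from django.db import models

class tbl_Badge_data(models.Model):
    # The existing, badly named model; this field is hypothetical.
    label = models.CharField(max_length=100)

class Badge(tbl_Badge_data):
    class Meta:
        proxy = True  # no new table; Badge reads and writes tbl_Badge_data's table

Queries like Badge.objects.all() then operate on the original table, so the old name keeps working for the existing scripts while new code uses Badge.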


Is it possible to create a SQL database using a Core Data schema

I have a core data schema file with relationships between the entities.
I need to create a SQL database and would like to know if it can be created automatically (MySQL or MS-SQL) using only this file.
Looking at the SQLite DB I see that the relationships are not mapped in any logical way.
First, your assessment that the relationships are "not mapped in any logical way" is not correct. If you look carefully at the Core Data generated database you will discover that the relationships are mapped exactly as in any other old relational database scheme, i.e. with foreign keys referring to rows in other tables.
Also, the naming conventions in these SQLite databases are very transparent (e.g., entity and attribute names start with Z, etc.).
That being said, I would strongly discourage you from hacking the Core Data generated database file, or even from using it to inform another database schema, the reason being that these are undocumented internals that could change at any time without notice and thus break any code you write based on them.
IMO, the most practical thing to do is to rewrite the model quickly in the usual MySQL schema format, and to update it manually whenever you change the managed object model.
If you would like to automate the process, there is a rich set of APIs provided for interpreting and parsing NSManagedObjectModel, including classes like NSEntityDescription, NSAttributeDescription, etc. You could write a framework that iterates through your entities and attributes and generates a text file that is a readable schema for MySQL, complete with information about indexing, versions, etc.
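As a rough illustration of that approach, here is a minimal Swift sketch; the model path and the type mapping are assumptions, and a real converter would also handle relationships, indexes and versioning:

import CoreData

// Load a compiled managed object model (path is hypothetical).
let modelURL = URL(fileURLWithPath: "MyModel.momd")
guard let model = NSManagedObjectModel(contentsOf: modelURL) else {
    fatalError("could not load model")
}

for entity in model.entities {
    guard let table = entity.name else { continue }
    var columns = ["id BIGINT AUTO_INCREMENT PRIMARY KEY"]
    for (name, attr) in entity.attributesByName {
        // Very rough Core Data -> MySQL type mapping; extend as needed.
        let sqlType: String
        switch attr.attributeType {
        case .integer16AttributeType, .integer32AttributeType, .integer64AttributeType:
            sqlType = "BIGINT"
        case .doubleAttributeType, .floatAttributeType:
            sqlType = "DOUBLE"
        case .dateAttributeType:
            sqlType = "DATETIME"
        case .booleanAttributeType:
            sqlType = "TINYINT(1)"
        default:
            sqlType = "VARCHAR(255)"
        }
        columns.append("\(name) \(sqlType)")
    }
    print("CREATE TABLE \(table) (\(columns.joined(separator: ", ")));")
}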
If you go down that route, please make sure to notify us and do post your framework on Github for the benefit of others.
If you use Core Data you can create an SQL-based database using a schema file, but its structure is entirely controlled by the Core Data framework. Apple specifically tells us as developers to leave it alone and not to edit it using libsqlite or any other method. If you do, then Core Data won't have anything to do with you!
In terms of making your own DB using one of Apple's schema files, I'm sure it is possible, but you'd have to know the inner workings of the Core Data framework to even attempt it.
In terms of making your own SQLite DB then you have to sort out all the relationships and mapping yourself.
I think that mixing and matching Core Data resources and custom-built SQLite databases is probably a headache waiting to happen. I have used both methods and find that Core Data is brilliant (especially with iCloud), as long as you're OK with your app being limited to Apple platforms only.

Putting Rails over top of an existing database

I have an application written in PHP/MySQL (symfony, to be specific) that I'd (potentially) like to rewrite in Rails. I know how to create scaffolding for tables that don't exist yet, but how do I get Rails to read my existing table structure and create scaffolding based on that?
Update: it turns out I can run the following command to get Rails to generate models for me:
rails generate scaffold Bank --no-migration
But it doesn't give me forms. I would prefer something that gives me forms.
The answer is db:schema:dump.
http://guides.rubyonrails.org/migrations.html
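For context, db:schema:dump reads the live database and writes db/schema.rb. Assuming database.yml already points at the existing MySQL database:

rake db:schema:dump    # writes db/schema.rb from the existing tables

That gives you a Rails-native record of the legacy schema to work from. Note that scaffolding only generates form fields for attributes you name on the command line, so to get forms you still have to spell them out yourself, e.g. (the attribute names here are made up):

rails generate scaffold Bank name:string swift_code:string --no-migration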
The easiest route is to pretend that you are writing a fresh app with a similar database schema - you can then create your models and migrations with the old schema in mind, but without being restricted by it. At a later stage, you can create a database migration script to copy all the old data into the new schema.
I'm doing this right now. The benefit of this approach is that you can take advantage of all of the rapid development tools and techniques provided by Rails (including scaffolds) without being slowed by trying to retrofit to the exact same schema.
However, if you do decide that you don't like this approach, and you need to map your new models to existing tables, there are a number of configuration options provided by ActiveRecord that let you override the convention-over-configuration naming patterns and map model names to table names, set oddly named ID fields, etc. For example:
class Mammals < ActiveRecord::Base
  # Map this model onto a legacy table and key that don't follow Rails
  # conventions (old-style API; newer Rails uses self.table_name = and
  # self.primary_key = instead).
  set_table_name "tbl_Squirrels"
  set_primary_key :squirrel_id
end
The above will help Rails attempt to read your existing table, but success will depend on how well the existing table structure matches Rails conventions. You may have to supply more configuration information to get it to work, and even then it might not.
Finally, it may be worth considering DataMapper, which I believe is more suited to existing brownfield databases than ActiveRecord because it allows you to map everything; but of course you will need to learn that API if you don't already know it.

Is Manually Assigning an Entity's IDs a Good Idea?

We're developing a system to replace an old application used by our clients.
There are many entities (merchants, salesmen, products, etc.) that must have a manually assigned ID so they can be integrated with other existing systems, e.g. accounting.
We think the best solution is simply to let the user assign the entity's ID manually when the entity is created; we'll suggest the next available ID, and the user will be able to change it if they want. No updates allowed! (muahahaha)
We'd be glad to hear your thoughts about the pros and cons.
Thanks in advance :)
PS: Do you know of any documentation about this topic (entities and IDs)?
UPDATES
We think there are cases where this applies and cases where it doesn't, so...
Additionally, there are cases where the client literally wants a given entity to have an ID they supply; internal organization codes, I think.
Never, ever, ever let the user assign or create the underlying object identifiers. These must be system-maintained.
Imagine the nightmare of trying to figure out which entity a related entity actually goes with in the event that the user picks an id that is already in use.
Instead, you should have a regular entity ID of some type (int, guid, whatever) that the system assigns and uses for links to all dependent objects. Then have an "external" id of some sort that the user can put their own identifier in.
Maybe that relates to an external system in some way, maybe not. Point is, you'll be able to maintain your own consistency regardless of what they do.
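A minimal sketch of that layout in MySQL (table and column names are made up):

CREATE TABLE merchant (
    id INT AUTO_INCREMENT PRIMARY KEY,    -- system-assigned, used for all foreign keys
    external_code VARCHAR(32) NOT NULL,   -- user-supplied ID for integration (accounting etc.)
    name VARCHAR(100) NOT NULL,
    UNIQUE KEY uq_merchant_external_code (external_code)
);

Dependent tables reference id, so whatever the user does with external_code can never break referential integrity, while the unique key still prevents duplicates.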
What is usually done is to specify your own, usually hidden, identifier that is used internally, and then create a second identifier that the user can use as a separate data field. Also, you can get into a bit of concurrency trouble if you try to suggest the next ID.
I guess this boils down to a separation-of-concerns issue. If you've been developing apps for more than a year, you should already know why you need an object identifier in your relational databases. If your users need to concern themselves with assigning and/or managing your underlying object identifiers, then you have a design problem: you are mixing data access/storage concerns with business logic. As others have already said, the best approach is to have two identifiers, one that is transparent to the user and used internally in your app, and another to be used in your integration processes.
I've seen a similar case while working with retailers: there's an internal ID and a product's SKU. The SKU itself could be the ID, yet to allow for a good design we keep both.

Best way to create a SPARQL endpoint for an RDBMS (MySQL database)

I am doing (want to do) some experiments with Linked Open Datasets particularly those put out by governments.
I have an RDBMS (more specifically, MySQL). I designed it with semantic web ideas in mind, i.e. I have information stored as objects, predicates and classes which define objects. In turn, all objects are related to each other through statements of the form subject --> predicate --> object (where the subjects are from the objects table).
I want to be able to query other RDF triple stores from my application and let other triple stores query my data. Is it possible to "set something up" so that this is possible?
I have looked at Jena. Using Jena seems to mean I have to use it as the storage layer rather than MySQL - the only problem with this is that I include a new concept called a category (which I don't think is part of the semantic web languages). I will use categories to help with displaying information (they don't have any other meaning), but using Jena seems to mean that I can't organise predicates under categories for more convenient viewing.
I am using Java, so a Java API is preferred.
It's also possible I misunderstood the purpose of Jena, and maybe that can be of use, but I am not sure how.
I am sure four days from now this question will seem rather silly, but at the moment I am somewhat confused about how to proceed.
I'm not sure what you mean by "a new concept called category", perhaps you can give an example?
If you mean that you want to add additional metadata, perhaps as a way of organizing information in the user interface, there is no need to extend the semantic web languages or storage systems - they can already do what you want.
Suppose you have data for a school from the UK Government schools dataset (using Turtle encoding for brevity):
@prefix sch-ont: <http://education.data.gov.uk/def/school/>.

<http://education.data.gov.uk/id/school/135412>
    a sch-ont:School;
    sch-ont:establishmentStatus
        <http://education.data.gov.uk/def/school/EstablishmentStatus_Open>;
    sch-ont:MSOA <http://statistics.data.gov.uk/id/msoa/E02000001>;
    sch-ont:establishmentName "Guildhall School of Music and Drama";
    ...
You can directly query that data from the SPARQL end-point, or you can download the data and store it locally in your own triple store. Either way, you're perfectly at liberty to add extra information that's useful to your users. For example:
@prefix ankurs-app: <http://ankur.org/example/app/vocab/display#>.

<http://education.data.gov.uk/id/school/135412>
    ankurs-app:category ankurs-app:wkdCool.
You can store this new triple in the same graph as the downloaded data, or you can store it in a separate named-graph to indicate that it's information that has a different provenance than the source data. Either way, it's then simple to query it either programmatically from Jena, or via a SPARQL query.
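For example, a SPARQL query that combines the source data with the app-specific category might look like this (prefixes as above):

PREFIX sch-ont: <http://education.data.gov.uk/def/school/>
PREFIX ankurs-app: <http://ankur.org/example/app/vocab/display#>

SELECT ?school ?name
WHERE {
    ?school a sch-ont:School ;
            sch-ont:establishmentName ?name ;
            ankurs-app:category ankurs-app:wkdCool .
}

This returns every school you have tagged with your own wkdCool category, together with its official name from the government data.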
Designing a layout for efficiently querying schemaless triple-centric data is a well-studied, and hard, problem. Most of the RDF platforms, including Jena, have well-optimised code for querying and updating triples in their own database schemas. You would have to have very good reasons for embarking on your own relational table layout :)
If you really do need to take an existing relational table scheme and map it to a Jena RDF model, look at D2RQ.
Why didn't you just use a triple store for all of your data? If you use a triple store with SPARQL endpoint capability, then you would have a SPARQL-accessible web API. Similarly, many other data sets on the web are exposed as SPARQL endpoints and accessible via HTTP.
There are many triple stores available with persistent storage both in a db and otherwise (Jena + SDB, Mulgara, Virtuoso, Oracle, etc). You could certainly extend Mulgara through their resolvers to support queries against your custom db but I think that's probably a lot of work for not too much real value.
I'm sure you could use existing concepts to handle your notion of categories in RDF or perhaps by layering something over Jena.

What is the best way to build a data layer across multiple databases?

First a bit about the environment:
We use a program called Clearview to manage service relationships with our customers, including call center and field service work. In order to better support clients and our field technicians, we also developed a web site that provides access to the service records in Clearview, plus reporting. Over time, our need to customize the behavior and add new features led to more and more things being tied to this website and its database.
At this point we're dealing with things like a Company being defined partly in the Clearview database and partly in the website database. For good measure we're also starting to tie the scripting for our phone system into the same website, which will require talking to the phone system's own database as well.
All of this is set up and working... BUT we don't have a good data layer to work with it all. We moved to Linq to SQL and now have two DBMLs that we can use, along with some custom classes I wrote before I'd ever heard of Linq, along with some of the old style ADO datasets. So yeah, basically things are a mess.
What I want is a data layer that provides a single front end for our applications, and on the back end manages everything into the correct database.
I had heard something about Entity Framework allowing classes to be built from multiple sources, but it turns out there can only be one database. So the question is, how could I proceed with this?
I'm currently thinking of getting the Linq to SQL classes all set up for each database, then manually writing Linq-compatible front ends that tie those together. Seems like a lot of work, and given Linq's limitations (such as not being able to refresh), I'm not sure it's a good idea.
Could I do something with Entity Framework that would turn out better? Should I look into another tool? Am I crazy?
The Entity Framework does give a certain measure of database independence, insofar as you can build an entity model from one database, and then connect it to a different database by using a different entity connect string. However, as you say, it's still just one database, and, moreover, it's limited to databases which support the Entity Framework. Many do, but not all of them. You could use multiple entity models within a single application in order to combine multiple databases using the Entity Framework. There is some information on this on the ADO.NET team blog. However, the Entity Framework support for doing this is, at best, in an early stage.
My approach to this problem is to abstract my use of the Entity Framework behind the Repository pattern. The most immediate benefit of this, for me, is to make unit testing very simple; instead of trying to mock my Entity model, I simply substitute a mock repository which returns IQueryables. But the same pattern is also really good for combining multiple data sources, or data sources for which there is no Entity Framework provider, such as a non-data-services-aware Web service.
So I'm not going to say, "Don't use the Entity Framework." I like it, and use it, myself. In view of recent news from Microsoft, I believe it is a better choice than LINQ to SQL. But it will not, by itself, solve the problem you describe. Use the Repository pattern.
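As a bare-bones illustration of that pattern (all names here are hypothetical), the application codes against an interface rather than against any specific context:

using System.Collections.Generic;
using System.Linq;

public class Company
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICompanyRepository
{
    IQueryable<Company> Companies { get; }
}

// Production implementations wrap the generated Entity Framework or
// Linq to SQL context for each backing database; for unit tests you
// substitute an in-memory fake like this one.
public class FakeCompanyRepository : ICompanyRepository
{
    private readonly List<Company> _companies = new List<Company>();

    public IQueryable<Company> Companies
    {
        get { return _companies.AsQueryable(); }
    }
}

A composite repository that merges Clearview, website and phone-system data can implement the same interface, so the rest of the application never knows how many databases sit behind it.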
If you want to use tools like Linq2Sql or EF and don't want to have to manage multiple DBMLs (or whatever it's called in EF or other tools), you could create views in your website database that reference back to the Clearview or phone system's DB.
This allows you to decouple your web site from their database structure. I believe Linq2Sql and EF can use a view as the source for an entity; if they can't, look at NHibernate.
This will also let you have composite entities that are pulled from the various data sources. There are some limitations on updating views in SQL Server; however, you can define your own INSTEAD OF trigger(s) on the view, which can then perform the actual insert/update/delete statements.
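For example (database, table and column names below are hypothetical), a view in the website database can join across to the Clearview database using SQL Server's three-part naming:

CREATE VIEW dbo.CompanyView AS
SELECT w.CompanyId,
       w.WebSettings,     -- from the website database
       c.CompanyName,     -- from the Clearview database
       c.ServiceLevel
FROM dbo.WebCompany AS w
JOIN Clearview.dbo.Company AS c
    ON c.CompanyId = w.CompanyId;

Linq to SQL or EF can then map CompanyView like any table, and an INSTEAD OF trigger on the view can route writes back to the correct base tables.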
L2S works perfectly with views in my project. You only need one small trick:
1. Add the secondary DB's table to the current DB as a view.
2. In the designer, add a primary-key attribute to an id field on the view.
3. Only now, add an association to whatever other table you want in the original DB.
Now you should see the view available for navigation.