What is the difference between SQLAlchemy Core and ORM?

What is the difference of purpose between SQLAlchemy Core and SQLAlchemy ORM?

The ORM is, as the name implies, an object-relational mapper: Its purpose is to represent database relations as Python objects.
The core is a query builder. Its purpose is to provide a programmatic means to generate SQL queries (and DDL) -- but the results of those queries are just tuples (with a little extra magic), not objects of your own ("your" being, in this case, "the application developer's") design.
In general, if you're trying to programmatically build queries (particularly based on information only available at runtime), you should be using the core. If you're trying to build your application MVC-style and want database-backed objects to be the "model", you should be using the ORM.
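To make the distinction concrete, here is a minimal sketch (not from the question; the table, column, and filter names are invented, and an in-memory SQLite database stands in for a real one). Core builds the statement programmatically, and the results come back as plain row tuples:

```python
# Assumes SQLAlchemy 1.4+; names here are illustrative only.
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, insert, select)

engine = create_engine("sqlite:///:memory:")
metadata = MetaData()
users = Table("users", metadata,
              Column("id", Integer, primary_key=True),
              Column("username", String))
metadata.create_all(engine)

with engine.connect() as conn:
    conn.execute(insert(users), [{"username": "alice"}, {"username": "bob"}])
    # Core shines when filters are only known at runtime:
    filters = {"username": "alice"}
    stmt = select(users)
    for col, value in filters.items():
        stmt = stmt.where(users.c[col] == value)
    rows = conn.execute(stmt).all()
    print([tuple(r) for r in rows])  # row tuples, not mapped objects
```

Note that the result rows are tuple-like, not instances of an application-defined class - that part is the ORM's job.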

The SQLAlchemy ORM is developed on top of SQLAlchemy Core.
So, if we want, we can use SQLAlchemy Core to execute raw SQL queries after creating an engine. The SQLAlchemy engine object provides a set of methods to perform basic operations, such as connect(), execute(), etc.
For example:
engine = create_engine('mysql://scott:tiger@localhost/test')
connection = engine.connect()
result = connection.execute("select username from users")
for row in result:
    print("username:", row['username'])
connection.close()
If you want the ORM features of SQLAlchemy, you have to use the ORM part, where you can create Python classes that are treated as tables, with class attributes treated as columns. On top of that, SQLAlchemy provides many more methods to make object-relational mapping easier.
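For illustration, a minimal sketch of the ORM side (the class, table, and column names are invented for the example, and an in-memory SQLite engine is assumed): a Python class is mapped to a table, and its attributes to columns.

```python
# Assumes SQLAlchemy 1.4+; names are illustrative only.
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    username = Column(String)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(username="scott"))
    session.commit()
    # The query returns User objects, not bare tuples:
    user = session.query(User).filter_by(username="scott").one()
    print(user.username)
```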


Adjacency List Model vs Nested Set Model for MySQL hierarchical data?

There are two ways to work with hierarchical data in MySQL:
Adjacency List Model
Nested Set Model
A major problem with the Adjacency List Model is that we need to run one query per node to get the path through the hierarchy.
The Nested Set Model does not have this problem, but every added node requires a MySQL UPDATE of the left and right values of all other nodes.
My hierarchical data is not static, like the product categories of an e-commerce site would be: users are constantly being registered in a hierarchical sequence.
In my application, alongside these constant user registrations, I also need to retrieve the hierarchical path up to the first node of the hierarchy.
Analyzing my situation, which of the two alternatives would be best for my application?
The Nested Set Model is nowadays not commonly used in databases, since it is more complex than the Adjacency List Model: it requires managing two “pointers” instead of a single one. The Nested Set Model was introduced in databases at a time when it was complex or impossible to run recursive queries that traversed a hierarchy.
Since 1999, standard SQL has included so-called Recursive Common Table Expressions, or Recursive CTEs, which make it simpler (and standardized!) to write queries that traverse a recursive path within a hierarchy with any number of levels.
All the major DBMSs have now implemented this feature, with one notable exception: MySQL. But in MySQL you can overcome this problem with the use of stored procedures. See, for instance, this post on StackOverflow, or this post on dba.stackexchange.
So, in summary, this is my advice:
If you can still decide which DBMS to use, strongly consider some alternatives: for instance, if you want to stick with an open-source database, use PostgreSQL, use the Adjacency List Model, and go with Recursive CTEs for your queries.
If you cannot change the DBMS, you should still go with the Adjacency List Model and use stored procedures like those cited in the references.
UPDATE
This situation is changing with MySQL 8, which is currently in development and will integrate Recursive CTEs, so from that version on the Adjacency List Model will be simpler to use.
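As a concrete sketch of this advice, here is an Adjacency List hierarchy walked with a Recursive CTE. SQLite is used only because it ships with Python and supports WITH RECURSIVE; the same SQL works in PostgreSQL and in MySQL 8+. Table and column names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT);
    INSERT INTO users VALUES (1, NULL, 'root'), (2, 1, 'child'), (3, 2, 'grandchild');
""")
# Path from a given node up to the first node of the hierarchy,
# in a single round trip instead of one query per level.
path = conn.execute("""
    WITH RECURSIVE ancestors(id, parent_id, name) AS (
        SELECT id, parent_id, name FROM users WHERE id = ?
        UNION ALL
        SELECT u.id, u.parent_id, u.name
        FROM users u JOIN ancestors a ON u.id = a.parent_id
    )
    SELECT name FROM ancestors
""", (3,)).fetchall()
print([row[0] for row in path])  # ['grandchild', 'child', 'root']
```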

Symfony MySQL subquery best practice : Native, QueryBuilder or DQL

Here is a simple MySQL query I want to use in a Symfony2 project:
SELECT * FROM
(
SELECT n.sdate, n.edate FROM `news` n
UNION
SELECT ss.sdate, ss.edate FROM `stagesession` ss
) AS sub
ORDER BY sub.sdate
In fact, this query will be a little more complicated, with more aliases, filters and joins with other tables.
Do I have to convert it into a DQL query with the createQueryBuilder, or is it better to simply use createNativeQuery from Doctrine?
My personal Best Practice with Doctrine is:
Query (QB vs. DQL vs. SQL):
use QB if building your query is more conditional than just passing some parameters, like if($onlyActive) $qb->andWhere('x.type = 5'); (I don't like string concat stuff)
use QB for compatibility reasons to pagination toolkits
use DQL for simple selects
use SQL if DQL-query not possible (e.g. DB-native expressions MySQL/Oracle/MSSQL, some weird statistics or hacky queries with UNION or huge subqueries)
finally, you can also use SQL if you're only using a small data subset of a very huge DB (like when writing some plugin software); if, on the other hand, the database schema is quite small, you could create some entities from it and revalidate them (for example when you deploy) as a system test. But if the schema is too complicated, then QB or DQL would also be overkill for accessing such a database, because you'd have to define entities to work with.
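As an illustration of that last point, the question's UNION-inside-a-subquery statement is trivial to run as native SQL. A runnable sketch using Python's bundled SQLite driver (the sample dates are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE news (sdate TEXT, edate TEXT);
    CREATE TABLE stagesession (sdate TEXT, edate TEXT);
    INSERT INTO news VALUES ('2024-03-01', '2024-03-02');
    INSERT INTO stagesession VALUES ('2024-01-15', '2024-01-16');
""")
# The UNION of both tables, sorted by start date - exactly the kind of
# statement that is painless as native SQL but awkward in DQL.
rows = conn.execute("""
    SELECT * FROM (
        SELECT n.sdate, n.edate FROM news n
        UNION
        SELECT ss.sdate, ss.edate FROM stagesession ss
    ) AS sub
    ORDER BY sub.sdate
""").fetchall()
print(rows)  # earliest start date first
```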
Result (orm vs. flat):
use ORM in business code wherever possible to have max. readable code (consider lazy loading)
use ORM in complicated nested views (no huge tables) to have nice clean code in your template (consider eager loading)
use flat arrays for read-only tables/lists
use flat arrays for optimization reasons when dealing with lots of data (and caching is not possible)
And always keep in mind that you should first write simple code, and only if it's too slow, optimize it with eager/lazy loading, Query/Result caching, and HTTP caching. Finally, if you e.g. deal with some database synchronization or a data importer, you may have to use flat arrays or fall back to native implementations - but don't underrate the ORM ;).

SQLAlchemy Sessions and other queries

Looking at SQLAlchemy tutorials, I quite often see queries in the form of
SomeClass.query.filter(...)
And then often with a session object
session.query(SomeClass).count()
What is the deal with the first notation? I thought I would always need a session to retrieve data from the database.
The first notation is a shortcut available when using the Contextual Session API. When using the declarative extension it is convenient to set it on the Base, but it can be applied to any model class without the need for declarative.
To enable it, first set it up using scoped_session.query_property, usually like below:
Session = scoped_session(sessionmaker(bind=engine))
Base.query = Session.query_property()
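Put together, a runnable sketch looks like this (an in-memory SQLite engine and an invented User model stand in for the real application):

```python
# Assumes SQLAlchemy 1.4+.
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

engine = create_engine("sqlite:///:memory:")
Session = scoped_session(sessionmaker(bind=engine))

Base = declarative_base()
Base.query = Session.query_property()  # enables SomeClass.query

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

Base.metadata.create_all(engine)
Session.add(User(name="alice"))
Session.commit()

# Both notations now go through the same contextual session:
count_via_class = User.query.count()
count_via_session = Session.query(User).count()
print(count_via_class, count_via_session)
```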

How to make POCO work with Linq-to-SQL with complex relationships in DDD

I am struggling to find a way to make POCOs work with Linq-to-Sql when my domain model is not table-driven - meaning that my domain objects do not match up with the database schema.
For example, in my domain layer I have an Appointment object which has a Recurrence property of type Recurrence. This is a base class with several subclasses each based on a specific recurrence pattern.
In my database, it makes no sense to have a separate AppointmentRecurrences table when there is always a one-to-one relationship between the Appointment record and its recurrence. So, the Appointments table has RecurrenceType and RecurrenceValue columns. RecurrenceType has a foreign key relationship to the RecurrenceTypes table because there is a one-to-many relationship between the recurrence type (pattern) and the Appointments table.
Unless there is a way to create the proper mapping between these two models in Linq-to-Sql, I am left with manually resolving the impedance mismatch in code.
This becomes even more difficult when it comes to querying the database using the Specification pattern. For example, if I want to return a list of current appointments, I can easily create a Specification object that uses the following Expression: appt => appt.Recurrence.IsDue. However, this does not translate into the Linq-to-SQL space because the source type of the Expression is not one that L2S recognizes (e.g. it's not the L2S entity).
So how can I create the complex mapping in Linq-to-SQL to support my domain model?
Or, is there a better way to implement the Specification pattern in this case? I'd thought about using interfaces that would be implemented by both my domain object and the L2S entity (through partials), but that's not possible with the impedance mismatch between the two object graphs.
Suggestions?
Unfortunately, Linq to SQL pretty much forces you into a class-per-table model, it does not support mapping a single entity class to several database tables.
Even more unfortunately, there are very few ORMs that support more complicated mappings, and vanishingly few that do while also offering decent LINQ support. The only one I'm even remotely sure of is NHibernate (our experience with Entity Framework rates it really no better than L2S in this regard).
Also, trying to use the specification pattern in LINQ expressions is going to be quite the challenge.
Even with ORMs, and even with a really strong abstracting ORM like NHibernate, there is still a large impedance mismatch to overcome.
This post explains how to use the specification pattern with linq-to-sql. The specifications can be chained together which builds up an expression tree that can be used by your repository and therefore linq-to-sql.
I haven't tried implementing it yet, but the linq-to-entities version is on my to-do list for a project I am currently working on.
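The chaining idea itself is language-agnostic; the following Python sketch (invented names, in-memory data instead of a database, not the linked post's code) shows specifications composed into a single predicate. In C#/LINQ the same composition happens over expression trees, which is what lets the repository translate the result to SQL:

```python
# A minimal Specification-pattern sketch; names are illustrative only.
class Spec:
    def __init__(self, predicate):
        self.predicate = predicate

    def __call__(self, candidate):
        return self.predicate(candidate)

    def and_(self, other):
        # Chain two specifications into one combined specification.
        return Spec(lambda c: self(c) and other(c))

    def or_(self, other):
        return Spec(lambda c: self(c) or other(c))

is_due = Spec(lambda appt: appt["due"])
is_active = Spec(lambda appt: appt["active"])
current = is_due.and_(is_active)

appointments = [
    {"due": True, "active": True},
    {"due": True, "active": False},
]
matching = [a for a in appointments if current(a)]
print(len(matching))  # 1
```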

Challenges with Linq to SQL in .NET

Let's say I use Linq to SQL to interact with a database from C#. What challenges might I face, in terms of architecture, performance, type safety, object orientation, etc.?
Basically, Linq to SQL generates a class for each table in your database, complete with relation properties and all, so you will have no problems with type safety. The use of C# partials allows you to add functionality to these objects without messing around with Linq to SQL's autogenerated code. It works pretty well.
As tables map directly to classes and objects, you will either have to accept that your domain layer mirrors the database design directly, or you will have to build some form of abstraction layer above Linq to SQL. The direct mirroring of tables can be especially troublesome with many-to-many relations, which is not directly supported - instead of Orders.Products you get Order.OrderDetails.SelectMany(od => od.Product).
Unlike most other ORMs, Linq to SQL does not just dispense objects from the database and allow you to store or update objects by passing them back into the ORM. Instead, Linq to SQL tracks the state of objects loaded from the database and allows you to change the saved state. It is difficult to explain and strange to understand - I recommend you read some of Rick Strahl's blog posts on the subject.
Performance-wise, Linq to SQL does pretty well. In benchmarking tests it shows speeds of about 90-95% of what a native SQL reader would provide, and in my experience real-world usage is also pretty fast. Like all ORMs, Linq to SQL is affected by the N+1 selects problem, but it provides good ways to specify lazy/eager loading depending on context.
Also, by choosing Linq to SQL you choose MSSQL - there do exist third party solutions that allow you to connect to other databases, but last time I checked, none of them appeared very complete.
All in all, Linq to SQL is a good and somewhat easy-to-learn ORM which performs okay. If you need features beyond what Linq to SQL offers, take a look at the new Entity Framework - it has more features, but is also more complex.
We've had a few challenges, mainly from opening the query construction capability to programmers that don't understand how databases work. Here are a few smells:
//bad scaling
//Query in a loop - causes n roundtrips
// when c roundtrips could have been performed.
List<OrderDetail> od = new List<OrderDetail>();
foreach (Customer cust in customers)
{
    foreach (Order o in cust.Orders)
    {
        od.AddRange(dc.OrderDetails.Where(x => x.OrderId == o.OrderId));
    }
}
//no separation of
// operations intended for execution in the database
// from operations intended to be executed locally
var query =
    from c in dc.Customers
    where c.City.StartsWith(textBox1.Text)
    where DateTime.Parse(textBox2.Text) <= c.SignUpDate
    from o in c.Orders
    where o.OrderCode == (OrderCodes)Enum.Parse(typeof(OrderCodes), "Complete")
    select o;
//not understanding when results are pulled into memory
// causing a full table load
List<Item> result = dc.Items.ToList().Skip(100).Take(20).ToList();
Another problem is that one more level of separation from the table structures means indexes are even easier to ignore (that's a problem with any ORM though).