Doctrine2 Integer type with specified length - mysql

I'm making a Symfony2 project using Doctrine 2, and I'm using annotations to map my entities to a MySQL database.
I have read the Doctrine 2 documentation, and it says that the length attribute applies only to the string type.
So my question is: is there a way to set a specific length (so no smallint, bigint and so on...) for an integer column through annotations (other than columnDefinition), and if not, why not? In Doctrine 1 I could specify a specific length for integer types.

Because Doctrine is built to manipulate the data itself, it does not care how that data will be displayed by other programs. A length definition for numeric columns is only a display width, useful mainly to DB managers (mostly the mysql CLI client). For the same reason you cannot create triggers in Doctrine either; the equivalent behavior is implemented with Doctrine's own lifecycle events instead.

Related

How to order by a DC2Type:json_array subfield

I'm working in an existing application, and I've been asked to order by a child field of a DC2Type:json_array field in Symfony. Normally I would add the field as a column in the table; in this case that is not possible.
We have a JsonSerializable invoice entity with a normal date attribute, but also a data attribute which contains the due_date. I would like to order by data[due_date] in a Symfony query. Is this at all possible?
tl;dr: No, not really.
According to Doctrine's type mapping matrix, json_array is mapped to MySQL's MEDIUMTEXT column type, which by default does not index its contents as JSON and hence provides little to no performance advantage. (Also, as far as I can tell, Doctrine doesn't provide any JSON functionality besides converting database JSON to and from PHP arrays/nulls.)
Maybe you could do some string-search magic to extract a value and sort by it, but you still wouldn't get the performance boost a proper index provides. Depending on your data, this could get noticeably slow (and eat memory).
The JSON data type is fairly "new" to the relational database world, and mappers like Doctrine have not yet fully adopted it either. Extending Doctrine to handle this data type would probably take a lot of work. Instead, you could rethink your table schema and promote the fields you want to order by into real columns, so you get all the benefits a relational database provides (like indexing).
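When sorting in the database isn't feasible, one pragmatic fallback is to fetch the rows and sort in application code after decoding the JSON payload. A minimal sketch of the idea (the rows, column positions, and due_date key are illustrative assumptions, not the asker's actual schema):

```python
import json

# Hypothetical rows as fetched from the invoice table: (id, date, data_json).
# The data column arrives as a MEDIUMTEXT string containing JSON.
rows = [
    (1, "2020-01-05", '{"due_date": "2020-02-01"}'),
    (2, "2020-01-03", '{"due_date": "2020-01-15"}'),
    (3, "2020-01-04", '{"due_date": "2020-03-10"}'),
]

# Decode each payload and sort by the embedded due_date.
# ISO-8601 date strings sort correctly with plain string comparison.
ordered = sorted(rows, key=lambda r: json.loads(r[2])["due_date"])
print([r[0] for r in ordered])  # ids in due-date order: [2, 1, 3]
```

This trades memory for flexibility: it only works when the result set is small enough to load and sort in the application, which matches the answer's warning about speed and memory.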

properties table pattern vs storing all properties in json column [duplicate]

This question already has answers here:
When can I save JSON or XML data in an SQL Table
(8 answers)
Storing JSON in database vs. having a new column for each key
(10 answers)
Closed 3 years ago.
I'd like some feedback on having all the properties a model can have in a properties table accessed via a relationship (using Laravel relationships) vs. storing all properties/settings in the same table, but in a JSON column.
Currently, my application has a properties table called settings that is also polymorphic in nature, so multiple models can store their properties there. This table has columns like
key (string),
value (string),
type (string) - tells if the value is of string, integer, boolean, json type
so that I do not send strings to the JavaScript frontend, but instead native string, integer, and boolean types, for better handling of types on the frontend. I do this conversion before I send the properties to the frontend, using a PHP function that casts the string values to int, boolean, json, or string, depending on the type.
This means that if a model has 40 properties, each gets stored in its own row, so creating one model leads to creating 40 rows that store all the properties it may have.
Now compare the above approach with one where I just have a single JSON column; we can call it settings, and I dump all these 40 properties there.
What do I win with the JSON column approach? I shave off a table, and I shave off an extra relationship that I need to load on this model every time I run a query. I also avoid having to cast the properties to integer, boolean, json, or string every time I read them (remember the type column above). Keep in mind that these properties do not need to be searchable; I only ever read them. I will never use them in queries to return posts based on these properties.
Which one is the better idea? I'm building a CMS, by the way; you can see it in action here:
https://www.youtube.com/watch?v=pCjZpwH88Z0
As long as you don't try to use the properties for searching or sorting, there's not much difference.
As you said, putting a JSON column in your model table allows you to avoid a JOIN to the properties table.
I think your properties table actually needs to have one more column, to name the property. So it should be:
key (string),
property (string),
value (string),
type (string) - tells if the value is of string, integer, boolean, json type
The downsides are pretty similar for both solutions.
Queries will be more complex with either solution, compared to querying normal columns.
Storing non-string values as strings is inefficient. It takes more space to store a numeric or datetime value as a string than as a native data type.
You can't apply constraints to the properties. No way to make a property mandatory (you would use NOT NULL for a normal column). No way to enforce uniqueness or foreign key references.
There's one case I can think of that gives JSON an advantage. If one of your custom properties is itself multi-valued, there's a straightforward way to represent this in JSON: as an array within your JSON document. But if you try to use a property table, do you store multiple rows for the one property? Or serialize the set of values into an array on one row? Both solutions feel pretty janky.
Because the "schemaless properties" pattern breaks rules of relational database design anyway, there's not much you can do to "do it right." You're choosing the lesser of two evils, so you can feel free to use the solution that makes your code more convenient.
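For the row-per-property variant, the per-type casting the question describes (driven by the type column) can be sketched as a small helper. The function name and the exact type tags are illustrative assumptions; the question only says the column distinguishes string, integer, boolean, and json:

```python
import json

def cast_value(value: str, type_tag: str):
    """Convert a value stored as a string in the properties table back to
    its native type, based on the 'type' column described in the question."""
    if type_tag == "integer":
        return int(value)
    if type_tag == "boolean":
        # Assumes booleans are stored as "1"/"0" or "true"/"false".
        return value in ("1", "true")
    if type_tag == "json":
        return json.loads(value)
    return value  # plain string

print(cast_value("42", "integer"))       # 42
print(cast_value("true", "boolean"))     # True
print(cast_value('["a", "b"]', "json"))  # ['a', 'b']
```

With the single-JSON-column design, this whole translation layer disappears: json.loads on the column yields native types directly, which is exactly the convenience the asker describes shaving off.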

storing big integer in mongo db

I have a Rails app, and one of my tables has a bigint key in MySQL. I am looking to archive some of the data from the MySQL table in MongoDB, but I am not sure which type to use in the field statement within Mongoid to store the original_id. I have no intention of changing the id that Mongoid will generate; I am not looking to change the _id field of the new table.
If you are using Mongoid, you can define the field as Integer. Integers are instances of the Fixnum or Bignum class in Ruby. If any operation on a Fixnum exceeds its range, the value is automatically converted to a Bignum.
Your only choice for numeric values in JavaScript is Number. The largest integer that can safely be represented is 9007199254740991 (2^53 - 1). According to the MySQL docs, the maximum bigint is 9223372036854775807, or twice that if unsigned.
Maybe you should save it as a string if it's important to maintain fidelity.
I do not use Rails, but NumberLong may be available there too. It is a wrapper for 64-bit integers. Combined with a unique index, you may get results similar to what you have in MySQL.
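The two limits quoted in the answers are easy to check numerically. A small sketch (Python integers are arbitrary-precision, so the comparison and the string round-trip are exact):

```python
# JavaScript's Number is an IEEE-754 double: integers are exact only up to 2**53 - 1.
JS_MAX_SAFE_INTEGER = 2**53 - 1   # 9007199254740991
MYSQL_BIGINT_MAX = 2**63 - 1      # 9223372036854775807 (signed)

# A MySQL bigint id can exceed what a JavaScript/JSON consumer represents exactly,
# which is why the answers suggest storing the id as a string for fidelity.
assert MYSQL_BIGINT_MAX > JS_MAX_SAFE_INTEGER

# Round-tripping through a string preserves the exact value for any consumer.
original_id = 9223372036854775807
assert int(str(original_id)) == original_id
print(JS_MAX_SAFE_INTEGER, MYSQL_BIGINT_MAX)
```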

SQL: What if the user doesn't define any datatype for attributes, or makes all the datatypes varchar? [duplicate]

This question already has answers here:
SQLite table with integer column stores string
(2 answers)
Closed 8 years ago.
I am starting out with SQL, learning it using an SQLite database.
While practicing CREATE TABLE, I noticed that even if I don't define any datatype, the table is created and I am able to execute all INSERT and SELECT commands.
Also, if I define all the datatypes as varchar, it works just as well.
Please tell me: is this the right way?
The SQL standard defines field types, there are different operations available for each type, so no, setting all to varchar is not the right way.
On the other hand, SQLite uses a different type system. In effect it is similar to dynamically typed languages, where the type is associated with the values, not with the columns: the declared column type only gives the column an "affinity", and each stored value carries its own type. In SQLite you can store a non-numeric string in a field declared as INTEGER, and it will not only work, but the value will remember that it is a string and behave as one.
For an embedded library, this is a very practical system, but it deviates from the standard, so if you want to learn "SQL", it's better not to rely on SQLite's peculiarities.
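Python's built-in sqlite3 module makes this value-level typing easy to observe: typeof() reports the storage class each value actually received. A small sketch (table and column names are arbitrary):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a VARCHAR, b INTEGER, c)")  # c has no declared type
# Insert a number into the VARCHAR column, a numeric string into the
# INTEGER column, and a plain string into the untyped column.
con.execute("INSERT INTO t VALUES (?, ?, ?)", (123, "45", "hello"))

row = con.execute("SELECT typeof(a), typeof(b), typeof(c) FROM t").fetchone()
print(row)  # ('text', 'integer', 'text')
# VARCHAR (TEXT affinity) coerced 123 to '123'; INTEGER affinity coerced the
# string '45' to the integer 45; the untyped column kept 'hello' as text.
```

Note that a non-numeric string inserted into the INTEGER column would stay text, which is exactly the behavior behind the duplicate question's title.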

Is there a way to override the nvarchar(max) with a set maxlength value using EF 4.2 codefirst configuration?

By default, EF 4.2 Code First maps string properties to nvarchar(max) database columns.
Individually (per property) I can override this convention by specifying the [MaxLength(512)] attribute.
Is there a way of globally applying a configuration that does this? It seems that the configuration API on the model builder only allows per-entity overrides, and that the convention API on the model builder only allows removals. See this question.
No, there is no global configuration available. Custom conventions were removed from EF Code First in the CTP phase.
Per property, you can do it by adding the StringLength attribute just above the property in your model class, like the following. (Remember that you will need to include using System.ComponentModel.DataAnnotations;)
[StringLength(160)]
public string Title { get; set; }
Update
First of all, you cannot create an index on an nvarchar(MAX) column. You can use full-text indexing, but you cannot create a regular index on the column to improve query performance.
From the storage perspective there is no difference between nvarchar(max) and nvarchar(N) when N < 4000. Data is stored in-row, or on row-overflow pages when it does not fit. When you use nvarchar(max) and store more than 4000 characters (8000 bytes), SQL Server uses a different method to store the data, similar to the old TEXT data type: it stores it in LOB pages.
Performance-wise, again, for N < 4000 and (max) there is no difference. Well, technically it affects row-size estimation and could introduce some issues; you can read more about it here: Optimal way for variable width columns
What can affect the performance of the system is the row size. If you have queries that scan the table, a large row size will lead to more data pages per table -> more I/O operations -> performance degradation. If this is the case, you can try vertical partitioning and move the nvarchar field to a different table.