I want a column to have only two values. For example, I want the column active to contain only the values "Y" and "N"; I don't want to use the boolean data type.
I'm looking for something similar to the Lookup Wizard in MS Access. How can this be done?
Use a non-nullable bit.
- What if you want J and N for German, or other letters for other languages? This is client-side formatting.
- Ditto "true" and "false".
- What about y/Y/n/N? Unicode Ys and Ns?
- You'd need a check constraint to restrict the column to Y or N: why bother, when bit gives you this anyway?
- Finally, SQL Server has no boolean type as such, though client code will interpret bit as boolean.
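A minimal sketch, with hypothetical table and column names:
-- dbo.MyTable is a placeholder; 0 = N, 1 = Y
ALTER TABLE dbo.MyTable ADD active BIT NOT NULL DEFAULT 0;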
Edit, after a comment on the question:
If you need to add more values later, then I suggest a lookup table and foreign key. This means you can support new values without changing code (a CHECK constraint) and/or datatypes.
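A sketch of that approach (dbo.MyTable and the active column are placeholders):
CREATE TABLE dbo.ActiveStatus (StatusCode CHAR(1) NOT NULL PRIMARY KEY);
INSERT INTO dbo.ActiveStatus (StatusCode) VALUES ('Y'), ('N');
ALTER TABLE dbo.MyTable ADD CONSTRAINT FK_MyTable_ActiveStatus
    FOREIGN KEY (active) REFERENCES dbo.ActiveStatus (StatusCode);
-- Supporting a new value later is a plain INSERT, no DDL or code change:
INSERT INTO dbo.ActiveStatus (StatusCode) VALUES ('U');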
What you're looking for are Check Constraints
e.g.
ALTER TABLE dbo.Vendors ADD CONSTRAINT CK_Vendor_CreditRating
CHECK (CreditRating >= 1 AND CreditRating <= 5)
Or, for your case:
ALTER TABLE dbo.MyTableName ADD CONSTRAINT CK_MyTable_FieldName_YN
CHECK (FieldName = 'Y' OR FieldName = 'N')
You could use a varchar(1) or nvarchar(1). Put a check constraint on the column stating that only 'Y' and 'N' are allowed as input, to keep data integrity.
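For example (table, column, and constraint names are hypothetical):
CREATE TABLE dbo.MyTable (
    active VARCHAR(1) NOT NULL
        CONSTRAINT CK_MyTable_Active CHECK (active IN ('Y', 'N'))
);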
Grz, Kris.
Related
I want to fix, or validate, the keys of a JSON object in PostgreSQL (v10.7).
For instance, I have a JSON object called service_config which looks like:
{"con_type": "Foo", "capacity": 2, "capacity_unit": "gbps"}
And I have a table:
id(serial) | service_name(char) | service_type(char) | service_config(JSON)
-----------+--------------------+--------------------+---------------------
         1 | com                | ethernet           | {"con_type": "ddc", "capacity": 2, "capacity_unit": "gbps"}
         2 | res                | gpon               | {"con_type": "ftth", "capacity": 1, "capacity_unit": "gbps"}
Now, whenever I insert a row into the table, I want to make sure or validate that the service_config column contains exactly the keys mentioned above, no more, no less. The values of those keys, however, may be null.
Is this possible in Postgres, and/or is there a better way to do this?
Possible solutions:
1. Validate service_config in the backend API and make sure all the keys are there. (Currently in place and working.)
2. Write a function in Postgres to validate service_config on insert and update. (Doable but tedious; see the sketch below.)
Limitation: I cannot add any extensions in Postgres.
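For reference, solution 2 can be fairly small. A sketch, assuming the table is called services (the ?& operator and jsonb_object_keys are both built into Postgres 10, no extensions needed):
-- Helper returns true only when exactly these three keys are present;
-- keys with null values still count as present.
CREATE FUNCTION service_config_is_valid(cfg jsonb) RETURNS boolean
LANGUAGE sql IMMUTABLE AS $$
    SELECT cfg ?& array['con_type', 'capacity', 'capacity_unit']
       AND (SELECT count(*) FROM jsonb_object_keys(cfg)) = 3
$$;
ALTER TABLE services ADD CONSTRAINT service_config_keys_chk
    CHECK (service_config_is_valid(service_config::jsonb));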
I want to make sure or validate that the service_config column contains exactly the keys mentioned above, no more, no less. The values of those keys, however, may be null.
Turn them into columns.
JSON is nice when you need to just dump some data into a row and you're not sure what it's going to be. Now that you are sure what it's going to be, and you want more constraints, that's what columns do best.
alter table whatever add column con_type text;
alter table whatever add column capacity integer;
alter table whatever add column capacity_unit text;
update whatever set
    con_type = data->>'con_type',
    capacity = (data->>'capacity')::integer,
    capacity_unit = data->>'capacity_unit';
alter table whatever drop column data;
The columns will always be there. Their values may be null. You can add per-column check constraints and indexes. No additional validations are necessary.
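For instance (the allowed units here are only an assumption):
alter table whatever add constraint capacity_unit_chk
    check (capacity_unit in ('gbps', 'mbps'));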
If you still need json, use jsonb_build_object.
select
jsonb_build_object(
'con_type', con_type,
'capacity', capacity,
'capacity_unit', capacity_unit
)
from whatever;
And, if you need it for compatibility purposes, you can make this a view.
create view whatever_as_json as
select
*,
jsonb_build_object(
'con_type', con_type,
'capacity', capacity,
'capacity_unit', capacity_unit
) as data
from whatever;
Note that I use text, not char, because there is no advantage to char in Postgres. See the tip in 8.3. Character Types:
There is no performance difference among these three types, apart from increased storage space when using the blank-padded type, and a few extra CPU cycles to check the length when storing into a length-constrained column. While character(n) has performance advantages in some other database systems, there is no such advantage in PostgreSQL; in fact character(n) is usually the slowest of the three because of its additional storage costs. In most situations text or character varying should be used instead.
If the default is set to 'Always', how do I make the only other possible value, let's say, 'Honorary'? Using CHECK, I think? What's the simplest way? If this were JavaScript I'd use if (default) { value = 'Always' } else { value = 'Honorary' }. How do I do that in MySQL? Also, do you know how to set the minimum value of an int column to, let's say, 0? Like from 0 to 999?
This sounds like a check constraint, which is supported in recent versions of MySQL (8.0.16 and later, where CHECK constraints are actually enforced):
alter table t add constraint chk_t_value check (value in ('Always', 'Honorary'));
Another option is to use an enum that only takes on two values.
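Both options might look like this; the table t is from above, and score is a hypothetical int column for the 0 to 999 part of the question:
-- enum variant: only the two values are storable
alter table t modify value enum('Always', 'Honorary') not null default 'Always';
-- range check for an int column (score is a placeholder name)
alter table t add constraint chk_t_score check (score between 0 and 999);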
I have a MySQL database that I wish to convert to Postgres. One issue I encountered is converting tinyint(1) (a synonym for boolean) columns into "true" boolean columns while retaining the default value of the MySQL column, which can be either 0 or 1, whereas in Postgres the respective values are true and false. The SQL I'm trying:
ALTER TABLE "payments" ALTER COLUMN "is_automatic" TYPE boolean USING CAST("is_automatic" as boolean);
The error message:
ERROR: default for column "is_automatic" cannot be cast automatically to type boolean
I would think it would be possible to cast this value somehow. Is this possible, or do I have to add it to the migration script manually?
Edit: I realise I might have explained the issue a bit vaguely, sorry about that. I am using this script (https://github.com/lanyrd/mysql-postgresql-converter) to convert the MySQL database. The values themselves are converted into "true" Postgres booleans by the script just fine, but the columns that were originally booleans in MySQL (represented by tinyint(1)) get their default values dropped. This happens on row 157 of the script, and removing the "DROP DEFAULT" part of the command generates the error above, because the default can't be cast (I guess). My question is better asked this way: in the process of converting a tinyint(1) column, can the default value be "remembered" and later applied again with a "SET DEFAULT" command?
The PostgreSQL ALTER TABLE reference page has an example covering exactly this scenario:
.. when the column has a default expression that won't automatically
cast to the new data type:
ALTER TABLE foo
ALTER COLUMN foo_timestamp DROP DEFAULT,
ALTER COLUMN foo_timestamp TYPE timestamp with time zone
USING
timestamp with time zone 'epoch' + foo_timestamp * interval '1 second',
ALTER COLUMN foo_timestamp SET DEFAULT now();
So, you need to drop the old default, alter the type, then add the new default.
Note that the USING expression has no bearing on the default; it is purely used to convert the existing values in the table. But in any case, there is no implicit cast between integer and boolean (which is also why the default cannot be cast automatically), so you need a slightly more advanced USING expression.
Your statement might look like this:
ALTER TABLE payments
ALTER COLUMN is_automatic DROP DEFAULT,
ALTER COLUMN is_automatic TYPE BOOLEAN
USING is_automatic!=0,
ALTER COLUMN is_automatic SET DEFAULT TRUE;
The USING expression might need a little tweaking; I am assuming here that your existing data has a value of 0 for false and something else for true.
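As for the edit about "remembering" the old default: a migration script can read the existing default from the catalog before dropping it, then reapply it after the type change, e.g.:
-- returns the current default expression, e.g. '1' or '0'
SELECT column_default
FROM information_schema.columns
WHERE table_name = 'payments' AND column_name = 'is_automatic';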
I've been looking for ages to find an answer to this problem, but haven't found an example to help me work out how to implement it.
I'm creating a table in MS SQL Server and want a CHAR(5) column that only allows values whose first 2 characters are letters and whose remaining 3 characters are digits. I've seen things like CHECK (UnitCode NOT LIKE '%[^A-Z0-9]%'), which limits the value to letters and digits, but doesn't control which positions must be letters and which must be digits.
If someone can point me in the right direction, I'd really appreciate it. Thanks.
WHERE column LIKE '[A-Z][A-Z][0-9][0-9][0-9]'
Assuming that a 'letter' really is A-Z and nothing else (e.g. accented or non-European characters).
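As a named check constraint (dbo.MyTable is a placeholder; UnitCode is the column from the question):
ALTER TABLE dbo.MyTable ADD CONSTRAINT CK_MyTable_UnitCode
    CHECK (UnitCode LIKE '[A-Z][A-Z][0-9][0-9][0-9]');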
You should specify a check constraint when creating the table (T-SQL syntax, since this is SQL Server):
CREATE TABLE table_with_data (
    data_to_check CHAR(5),
    CONSTRAINT data_format_chk CHECK (
        SUBSTRING(data_to_check, 1, 2) LIKE '[A-Za-z][A-Za-z]'
        AND SUBSTRING(data_to_check, 3, 3) LIKE '[0-9][0-9][0-9]'
    )
);
Sometimes I am not sure whether to use enum or char(1) in MySQL. For instance, I store statuses of posts. Normally, I only need Active or Passive values in the status field. I have two options:
// CHAR
status char(1);
// ENUM (but too limited)
status enum('A', 'P');
What if I want to add one more status type (i.e. Hidden) in the future? If I have little data, it won't be an issue. But if I have very large data, editing the ENUM type will be a problem, I think.
So what's your advice if we also consider MySQL performance? Which way should I go?
Neither. You'd typically use tinyint with a lookup table. Reasons:
- char(1) will be slightly slower, because comparing uses collation.
- Confusion: as you extend beyond A and P, using a letter limits you as you add more types (see the extensibility point).
- Every system I've seen has more than one client, e.g. reporting. A and P have to be resolved to Active and Passive in each client's code.
- Extensibility: to add one more type ("S" for "Suspended"), you can add one row to a lookup table, or else change a lot of code and constraints, and your client code too.
- Maintenance: the logic lives in 3 places: database constraint, database code, and client code. With a lookup table and foreign key, it can live in one place.
- Enum is not portable.
On the plus side of using a single letter or Enum? Nothing, really.
Note: there is a related DBA.SE MySQL question about Enums. The recommendation there is to use a lookup table too.
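A minimal sketch of the tinyint-plus-lookup approach (all names are placeholders):
CREATE TABLE post_status (
    status_id   TINYINT UNSIGNED PRIMARY KEY,
    description VARCHAR(20) NOT NULL UNIQUE
);
INSERT INTO post_status VALUES (1, 'Active'), (2, 'Passive');
CREATE TABLE posts (
    post_id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    status_id TINYINT UNSIGNED NOT NULL,
    FOREIGN KEY (status_id) REFERENCES post_status (status_id)
);
-- Adding "S" for "Suspended" later is one row, not a schema change:
INSERT INTO post_status VALUES (3, 'Suspended');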
You can use
status enum('Active', 'Passive');
It will not store the string in each row; it only stores a number that references the enum member in the table definition, so the size is the same, but it's more readable than char(1) or your two-letter enum.
Editing the enum is not a problem, no matter how big your data is.
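For example (posts and status are placeholder names; appending a member at the end of the list avoids rewriting existing rows in current MySQL versions):
ALTER TABLE posts MODIFY status ENUM('Active', 'Passive', 'Hidden') NOT NULL;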
I would use a SET field (stored internally as a bitmap) for this, but without labelling the options specifically within the database. All the "labelling" would be done within your code, but it provides some very flexible options.
For example, you could create a SET containing eight "options", such as:
`column_name` SET('a', 'b', 'c', 'd', 'e', 'f', 'g', 'h') NOT NULL DEFAULT ''
Within your application, you can then define 'a' as denoting "Active" or "Passive", 'b' as denoting "Hidden", and the rest can be left undefined until you need them.
You can then use all sorts of useful bitwise operations on the field. For instance, you could extract all those which are "Hidden" by running:
WHERE `column_name` & 2 -- 'b' is the second SET member, so its bit value is 2
And all those which are "Active" AND "Hidden" by running:
WHERE (`column_name` & 1) AND (`column_name` & 2) -- 'a' is bit 1, 'b' is bit 2
You can even use the LIKE and FIND_IN_SET operators to do even more useful queries.
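For instance, the "Hidden" test above can also be written without bit arithmetic:
WHERE FIND_IN_SET('b', `column_name`)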
Read the MySQL documentation for further information:
http://dev.mysql.com/doc/refman/5.1/en/set.html
Hope it helps!
Dave
Hard to tell without knowing the semantics of your statuses, but to me "hidden" doesn't seem like an alternative to "active" or "passive"; you might want to have both "active hidden" and "passive hidden". This would degenerate further with each new non-exclusive "status". It would be better to implement your schema with boolean flags: one for the active/passive distinction, and one for the hidden/visible distinction. Queries also become more readable when the condition is "WHERE NOT hidden" or "WHERE active" instead of "WHERE status = 'A'".
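A sketch of the flag-based schema, with hypothetical names (MySQL syntax):
CREATE TABLE posts (
    post_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    active  BOOLEAN NOT NULL DEFAULT TRUE,
    hidden  BOOLEAN NOT NULL DEFAULT FALSE
);
-- "active hidden" and "passive hidden" are both representable:
SELECT * FROM posts WHERE active AND NOT hidden;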