I'm developing a home automation system using MySQL. I have some Arduinos connected through Ethernet shields and a Raspberry Pi that manages them using an MQTT server. This server handles the communication between all the devices (each Arduino is only connected to the Raspberry Pi, which processes the request and sends another request to the same or another Arduino).
Also, each arduino is identified by its MAC address.
I have an input (for reading switches and sensors) and an output (turning on and off lamps) system using the arduinos. Each value is stored in the input and output tables.
device
- id : CHAR(12) PK NOT NULL // The MAC Address
- type : VARCHAR(5) NOT NULL // I also manage a door lock system
input
- device : CHAR(12) NOT NULL // FK from device table
- selection : TINYINT NOT NULL // Selects input port
- value : INT // Stores the input value
The output table is very similar. Both tables have other fields not important to my question.
When someone presses a switch, a message is sent to the server; the server processes the request, updates the database, and sends other messages to other Arduinos according to a set of tables that manages triggers.
I started noticing some delay turning on the lamp, and after some debugging I found out that the majority of the time is spent on the database query.
Is it better if, instead of using the MAC address as the PK, I create another field (INT AUTO_INCREMENT)? Which engine is fastest or better suited for this situation?
PS: The server runs a long-running PHP script (it was the best language I knew at the time I started developing this, and I was using the web UI as a reference. I know that Python may be better for this case).
No, the difference between CHAR(12) and some size of INT cannot explain a performance problem. Sure, a 1-byte TINYINT UNSIGNED would probably be better, but not worth it for such a 'small' project.
Please provide SHOW CREATE TABLE and the queries, plus EXPLAIN SELECT for any slow queries.
The PRIMARY KEY is accessed via a BTree (see Wikipedia); it is very efficient, regardless of the size of the table, and regardless of the size of the column(s) in the PK.
Here's one reason why I insist on seeing the schema. If, for example, the CHAR is a different CHARACTER SET or different COLLATION on a pair of tables, the JOIN between the tables would not be able to use the index, thereby slowing down the query by orders of magnitude.
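If that does turn out to be the cause, here is a minimal sketch of the check and the fix, reusing the device and input table names from the question and picking utf8mb4 purely as an example target character set:
-- Check which character set / collation each side of the JOIN actually uses.
SHOW CREATE TABLE device;
SHOW CREATE TABLE input;
-- If they differ, convert one table so the JOIN column collations match
-- (example character set; use whichever one you actually standardize on).
ALTER TABLE input CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;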
From Primary Key Tutorial
Because MySQL works faster with integers, the data type of the primary key column should be an integer, e.g., INT or BIGINT. You can choose a smaller integer type: TINYINT, SMALLINT, etc. However, you should make sure that the range of values of the integer type for the primary key is sufficient for storing all possible rows that the table may have.
Without seeing your full schema for the entire database, it would be hard to give you a bunch of recommendations. But in my experience, I always like to just let my PK be an auto-increment integer. I would then make my MAC address an index (possibly unique) to make joining efficient.
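As a minimal sketch of that layout, reusing the column names from the question (everything else here, including the index names and the InnoDB choice, is just an illustrative assumption):
-- Sketch only: surrogate integer PK, MAC address kept unique as a secondary index.
CREATE TABLE device (
  id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
  mac  CHAR(12)     NOT NULL,                -- the MAC address from the question
  type VARCHAR(5)   NOT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY uk_device_mac (mac)
) ENGINE=InnoDB;

CREATE TABLE input (
  device_id INT UNSIGNED NOT NULL,           -- FK now points at the small integer key
  selection TINYINT      NOT NULL,
  value     INT,
  KEY idx_input_device (device_id),
  CONSTRAINT fk_input_device FOREIGN KEY (device_id) REFERENCES device (id)
) ENGINE=InnoDB;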
Related
I’m attempting to use a piece of software (Layer2 Cloud Connector) to sync a local SQL table (Sage software) to a remote MySQL database, where the data is used for reports generated via the company's web app. We are doing this with about 12 tables, and have been doing so for almost two years without any issues.
Background:
I’m using a simple piece of software that uses a SELECT statement to sync records from one table to another using ODBC, in this case from SQL (SQLTable) to MySQL (MySQLTable). To do so, the software requires a SELECT statement for each table, a PK field, and, being ODBC-based, a provider. For SQL I'm using Actian Zen 4.5, and for MySQL I'm using MySQL ODBC 5.3.
Here is a screenshot of what the setup screen looks like for each of the tables. I have omitted the other column names that I'm syncing to make the SELECT statement more readable. The other columns are primarily varchar or int types.
Problem
For unrelated reasons, we must now sync a new table. Like most of the other tables, it has a primary key column named rGUID of type binary. When initially setting up the other tables, I tried to sync the primary key as a binary type to a MySQL binary column, but it failed when attempting to verify the SELECT statement on the SQLServer side with the error “Cannot remove this column, because it is a part of the constraint Constraint1 on the table SQLTable”.
Example of what I see for the GUID/rGUID primary key values stored in the SQLTable via Access, or in MySQL after syncing as string:
¡狻➽䪏蚯㰛蓪
Ҝ諺䖷ᦶ肸邅
ब惈蠷䯧몰吲론�
ॺ䀙㚪䄔麽骧⸍薉
To get around this, I use CAST in the SQLTable SELECT statement to CAST the binary value as a string using: CAST(GUID as nchar(8)) as GUID, and then set up the MySQL column as a VARCHAR(32) using utf8_general_ci collation.
This has worked great for every other table since we originally set this up. But this additional table has considerably more records (about 120,000 versus 5,000-10,000), and though I’m able to sync 10,000 – 15,000 successfully, when I try to sync the entire table I get about 10-12 errors such as:
The metabase record 'd36d2dbe-fa89-4712-be4c-6b212367004b' is marked
to be added. The table 'SQLTable' does not contain a corresponding
row. Changes made to this metabase record will be reset to the
initial state.
I don't understand what is causing the above error or how to work past it.
What I’ve tried so far:
- I’ve confirmed the SQLTable has no other unique fields that could be used as a PK in place of the rGUID column.
- I’ve tried using different type, length and collation settings on the MySQL table, and have had mixed success, but ultimately still get errors when attempting to sync the entire table.
- I’ve also tried tweaking the CAST settings for the SQL SELECT statement, but nchar(8) seems to work best for the other tables.
- I've tried syncing using HASHBYTES('SHA1', GUID) as GUID and syncing the value of that, but get the below ODBC error.
- I was thinking perhaps I could convert the SQL GUID to its value, then sync that as a varchar (or a binary), but my attempts at using CONVERT in the SQLTable SELECT statement have failed.
Settings I used for all the other tables:
SQL SELECT Statement: SELECT CAST(GUID as nchar(8)) as GUID, OtherColumns FROM SQLTable;
MYSQL SELECT Statement: SELECT GUID, OtherColumns FROM MySQLTable;
Primary Key Field: GUID
Primary Key Field Type: String
MySQL Column Type/Collation: VARCHAR(32), utf8_general_ci
Any help or suggestions at all would be great. I've been troubleshooting this in my spare time for a couple of weeks now, and have not had much success. I'm not particularly familiar with the binary type, and am hoping someone might have an idea on how I might be able to successfully sync this SQL table to MySQL without these errors.
Given the small size of the datasets involved, I would select as CHAR(36) from SQL Server and store it in a CHAR(36) in MySQL.
If you are able to control the way the data is inserted by Layer2 Cloud Connector, then you could set your MySQLTable GUID column to BINARY(16):
SELECT CAST(GUID AS CHAR(36)) AS GUID, OtherColumns FROM SQLTable;
INSERT INTO MySQLTable (GUID) VALUES (UUID_TO_BIN(?)); -- '?' being the CHAR(36) value from the source
SELECT BIN_TO_UUID(GUID) AS GUID, OtherColumns FROM MySQLTable;
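As a rough sketch of the MySQL side under that approach, assuming MySQL 8.0+ (UUID_TO_BIN() and BIN_TO_UUID() only exist from 8.0) and keeping only the GUID column name from the question:
-- Sketch only: the 36-character UUID text is stored as 16 bytes.
CREATE TABLE MySQLTable (
  GUID BINARY(16) NOT NULL,
  -- OtherColumns ...
  PRIMARY KEY (GUID)
);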
I am using Redis to store refresh tokens as userId:refreshToken key/value pairs.
However, this method prevents one user from logging in on multiple devices.
So I am trying to change it to the format userId_accessToken:refreshToken.
However, with this method I have to delete and re-insert the entry whenever the access token or refresh token changes.
So I'm debating between two methods:
1. Save it in Redis as above; a TTL will be applied when the key is created.
2. Save it in the DB as [id, userId, refreshToken, accessToken, expDate]; in MySQL I would create a cron job that deletes rows after expDate (see the sketch after this list).
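A minimal sketch of that MySQL option, with illustrative table/column names and the built-in event scheduler standing in for an external cron (it must be enabled; on RDS that is controlled by the event_scheduler parameter):
-- Sketch only: illustrative names and sizes.
CREATE TABLE refresh_tokens (
  id            BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  user_id       BIGINT UNSIGNED NOT NULL,
  refresh_token VARCHAR(512)    NOT NULL,
  access_token  VARCHAR(512)    NOT NULL,
  exp_date      DATETIME        NOT NULL,
  PRIMARY KEY (id),
  KEY idx_tokens_user (user_id),
  KEY idx_tokens_exp (exp_date)
);

-- Periodic cleanup of expired rows (requires the event scheduler to be ON).
CREATE EVENT purge_expired_tokens
  ON SCHEDULE EVERY 1 HOUR
  DO DELETE FROM refresh_tokens WHERE exp_date < NOW();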
What's a better way?
Our server's memory is 3969424.
The database is MySQL on RDS.
If there's another good way, that's great too!
I would choose whichever is simpler to implement.
Another thought: you can use the MyRocks engine to automatically delete old rows (MyRocks TTL):
CREATE TABLE t1 (a INT, b INT, c INT, PRIMARY KEY (a), KEY(b)) ENGINE=ROCKSDB COMMENT "ttl_duration=3600;";
In the above example, we set ttl_duration to 3600, meaning that we expect rows older than 3600 seconds to be removed from the database.
While building my app, I came across a problem. I have some database tables with information I want to reuse for different applications, mainly for authentication and user privileges.
That is why I decided to split my database into two: one for user data (data I will need for other applications) and another for application-related data (data I will need only for this app).
In some cases, I need to reference a foreign key from one database on another database. I had no problem doing so while databases are in the same connection. I did it like so:
CREATE TABLE `database1`.`table1` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`foreign_key` int(10) unsigned DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `table1_foreign_key_foreign` (`foreign_key`),
CONSTRAINT `table1_foreign_key_foreign` FOREIGN KEY (`foreign_key`) REFERENCES `database2`.`table2` (`id`)
);
Now here is my problem. I am getting to know Docker and I would like to create a container for each database. If my understanding is correct, each container acts as a different connection.
Is it even possible to reference a foreign key on a different database connection?
Is there another way of referencing a foreign key from one Docker container in another?
Any suggestions or comments would be much appreciated.
Having a foreign key cross database boundaries is a bad idea for multiple reasons.
Scaling out: You are tying the databases to the same instance. Moving a database to a new instance becomes much more complicated, and you definitely do not want to end up with a FK constraint running over a linked server. Please, no. Don't.
Disaster Recovery: Your DR process has a significant risk. Are your backups capturing the data at the exact same point in time? If not, there is the risk that the related data will not match after a restore. Even a difference of a few seconds can invalidate the integrity of the relationship.
Different subsystems: Each database requires resources. Some are explicit, others are shared, but there is overhead for each database running in your instance.
Security: Each database has its own security implementation, with different logins and access permissions. If a user in your DATA database needs to look up a value against the USER database, you'll need to manage permissions in both. Segregating the data by database doesn't solve or enhance your security; it just makes it more complicated. The overhead of managing security for the sensitive data doesn't change: you'll still need to review and manage users and permissions based on the data (not the location of the data). You should be able to implement exactly the same security controls within a single database.
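For example, a rough sketch of that duplicated permission management in MySQL (the account and schema names below are placeholders, not from the question):
-- Placeholder names: the application account now needs grants in two schemas.
CREATE USER IF NOT EXISTS 'app_user'@'%' IDENTIFIED BY 'change-me';
GRANT SELECT, INSERT, UPDATE ON app_data.*  TO 'app_user'@'%';
GRANT SELECT                 ON user_data.* TO 'app_user'@'%';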
No, that is not possible. You cannot create a FK to a different DB instance (or another Docker container, in your case).
You may try to enforce this check at the application level.
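A minimal sketch of that application-level check, reusing the table names from the question (the id value is a placeholder, and without a real FK this is of course not enforced atomically):
-- On the connection to the container holding database2:
-- verify that the referenced row exists.
SELECT id FROM database2.table2 WHERE id = 42;

-- Only if the row was found, on the connection to the container holding database1:
INSERT INTO database1.table1 (foreign_key) VALUES (42);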
A ActiveRecord::UnknownPrimaryKey occurred in survey_response#create:
Unknown primary key for table question_responses in model QuestionResponse.
activerecord (3.2.8) lib/active_record/reflection.rb:366:in `primary_key'
Our application has been raising these exceptions and we do not know what is causing them. The exception happens in both production and test environments, but it is not reproducible in either. It seems to have some relation to server load, but even in times of peak loads some of the requests still complete successfully. The app (both production and test environments) is Rails 3.2.8, ruby 1.9.3-p194 using MySQL with the mysql2 gem. Production is Ubuntu and dev/test is OS X. The app is running under Phusion Passenger in production.
Here is a sample stack trace: https://gist.github.com/4068400
Here are the two models in question, the controller and the output of "desc question_responses;": https://gist.github.com/4b3667a6896b60383dc3
It most definitely has a primary key, which is a standard rails 'id' column.
Restarting the app server temporarily stops the exceptions from occurring; otherwise they happen over a period of 30 minutes to 6 hours, starting as suddenly as they stop.
It always occurs on the same controller action, table and model.
Has anyone else run into this exception?
FWIW, I was getting this same intermittent error and after a heck of a lot of head-scratching I found the cause.
We have separate DBs per client, and somehow one of the clients' DBs had a missing primary key on the users table. This meant that when that client accessed our site, Rails updated its in-memory schema to that of the database it had connected to, with the missing primary key. Any future requests served by that Passenger app process (or any others that had been 'infected' by this client) which tried to access the users table borked out with the primary key error, regardless of whether that particular database had a primary key.
In the end a fairly self-explanatory error, but difficult to pin down when you've got 500+ databases and only one of them was causing the problem, and it was intermittent.
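In a setup like that, one way to hunt for the offending schema is a query against information_schema (MySQL; 'users' here is just the table name from this story):
-- List schemas whose users table has no PRIMARY KEY constraint.
SELECT t.table_schema
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON  c.table_schema = t.table_schema
       AND c.table_name = t.table_name
       AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_name = 'users'
  AND c.constraint_name IS NULL;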
I got this problem because my workers used a shared connection to the database, but in my case it was under Unicorn.
I know that Passenger reconnects by default, but maybe you have some more complicated logic, for example connections to a number of databases. In that case you need to reconnect all of the connections.
This same thing happened to me. I had a composite primary key in one of my table definitions which caused the error. It was further compounded because annotate models did not (but will shortly / does now) support annotation of composite primary keys.
My solution was to make the id column the only primary key and add a constraint (not shown) for the composition. To do this you need to drop auto_increment on id if this is set, drop your composite primary key, then re-add both the primary status and autoincrement.
ALTER TABLE indices MODIFY id INT NOT NULL;
ALTER TABLE indices DROP PRIMARY KEY;
ALTER TABLE indices MODIFY id INT NOT NULL PRIMARY KEY AUTO_INCREMENT;
On a Postgres database:
ALTER TABLE indices ALTER COLUMN id SET DATA TYPE INT;
ALTER TABLE indices ADD PRIMARY KEY (id);
I have a production database with a few million rows, all using randomly generated GUIDs (a default value of NEWID()) as the primary key.
I am thinking of using the Sequential NewIDs going forward.
How will SQL Server know while generating the GUIDs in sequence that it did not already create that GUID when it was randomly generating using NEWID()?
It sounds like you're considering using NEWSEQUENTIALID() as the new default value.
Don't worry about the scenario of duplicates. The PK constraint on that column guarantees that a collision isn't going to happen. You aren't guaranteed that a new sequential GUID will be 'higher' anyway:
Creates a GUID that is greater than any GUID previously generated by this function on a specified computer since Windows was started. After restarting Windows, the GUID can start again from a lower range, but is still globally unique.
Some relevant information that may help you, from Microsoft Connect: NEWSEQUENTIALID() is Not Sequential.
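If you do make the change, a rough sketch of what it looks like (the table, column and constraint names are placeholders, not from your schema): drop the existing NEWID() default and add a NEWSEQUENTIALID() default in its place.
-- Placeholder names; run against the table whose default you want to change.
ALTER TABLE dbo.Orders DROP CONSTRAINT DF_Orders_Id;
ALTER TABLE dbo.Orders ADD CONSTRAINT DF_Orders_Id
  DEFAULT NEWSEQUENTIALID() FOR Id;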