I'm working on a database structure for a big project, and I'm wondering what method to use for the logs table.
I'm using Laravel 5.* with Eloquent.
This table will contain user_id, User-Agent, IP, DNS, language, and so on.
Method A :
LOGS_TABLE :
| Id | user_id | dns            | ip      | user_agent ... |
|----|---------|----------------|---------|----------------|
| 1  | 5       | dns.google.com | 8.8.8.8 | firefox.* ...  |
Method B :
LOGS TABLE :
| Id | dns_id | ip_id | user_agent_id |
|----|--------|-------|---------------|
| 1  | 1      | 1     | 1             |
IP TABLE:
| Id | value |
|----|---------|
| 1 | 8.8.8.8 |
The problem is, there are about 10 fields like this, and I'm afraid that all the joins will slow down the queries.
Why do we save all the logs?
Our tool provides a complete, premium IP filtering service. The purpose is to let our customers filter their advertised traffic and choose exactly who sees their website.
The main purpose is to choose exactly which page they want to send Facebook to, while advertising on Facebook for example.
All of the service's traffic comes from visitors clicking our customers' ads.
Technically we just do a 301 redirect to the right page and log the user data in our database.
Thanks for your help.
What do you want to achieve with the log database? If it is just inserting data, I would go for the denormalized table (Method A).
If you also want to select data on every request, both options will slow your application down. You could take a look at a NoSQL database.
Partitioning
Another option can be to use partitioning, see: https://laracasts.com/discuss/channels/eloquent/partition-table
In this case you can work with a checksum of the unique data and store the corresponding data in a table chosen by a prefix of that checksum.
For example: $checksum = 'pre03k3I03fsk34jks354jks35m..'; — store it in table logs_p or logs_pr.
Do not forget to put an index on the checksum column.
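A minimal sketch of that checksum-and-prefix routing, in Python rather than PHP for brevity; the bucket count and the logs_p* table names are my assumptions, not part of the answer above:

```python
import hashlib

def partition_table(ip: str, user_agent: str, buckets: int = 16) -> str:
    """Route a log row to a partition table based on a checksum of its fields."""
    checksum = hashlib.md5(f"{ip}|{user_agent}".encode()).hexdigest()
    # Reduce the checksum to a bucket number, giving tables logs_p0 .. logs_p15.
    bucket = int(checksum, 16) % buckets
    return f"logs_p{bucket}"

print(partition_table("8.8.8.8", "firefox"))  # prints one of logs_p0 .. logs_p15
```

The same input always lands in the same table, so lookups by checksum only ever touch one partition.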
Related
I have an issue with creating a database table for users who log in with different access levels to the system.
I have 3 user roles: ENUM('master_admin', 'admin_country', 'admin_city').
If master_admin logs in, he will have access to the whole system;
if admin_country logs in, he will have access to his country only, via countryID;
and if admin_city logs in, he will have access to his city's data only, via cityID.
The problem is creating a users table that will save the info of the different
administrators so they can access their related part of the system.
Later, when other admins are created to cover other parts of the system, it should be easy to set them up using the same users table.
I tried this:
users table
+--------+-----------+-----------+
| userID | countryID | user_role |
+--------+-----------+-----------+
| 1      | 23        | master    |
+--------+-----------+-----------+
countries table
+-----------+-------------+
| countryID | countryName |
+-----------+-------------+
| 23 | US |
+-----------+-------------+
cities table
+-----------+-------------+-------------+
| cityID | countryID | cityName |
+-----------+-------------+-------------+
| 2 | 23 | New York |
+-----------+-------------+-------------+
How can I set up my users table to solve this problem?
Split the users table
users table
+--------+-----------+-----------------+
| userID | username | email |
+--------+-----------+-----------------+
| 1 | adm | master#master |
+--------+-----------+-----------------+
users_Role table
+--------+-----------+------------+-------------+
| userID | countryID | cityID | roleID |
+--------+-----------+------------+-------------+
| 1 | 23 | NULL | 1 |
+--------+-----------+------------+-------------+
| 1 | 22 | NULL | 2 |
+--------+-----------+------------+-------------+
| 1 | NULL | 2 | 3 |
+--------+-----------+------------+-------------+
role table
+-----------+-----------------+
| roleID | roleName |
+-----------+-----------------+
| 1 | Master |
+-----------+-----------------+
| 2 | COUNTRYMASTER |
+-----------+-----------------+
| 3 | CityMaster |
+-----------+-----------------+
Of course you could merge the cityID/countryID columns into one, as the role defines what type of ID is saved.
That way you can grant or remove individual rights for every user, per country and/or city.
users_Role holds redundant information, so a separate role table is necessary for normalization.
You didn't really try all that much, which means your question is reasonably vague. Whenever you design a system with varying levels of access, you need to make very sure that access can't be granted accidentally. Even a bug shouldn't make this possible.
A bad way to do this would be to create a cityId in the users table. If there's a number there, say 2, the user only has access to data for New York; if it is 3, only Washington, etc. If the value is zero, the user has access to all cities. Choosing zero seems to make sense here, but it is dangerous, because a bug in setting the cityId could set it to zero and grant access that shouldn't be granted.
The normal way to do this is to make a separate table which very explicitly grants access. You could call this table permission. Each user can have multiple permissions. You could define a level in it: 'master', 'country' and 'city'. This tells you what kind of access someone has. Other fields could specify exactly which country or city.
Whenever the user accesses a resource, you have to check it against the permissions that user has. Access is only granted when the answer is positive. You have to write your software in such a way that forgetting to check the permission would break the functionality of the software.
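A deny-by-default check over such a permission table might look like this; the row layout and field names here are illustrative assumptions, not a prescribed schema:

```python
# Hypothetical rows mirroring a `permission` table:
# level is one of 'master', 'country', 'city'.
PERMISSIONS = [
    {"user_id": 1, "level": "master",  "country_id": None, "city_id": None},
    {"user_id": 2, "level": "country", "country_id": 23,   "city_id": None},
    {"user_id": 3, "level": "city",    "country_id": 23,   "city_id": 2},
]

def can_access(user_id, country_id, city_id):
    """Deny by default; grant only on an explicit matching permission row."""
    for p in PERMISSIONS:
        if p["user_id"] != user_id:
            continue
        if p["level"] == "master":
            return True
        if p["level"] == "country" and p["country_id"] == country_id:
            return True
        if p["level"] == "city" and p["city_id"] == city_id:
            return True
    return False  # no explicit grant means no access

print(can_access(2, 23, 2))  # True: country-level grant for country 23
print(can_access(3, 23, 5))  # False: the city grant covers city 2 only
```

Note that a missing or buggy permission row can only ever remove access, never grant it.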
I would also log every access, and every change made to the permission table. It might surprise you how often you will have to play detective and find out exactly who did what when.
No matter what you do, this will never be as secure as it could be. There's always a chance a user can access something they shouldn't. It could be due to a bug, or a mistake by an administrator. The only way to have real security is to actually put cities and countries in different databases, and let users only exist in the database to which they are allowed to have access. Security and practicality are often enemies.
I'm building an app in Laravel and have a design question regarding my MySQL db.
Currently I have a table which defines the skills for all the default characters in my game. Because the traits are pulled from a pool of skills, and have a variable number, one of my tables looks something like this:
+----+--------+---------+-----------+
| ID | CharID | SkillID | SkillScore|
+----+--------+---------+-----------+
| 1 | 1 | 15 | 200 |
| 2 | 1 | 16 | 205 |
| 3 | 1 | 12 | 193 |
| 4 | 2 | 15 | 180 |
+----+--------+---------+-----------+
Note the variable number of rows for any given CharID. With my Base Characters entered, I'm at just over 300 rows.
My issue is storing users' copies of their (customized) characters. I don't think storing 300+ rows per user makes sense. Should I store this data as a JSON blob in another table? Should I be looking at a NoSQL solution like Mongo? I appreciate the guidance.
NB: The entire app centers around using the character's different skills. Mostly reporting from them, but users will also be able to update their SkillScore (likely a few times a week).
P.S. Should I consider breaking each character out into its own table and tracking users' characters that way? Users won't be able to add/remove skills from characters, only update them.
TIA.
Your pivot table looks good to me.
I'd consider dropping the ID column (unless you need it), and using a composite primary key:
PRIMARY KEY (CharID, SkillID)
Primary keys are indexed so you will get efficient lookups.
As for your other suggestions, if you store this in a JSON column, you'll lose the ability to perform joins, and will therefore end up executing more queries.
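As a sketch of the composite-key idea (SQLite syntax here for a self-contained demo; the table name char_skill is my placeholder):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE char_skill (
        CharID     INTEGER NOT NULL,
        SkillID    INTEGER NOT NULL,
        SkillScore INTEGER NOT NULL,
        PRIMARY KEY (CharID, SkillID)  -- composite key replaces the ID column
    )
""")
conn.execute("INSERT INTO char_skill VALUES (1, 15, 200)")
# The composite key rejects a duplicate (CharID, SkillID) pair:
try:
    conn.execute("INSERT INTO char_skill VALUES (1, 15, 999)")
except sqlite3.IntegrityError:
    print("duplicate (CharID, SkillID) rejected")
```

Lookups such as "all skills for character 1" use the leftmost column of that key, so they stay indexed without the extra ID column.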
I'm making a friends-list Chrome extension for an online browser game I play. One feature is that friends should be able to chat with one another. The database I'm using is Firebase, which stores its data in a JSON tree format.
My database has this structure:
Users
|
|_USER1
| |
| |__FRIENDS
|
|_USER2
|
|__FRIENDS
I'm trying to figure out what would be the best way to store chats as part of this database. The option I'm leaning towards right now would just keep a copy of the two users chats in both their section of the Users directory, looking like this:
Users
|
|_USER1
| |
| |__FRIENDS
| |
| |__CHATS
| |
| |__chat w/USER2
|
|_USER2
|
|__FRIENDS
|
|__CHATS
|
|__chat w/USER1
This would mean that on each message send I'd have to update two objects, one in each user's section. Note that since the tree is formatted as key/value pairs, in the CHATS section of each user the keys would be the other user's name, while the value would be the list of messages sent.
Is this a decent way of organizing such a database? The game is pretty small so I'm not expecting huge traffic.
When it comes to the Firebase Database (and most NoSQL data stores), it's often best to flatten your data.
Users
|
|_USER1
| |
| |__FRIENDS
|
|_USER2
|
|__FRIENDS
UserChats
|
|_USER1
| |
| |__chat w/USER2
|
|_USER2
|
|__chat w/USER1
This way you can look up the user's friend list without having to load their list of chats.
Also look at this answer about a convenient scheme for constructing 1:1 chat room identifiers: Best way to manage Chat channels in Firebase
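That scheme boils down to deriving one deterministic room key from the two user IDs, so both users compute the same key no matter who opens the chat. A sketch (the IDs and the underscore separator are placeholders):

```python
def chat_room_id(uid_a: str, uid_b: str) -> str:
    # Sort the two UIDs so (A, B) and (B, A) map to the same room key.
    first, second = sorted([uid_a, uid_b])
    return f"{first}_{second}"

print(chat_room_id("USER2", "USER1"))  # USER1_USER2
print(chat_room_id("USER1", "USER2"))  # USER1_USER2
```

Messages then live under a single Chats/{room_id} node instead of being duplicated per user.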
I need to create a large scale DB Model for a web application that will be multilingual.
One doubt I have every time I think about how to do it is how to handle multiple translations for a field. An example case:
The table for language levels, which administrators can edit from the backend, can have multiple items like basic, advance, fluent, mattern... and in the near future there will probably be one more type. The admin goes to the backend and adds a new level, and it gets sorted into the right position... but how do I handle all the translations for the end users?
Another problem with internationalizing a database is that things like user studies can differ from the USA to the UK to DE... every country has its own levels (each probably equivalent to another, but ultimately different). And what about billing?
How do you model this at a large scale?
Here is the way I would design the database:
Visualization by DB Designer Fork
The i18n table only contains a PK, so that any table just has to reference this PK to internationalize a field. The table translation is then in charge of linking this generic ID with the correct list of translations.
locale.id_locale is a VARCHAR(5) to manage both of en and en_US ISO syntaxes.
currency.id_currency is a CHAR(3) to manage the ISO 4217 syntax.
You can find two examples: page and newsletter. Both of these admin-managed entities need to internationalize their fields, respectively title/description and subject/content.
Here is an example query:
select
t_subject.tx_translation as subject,
t_content.tx_translation as content
from newsletter n
-- join for subject
inner join translation t_subject
on t_subject.id_i18n = n.i18n_subject
-- join for content
inner join translation t_content
on t_content.id_i18n = n.i18n_content
inner join locale l
-- condition for subject
on l.id_locale = t_subject.id_locale
-- condition for content
and l.id_locale = t_content.id_locale
-- locale condition
where l.id_locale = 'en_GB'
-- other conditions
and n.id_newsletter = 1
Note that this is a normalized data model. If you have a huge dataset, maybe you could think about denormalizing it to optimize your queries. You can also play with indexes to improve the queries performance (in some DB, foreign keys are automatically indexed, e.g. MySQL/InnoDB).
Some previous StackOverflow questions on this topic:
What are best practices for multi-language database design?
What's the best database structure to keep multilingual data?
Schema for a multilanguage database
How to use multilanguage database schema with ORM?
Some useful external resources:
Creating multilingual websites: Database Design
Multilanguage database design approach
Propel Gets I18n Behavior, And Why It Matters
The best approach often is: for every existing table, create a new table into which the text items are moved; the PK of the new table is the PK of the old table together with the language.
In your case:
The table for language levels, which administrators can edit from the backend, can have multiple items like basic, advance, fluent, mattern... and in the near future there will probably be one more type. The admin goes to the backend and adds a new level, and it gets sorted into the right position... but how do I handle all the translations for the end users?
Your existing table probably looks something like this:
+----+-------+---------+
| id | price | type |
+----+-------+---------+
| 1 | 299 | basic |
| 2 | 299 | advance |
| 3 | 399 | fluent |
| 4 | 0 | mattern |
+----+-------+---------+
It then becomes two tables:
+----+-------+ +----+------+-------------+
| id | price | | id | lang | type |
+----+-------+ +----+------+-------------+
| 1 | 299 | | 1 | en | basic |
| 2 | 299 | | 2 | en | advance |
| 3 | 399 | | 3 | en | fluent |
| 4 | 0 | | 4 | en | mattern |
+----+-------+ | 1 | fr | élémentaire |
| 2 | fr | avancé |
| 3 | fr | couramment |
: : : :
+----+------+-------------+
Another problem with internationalizing a database is that things like user studies can differ from the USA to the UK to DE... every country has its own levels (each probably equivalent to another, but ultimately different). And what about billing?
All localisation can occur through a similar approach. Instead of just moving text fields to the new table, you could move any localisable fields - only those which are common to all locales will remain in the original table.
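A minimal runnable version of that split, using SQLite for the demo; the table names level and level_text are my placeholders for the "old" and "new" tables above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Locale-independent fields stay in the original table.
    CREATE TABLE level (id INTEGER PRIMARY KEY, price INTEGER);
    -- Localisable fields move out; the PK is the old PK plus the language.
    CREATE TABLE level_text (
        id   INTEGER,
        lang TEXT,
        type TEXT,
        PRIMARY KEY (id, lang)
    );
    INSERT INTO level VALUES (1, 299), (2, 299), (3, 399);
    INSERT INTO level_text VALUES
        (1, 'en', 'basic'), (2, 'en', 'advance'), (3, 'en', 'fluent'),
        (1, 'fr', 'élémentaire'), (2, 'fr', 'avancé'), (3, 'fr', 'couramment');
""")
# Fetch the French labels together with the locale-independent price:
rows = conn.execute("""
    SELECT l.id, l.price, t.type
    FROM level l JOIN level_text t ON t.id = l.id
    WHERE t.lang = 'fr' ORDER BY l.id
""").fetchall()
print(rows)  # [(1, 299, 'élémentaire'), (2, 299, 'avancé'), (3, 399, 'couramment')]
```

Adding a new locale is then a matter of inserting rows into level_text; no schema change is needed.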
Is there an easy way to backup and restore partial data from a mysql database while maintaining the FK constraints?
Say if I have 2 tables
| CustomerId | CustomerName |
-----------------------------
| 12 | Bon Jovi |
| 13 | Seal |
and
| AddressId| CustomerId | City |
---------------------------------------
| 1 | 12 | London |
| 2 | 13 | Paris |
The backup would only take customer 12 and address 1.
My goal is to take a large database from a production server and replicate it locally, but with partial data.
Due to the fairly complicated schema, a custom query is not an option. Also, I can't rely on the existence of a main table from which one could get all the related rows.
Thanks
You could replicate the specific customers manually; with an FK constraint on the address table, replication will then fail to insert/update address records whose customer is missing.
To replicate only specified tables in the db, see http://dev.mysql.com/doc/refman/5.1/en/replication-options-slave.html#option_mysqld_replicate-do-table .
Use this parameter to silently skip errors during replication: http://dev.mysql.com/doc/refman/5.1/en/replication-options-slave.html#sysvar_slave_skip_errors .
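Put together, the slave's my.cnf fragment might look like this (option names come from the linked docs; the database and table names are placeholders, and 1062/1452 are MySQL's duplicate-key and FK-violation error codes):

```ini
[mysqld]
# Replicate only the customer and address tables (placeholder names):
replicate-do-table = mydb.customer
replicate-do-table = mydb.address
# Silently skip duplicate-key and FK-violation errors:
slave-skip-errors = 1062,1452
```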