Database schema design for handling multiple image resources - mysql

I have some tables in the db which have either one image or several images associated with them. For instance:
# table 1
- id
- name
- created_at
# table 2
- id
- name
- created_at
Now each of these tables has either one or many images. A typical design would be like this:
# table 1
- id
- name
- image_path
- created_at
# table 2
- id
- name
- created_at
# images table
- id
- table_2_id
- image_path
- created_at
However, I have several problems with this design:
I have many tables associated with one or more images.
Images are going to be uploaded to different hosts for storage capacity reasons.
More tables with the same needs might be added to my database later.
Depending on domain changes, some tables' image paths might change as well.
So now I want to deal with this problem: is a single shared images table like the following the right design choice, and is it going to be future proof?
# images
- id
- table_id
- table_name
- image_path
- created_at

You are looking at the problem in reverse. You need one table with all your images, and each table that needs an image will have a link to the images table.
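A minimal MySQL sketch of that direction, reusing the table names from the question (column types, and the junction table for the many-images case, are assumptions):

```sql
-- One central images table; the path (or full URL) can point at any storage host.
CREATE TABLE images (
    id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    image_path VARCHAR(255) NOT NULL,
    created_at DATETIME NOT NULL
);

-- A table that needs exactly one image links to it directly.
CREATE TABLE table_1 (
    id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name       VARCHAR(100) NOT NULL,
    image_id   INT UNSIGNED NULL,
    created_at DATETIME NOT NULL,
    FOREIGN KEY (image_id) REFERENCES images(id)
);

-- A table that needs many images links through a small junction table.
CREATE TABLE table_2 (
    id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name       VARCHAR(100) NOT NULL,
    created_at DATETIME NOT NULL
);

CREATE TABLE table_2_images (
    table_2_id INT UNSIGNED NOT NULL,
    image_id   INT UNSIGNED NOT NULL,
    PRIMARY KEY (table_2_id, image_id),
    FOREIGN KEY (table_2_id) REFERENCES table_2(id),
    FOREIGN KEY (image_id)   REFERENCES images(id)
);
```

This keeps the image rows ignorant of who uses them, so adding a new content table later only requires a new foreign key or junction table, not a change to the images table.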

You might consider doing a quick install of a popular open source CMS like WordPress or Drupal and adding a few images to see how they accomplish this. These CMSs may have thought through issues you have not considered.

Related

Preferable database design for job posts

I have two types of job posts -- company job posts and project job posts. The following would be the input fields (on the front-end) for each:
*company*
- company name
- company location
- company description
*project*
- project name
- project location
- project description
- project type
What would be the best database design for this -- one table, keeping all fields separate --
`job_post`
- company_name
- company_location
- company_description
- project_name
- project_description
- project_type
- project_location
One table combining the fields -
`job_post`
- name
- location
- description
- project_type
- is_company (i.e., whether it is a company (True) or a project (False))
Or two tables, or something else? Why would that approach be preferable to the others?
Depending on a lot of factors, including the maximum size of this job, I would normalize the data even further than two separate tables, perhaps having a company name table, etc., as joining several narrow tables can result in faster queries than one long, never-ending table full of all of your information. What if you want to add more fields to projects but not to companies?
In short, I would definitely use multiple tables.
You have identified 3 major objects in your OP: the job, the project, and the company. Each of these objects has its own attributes, none of which are associated with the other objects, so I would recommend something akin to the following (demonstrative only):
job
- id
- name
company
- id
- name
project
- id
- name
link_job_company_project
- job_id
- company_id
- project_id
This type of schema will allow you to add object specific attributes without affecting the other objects, yet combine them all together via the linking table into a 'job post'.
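A rough DDL sketch of that shape, keeping only the demonstrative id/name attributes from the answer (types are assumptions):

```sql
CREATE TABLE job (
    id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

CREATE TABLE company (
    id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

CREATE TABLE project (
    id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

-- The linking table is what combines the three objects into a 'job post'.
CREATE TABLE link_job_company_project (
    job_id     INT UNSIGNED NOT NULL,
    company_id INT UNSIGNED NOT NULL,
    project_id INT UNSIGNED NOT NULL,
    PRIMARY KEY (job_id, company_id, project_id),
    FOREIGN KEY (job_id)     REFERENCES job(id),
    FOREIGN KEY (company_id) REFERENCES company(id),
    FOREIGN KEY (project_id) REFERENCES project(id)
);
```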
This surely has to do with the volume of data stored in the table. In an abstract view, one table looks pretty simple. But, as Ryan says, what if requirements change tomorrow and more columns need to be added for one type, either company or project? If you predict that a large amount of data will be stored in the table in the future, I too would prefer two tables, which avoids unnecessarily filtering through a large amount of data, which is a performance overhead.

mysql match urls

I am inserting URLs into a MySQL table. For example, I have inserted 8 entries as below:
url
-----------------------------
http://example.com
http://www.example.com
http://example.com/
http://www.example.com/
http://example.com/sports
http://www.example.com/sports
http://example.com/sports/
http://www.example.com/sports/
Now how can I write a query to match example.com that returns the first 4 entries, since they are the same URL? Similarly, how do I write a query to get the last 4 entries, as they are the same? Even if I have a huge number of entries, the query should be fast. Is it possible?
Well, if you have those links in a single table, you could get them like:
SELECT * FROM `table` WHERE url LIKE '%example.com%'
Is this fast? NO - it will require a full table scan, since a LIKE pattern with a leading wildcard cannot use an index.
If I were you, I would model my DB to hold those URLs in 2 tables:
links
id
*base_url* - holds example.com
related_links
id
*link_id* - FK on links
subdomain - holds www.
*relative_url* - holds /sports/
Edit - to answer comment:
Your DB is not normalized right now. You hold multiple records for "the same thing", so you are not benefitting from the advantages of DBs. DBs are useful when working with structured data, but your query needs string operations - and pretty complex ones. So, while it would probably be possible to return the results you need and want with the current form of the DB, it won't be a trivial task, and performance would definitely suffer.
My recommendation: modify the DB. At the very least, add the columns subdomain and relative_url to your table and hold this information as separated as possible, so you can run aggregated queries on it.
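A sketch of that normalized shape and the kind of query it enables (column types, lengths, and the exact matching rules are assumptions):

```sql
-- Normalized model from the answer: one row per base URL, one row per variant.
CREATE TABLE links (
    id       INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    base_url VARCHAR(255) NOT NULL          -- holds 'example.com'
);

CREATE TABLE related_links (
    id           INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    link_id      INT UNSIGNED NOT NULL,
    subdomain    VARCHAR(63)  NULL,          -- holds 'www.' or NULL
    relative_url VARCHAR(255) NULL,          -- holds '/sports/' or NULL
    FOREIGN KEY (link_id) REFERENCES links(id)
);

-- "All variants of example.com/sports" becomes an indexable equality match
-- instead of a leading-wildcard LIKE:
SELECT rl.*
FROM links l
JOIN related_links rl ON rl.link_id = l.id
WHERE l.base_url = 'example.com'
  AND rl.relative_url IN ('/sports', '/sports/');
```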

MySQL design problem

I am trying to normalize my database but I'm having a headache getting to grips with it. I am developing a CMS where Facebook users can create a page on my site. So far this is what I have
page
----
uid - PK AI
slug - Slug URL
title - Page title
description - Page description
image - Page image
imageThumbnail - Thumbnail of image
owner - The ID of the user that created the page
views - Page views
timestamp - Date page was created
user
----
uid - PK AI
fbid - Facebook ID
(at a later date may add profile options i.e name, website etc)
tags
----
uid - PK AI
tag - String (tag name)
page_tag
--------
pid - Page id (uid from page table)
tid - Tag id (uid from tag table)
page_user
---------
pid - Page id (uid form page table)
uid - User ID (uid from user table)
I've tried to separate as much information as needed without going over the top. I created a separate table for tags because I don't want tag names being repeated. If the database holds 100,000+ pages, repeated tags would no doubt add to storage and hurt speed.
Is there any problems with the design? Or anything I'm doing wrong? I remember learning this at university but I've done very little database design since then.
I'd rather get it right the first time than have the headache later on.
Looks fine to me. How bad can it be with five tables?
You have users, pages, and tags. Users can have many pages; pages can be referred to by many users. A page can have many tags; a tag can be associated with many pages.
Sums it up for me. I wouldn't worry about it.
Your next concern is indexes. You'll want an index on every column you use in a WHERE clause when querying.
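For example, assuming the schema above and the most obvious lookups (slug, owner, tag, Facebook ID), the indexes might look like this -- illustrative only; the right set depends on your actual queries:

```sql
-- Look up a page by its slug and list a user's pages.
CREATE INDEX idx_page_slug  ON page (slug);
CREATE INDEX idx_page_owner ON page (owner);

-- Enforce the "no repeated tag names" rule and make tag lookups fast.
CREATE UNIQUE INDEX uq_tag_name ON tags (tag);

-- Find pages by tag and tags by page via the junction table.
CREATE INDEX idx_page_tag_tid ON page_tag (tid, pid);
CREATE INDEX idx_page_tag_pid ON page_tag (pid, tid);

-- Resolve a Facebook ID to the local user row.
CREATE UNIQUE INDEX uq_user_fbid ON user (fbid);
```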

Good table structure for collaborative article editor

I have an app that will allow an admin to upload an article and share it with many users to edit it. The article is then broken down into sentences which will be stored as individual rows in a MySQL DB. Each user can edit article sentences one at a time. How does one structure the database to allow admins to adjust the article sentences (merge, move, delete, edit, add) and still maintain the integrity of the user's relationship to the article sentences?
Here is the basic structure:
article_sentences
---------------
-id (auto_increment)
-article_id (FK)
-paragraph_id
-content
user_article_sentences
---------------
-user_id (FK)
-article_id (FK)
-article_sentence_id (FK)
-user_content
One problem I see is the change in article_sentence ID. If the admin moves a sentence around, the ID will need to change, along with the paragraph_id possibly changing if we want the article content to be in the correct order. To solve this, maybe we can add an article_sentence_order column? That way the ID never changes, but the order of the content is dictated by the article_sentence_order column.
What about merging and deleting? Those will cause some problems as well because fragmentation of the different IDs will start to happen.
Any ideas on a new schema design that will help solve these issues? How does an app like Google Docs deal with this type of issue?
Edit:
To solve the issue of moving different sentences around, we can use a new column called order_id, which can be either a varchar or an int. Some tradeoffs: if int, then I will have to increment the order_id of every subsequent sentence by 1. If varchar, the order_id can simply be something like '3a' if I want to insert between 3 and 4. The problem with this is that in my application code, using numeric indexes to traverse to the next and previous sentences will be a bit of a problem.
Are there other alternatives?
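A sketch of the int variant described above, showing the shift-then-insert cost it implies (all specific values are hypothetical):

```sql
-- article_sentences with the proposed order_id column (int variant).
CREATE TABLE article_sentences (
    id           INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    article_id   INT UNSIGNED NOT NULL,
    paragraph_id INT UNSIGNED NOT NULL,
    order_id     INT UNSIGNED NOT NULL,
    content      TEXT NOT NULL
);

-- Insert a new sentence at position 4 in article 7:
-- first make room by shifting later sentences down by one...
UPDATE article_sentences
SET order_id = order_id + 1
WHERE article_id = 7 AND order_id >= 4;

-- ...then insert the new sentence into the gap.
INSERT INTO article_sentences (article_id, paragraph_id, order_id, content)
VALUES (7, 2, 4, 'New sentence text.');

-- Reading the article back in order never depends on the primary key:
SELECT content FROM article_sentences
WHERE article_id = 7
ORDER BY order_id;
```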
What about holding only full versions of the content, with a version number for each record, so you have a complete history of the article's edits and who made them?
User:
- id
- name
User_article:
- id
- user_id (fk on user, this is the current editor)
- article_id
- version_number
- article_content (the full content of the article)
Article:
- id
- created_date
- user_id (the creator, or main owner )
- category_id
This way, it is very easy to revert the article content to a previous point in history, to see which user made which modifications, etc.
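A minimal DDL sketch of that versioning model, following the columns listed above (types are assumptions):

```sql
CREATE TABLE user (
    id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

CREATE TABLE article (
    id           INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    created_date DATETIME NOT NULL,
    user_id      INT UNSIGNED NOT NULL,   -- the creator / main owner
    category_id  INT UNSIGNED NULL,
    FOREIGN KEY (user_id) REFERENCES user(id)
);

-- One row per saved revision: who edited, which version, and the full content.
CREATE TABLE user_article (
    id              INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    user_id         INT UNSIGNED NOT NULL,   -- the editor of this revision
    article_id      INT UNSIGNED NOT NULL,
    version_number  INT UNSIGNED NOT NULL,
    article_content MEDIUMTEXT NOT NULL,
    UNIQUE KEY uq_article_version (article_id, version_number),
    FOREIGN KEY (user_id)    REFERENCES user(id),
    FOREIGN KEY (article_id) REFERENCES article(id)
);

-- Reverting is just reading back the content of an earlier version:
SELECT article_content
FROM user_article
WHERE article_id = 7 AND version_number = 3;
```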

Mysql tables with duplicate attributes

I'm designing a simple dealership site that involves several features. Users can sign on and make posts for cars.
Posts can either be for New Cars/ Used Cars:
The `new_posts` table has the following fields:
- id
- title
- price_from
- price_to
- date_submitted
The `used_posts` table has the following fields:
- id
- title
- price_from
- price_to
- year_from
- year_to
- date_submitted
Notice how there is duplication of the attributes. I run into issues like this often and wanted to know the best way to deal with it. I have average knowledge of database normalization, but I can use any help I can get.
There are many options, but two core ideas:
Merge the tables into one and have the fields for the used car be optional.
Extract the fields which make up a vehicle, and that's your base table. Then you could create other tables - truck, van, SUV, new, used - that contain their type-specific fields. You'd then need bridge tables to join them back to your base vehicle table.
The first option is easy to implement, but difficult to scale. The second is more complex, but scales more easily.
Personally, I'd merge the two tables. It may not impress any DBAs, but it's practical from an application perspective.
How about something like this?
posts
- id
- title
- price_from
- price_to
- year_from (nullable)
- year_to (nullable)
- date_submitted
- is_used (yes/no)
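In MySQL DDL, that merged table might look roughly like this (column types and lengths are assumptions):

```sql
CREATE TABLE posts (
    id             INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    title          VARCHAR(255) NOT NULL,
    price_from     DECIMAL(10,2) NOT NULL,
    price_to       DECIMAL(10,2) NOT NULL,
    year_from      SMALLINT UNSIGNED NULL,       -- only meaningful for used cars
    year_to        SMALLINT UNSIGNED NULL,       -- only meaningful for used cars
    date_submitted DATETIME NOT NULL,
    is_used        TINYINT(1) NOT NULL DEFAULT 0 -- 0 = new, 1 = used
);
```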