MySQL - Storing an Array and Doing a Search Based on Value

I have an Activity Feed module where I store activity that happens in a company with multiple members. Activities are usually specific to users based on their role, so I want to store the users an activity should be shown to inside a table column. That way, while fetching activities, I do not have to query the users associated with each activity, which would slow down retrieval.
Since MySQL does not have an array data type, what is the best way to do this?
Right now I have:
activities(id, category, action, date)
activity_users(id, activity_id, user_id)
Is it easier to query activities for a particular set of users with this schema, or should I instead add a users column to the activities table, store an array like [1,3,5,8], and query that?

Yes, you can use serialize() / unserialize(), but you can't run queries against that field; if you need anything specific from the serialized data, you'll have to fetch it all, unserialize it, and then search in code. Alternatively, you can store the data comma-separated (which allows some queries) or create a separate field for each piece of data.

If your code is in PHP you can use json_encode()/json_decode() or serialize()/unserialize() and write the string as a normal column value. But it will be hard to build the right condition to select this kind of data; if you need to find something you'd have to use LIKE '%...%', which does not always help. I suggest you create separate fields for each kind of data.
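For comparison, with the junction table you already have, the lookup stays a plain indexed join, whereas a serialized users column forces string matching. A sketch against the schema above:

```sql
-- All activities visible to user 5, via the activity_users junction table.
SELECT a.id, a.category, a.action, a.date
FROM activities a
JOIN activity_users au ON au.activity_id = a.id
WHERE au.user_id = 5
ORDER BY a.date DESC;

-- With a comma-separated users column ('1,3,5,8'), the equivalent lookup
-- needs FIND_IN_SET(), which cannot use an index:
-- SELECT * FROM activities WHERE FIND_IN_SET('5', users);
```

An index on activity_users(user_id, activity_id) keeps the junction lookup fast even with many rows.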

Related

What's the best way to store multiple locations for users and query them back out?

I have an assignment where I need to make a table for users that houses basic information like fname, lname, email, etc., but I also need to store information about each user's location: address, zip, city, state, etc. A user can have multiple locations.
I've just been using a CSV column, but people are telling me that's pretty bad, so I'm trying to do things the right way.
I was learning about many-to-many relationships, and that seems to do the trick. But the problem is I need to load this data into a table for viewing.
I just don't see any way this would work without doing a query inside the first query, i.e.:
while ($row = mysqli_fetch_assoc($query)) {
    $id = $row['id'];
    // Get user locations based on id here: a second query per row.
}
From what I understand, if we store the user ID in a table to relate it to the locations table, and a user can have multiple locations, would a join be useless here?
I need to pull records 25 at a time, so it's not supposed to pull only one. I'm using DataTables with collapse/show, so the location data needs to be in a separate container.
The approach of storing locations in a separate table with a user_id column is good; since each user is unique (say, by email), that user's locations can be fetched by their id.
Alternatively, you can store multiple locations in one column as a JSON object and decode the JSON to get each individual location.
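A join is not useless here: one query can return 25 users together with all their locations, and the application then groups the rows by user id. A sketch, assuming hypothetical users and user_locations tables:

```sql
-- One row per (user, location) pair; group by u.id in application code.
-- LIMIT goes in the subquery so it caps users, not joined rows.
SELECT u.id, u.fname, u.lname, u.email,
       l.address, l.city, l.state, l.zip
FROM (SELECT * FROM users ORDER BY id LIMIT 25) u
LEFT JOIN user_locations l ON l.user_id = u.id
ORDER BY u.id;
```

This replaces the per-row query in the while loop with a single round trip to the database.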

Database design and layout

I want to revisit a project I made to store user data in a database and improve the way the data is stored. I previously went the hard way about it and stored user data as JSON inside a MySQL column, which makes CRUD actions difficult. The reason I did this was to keep all of a user's data within that user's field, and I was fairly new to this.
I didn't want to store the data mixed with other users' data, and I thought there might be issues as the number of users increased. For example:
if I had 1000 users with 500 rows of data each, that's 500,000 rows to sort through when reading the data and displaying it on a web page. Is there a risk of mixing the data up, or of performance issues?
I basically just want a user table that stores the user's id, name, and credentials, and another table that stores data from a user's activity (a run): at least five fields per event, such as time, location, date, and duration. This will be saved for different events (runs), which could number in the hundreds over time.
My question is: should I design the tables as above, or would it be better to have a table for each user? Or are there other options I have not explored?
Given the information shared, I believe the design below may be suitable.
Create a table called user_details with columns id (auto increment), user_id, name, and credentials.
Then create a user_activity table with columns id, user_id, event_name, and data (a JSON field).
Explanation:
The user_activity table stores the event data related to each user, linked to user_details through the user_id field. The data column, being a JSON field, lets you store all the fields for an event; it will accept any number of fields per event, structured or not, which you can then map in your middle layer as required.
Also, if you have a finite number of events, you can create a table called user_event_types with columns id and event_name, and have user_activity reference that id instead of storing the event name.
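A minimal sketch of that layout (column types are assumptions; MySQL 5.7+ has a native JSON type):

```sql
CREATE TABLE user_details (
    id          INT AUTO_INCREMENT PRIMARY KEY,
    user_id     VARCHAR(64)  NOT NULL UNIQUE,
    name        VARCHAR(255) NOT NULL,
    credentials VARCHAR(255) NOT NULL
);

CREATE TABLE user_activity (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    user_id    VARCHAR(64)  NOT NULL,
    event_name VARCHAR(100) NOT NULL,
    data       JSON,  -- e.g. {"time": "...", "location": "...", "duration": "..."}
    FOREIGN KEY (user_id) REFERENCES user_details (user_id)
);

-- Fetch all runs for one user; JSON_EXTRACT pulls individual fields out.
SELECT event_name, JSON_EXTRACT(data, '$.duration') AS duration
FROM user_activity
WHERE user_id = 'u123' AND event_name = 'run';
```

With this layout, 1000 users and 500 events each is still a single indexed table, and the WHERE user_id clause keeps each user's data cleanly separated.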

How to cache user likes and favourites using Redis?

Suppose I need to cache user likes, since many users could hit the like or dislike button in a short period of time.
My idea is to create a MySQL table with the fields user_id, liked_object_id, liked_object_type, and create_time, with (user_id, liked_object_id, liked_object_type) as a composite key. When a user hits the like button, a record is inserted into the table; when a user cancels the like, the row is deleted. This MySQL table is for persisting the data.
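That persistence table could look like this (column types are assumptions):

```sql
CREATE TABLE likes (
    user_id           BIGINT      NOT NULL,
    liked_object_id   BIGINT      NOT NULL,
    liked_object_type VARCHAR(32) NOT NULL,
    create_time       DATETIME    NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (user_id, liked_object_id, liked_object_type)
);

-- A secondary index supports the reverse lookup
-- "all users who liked a certain object", mentioned below.
CREATE INDEX idx_likes_object ON likes (liked_object_type, liked_object_id);
```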
Then Redis comes in: when a user hits the like button, I first add a hash object to Redis and put that hash's key into a set called "cached_keys", so I can grab the hash objects easily. If the user cancels the like, the key of the cancelled hash is moved from "cached_keys" to another set called "deleted_keys", which holds the keys of all cancelled likes. I do this so I can track which likes were added or deleted when I write the Redis data back to the MySQL table after a certain period of time. Apart from that periodic persistence to MySQL, I only query and write Redis.
There are other problems, like querying all likes of a certain object type by one user. To do this, I need to create many Redis sets, namespaced by a combination of object_type and user_id, store all the corresponding object_ids in each set, and keep them in sync with the other sets such as "cached_keys" and "deleted_keys". A similar case is querying all users who liked a certain object.
Another problem is join queries. Since the data is cached in Redis, it's difficult to do paged join queries against the MySQL likes table; for example, to get a list of objects sorted by like count, I have to load all the objects from MySQL into memory, get the like counts from Redis, sort them, and then page the results. And if I want to query objects of one type that one user has liked, I have to get all the keys from Redis and use an IN clause.
Doing it like this doesn't feel right. Every time persistence happens, it writes all the records to the database at once, which doesn't seem like it would scale well, and neither do the join queries.
How could I improve this, or should I do it in a completely different way? Thanks a lot.

SQL query not returning non-unique values in table

I have a MySQL database for an investor to track his investments:
The 'deal' table has info about the investments, including different categories for each investment (asset_class).
Another table ('updates') tracks updates on a specific investment (investment name, date, and lots of financial details).
I want to write a query that allows the user to select all updates from 'updates' under a specific asset_class. However, as mentioned, asset_class is in the 'deal' table. I wrote the following query:
SELECT *
FROM updates
WHERE updates.invest_name IN (SELECT deal.deal_name
FROM deal
WHERE deal.asset_class = '$asset_class'
);
I'm using PHP, so $asset_class is the selected variable of asset_class.
However, the query only returns unique update names, but I want to see ALL updates for the given asset_class, even if several updates are made under one investment name.
Any advice? Thanks!
Your query should do what you intend. In general, though, this type of query would be written using a JOIN. More importantly, use parameter placeholders instead of munging query strings:
SELECT u.*
FROM updates u
JOIN deal d ON u.invest_name = d.deal_name
WHERE d.asset_class = ?;
This can take advantage of indexes on deal(asset_class, deal_name) and updates(invest_name).
The ? represents a parameter that you pass into the query when you run it. The exact syntax depends on how you are making the call.
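For completeness, those indexes could be created like this (the index names are illustrative):

```sql
CREATE INDEX idx_deal_class_name  ON deal (asset_class, deal_name);
CREATE INDEX idx_updates_invest   ON updates (invest_name);
```

With the first index, MySQL can resolve both the WHERE clause and the join key from the index alone.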

Joining a table stored within a column of the results

I want to try and keep this as one query and not use PHP, but it's proving to be tough.
I have a table called applications, that stores all the applications and some basic information about them.
Then, I have a table with all the types of applications in it, and that table contains a reference to another table which stores more specific data about the specific type of application in question.
select applications.id as appid, applications.category, type.title as type, type.id as tid, type.valuefld, type.tablename
from applications
left join type on applications.typeid=type.id
left join department on type.deptid=department.id
where not isnull(work_cat)
and work_cat != ''
and applications.deleted=0
and datei between '10-04-14' and '11-04-14'
order by type, work_cat
Now, in the old version, there is another query for every single result, over hundreds of results... that sucks.
This is the query I'd like to integrate so I can get all the data in one result row. (Old is ASP, I'm re-writing it in PHP)
query = "select sum("&adors.fields("valuefld")&") as cost, description from "&adors.fields("tablename")&" where appid = '"&adors.fields("tablename")&"'"
Prepared statements, I'm aware, are the best solution, but for now they are not an option.
You can't do this with a plain SQL query - you need a defined set of tables for your query to be based on. Because your current implementation queries whatever table is named by tablename in the first result set, getting this all into one query means restructuring your data: you have to know which tables you're querying rather than having it be dynamic.
If the reason for these different tables is that each application type stores different information requiring a different column structure, you might want to look into key/value-pair storage in one large table. Once you combine the dynamically named tables into a single location, you can integrate your two queries.
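A key/value (entity-attribute-value) layout along those lines might look like this (table and column names are illustrative), after which the per-type data can be joined directly:

```sql
-- One row per (application, attribute) instead of one table per application type.
CREATE TABLE application_values (
    appid     INT          NOT NULL,
    attribute VARCHAR(64)  NOT NULL,   -- e.g. 'cost', 'description'
    value     VARCHAR(255),
    PRIMARY KEY (appid, attribute)
);

-- The former per-row dynamic SUM query becomes a single join:
SELECT a.id AS appid, SUM(v.value) AS cost
FROM applications a
JOIN application_values v ON v.appid = a.id AND v.attribute = 'cost'
GROUP BY a.id;
```

The trade-off is that every value is stored as a string and typed attributes lose database-level validation, which is why EAV is usually reserved for genuinely dynamic schemas.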