mysql update and insert statements based on another table - mysql

I need some help with the mysql statements for inserting and updating rows in a new table based on the contents of another table. I am going to use this in automated perl code, but the mysql statements themselves are what I am having trouble with.
My first table named PROFILE looks something like this:
+----------+---------------------------+
| ID       | NAME                      |
+----------+---------------------------+
| 0        | Default profile           |
| 04731470 | Development profile       |
| 87645420 | Core Base                 |
| a41401a0 | Core Test                 |
| ba0e3000 | Development profile child |
| e37fe780 | Test2                     |
+----------+---------------------------+
The second table, called DEPLOYMENT, has these columns (and no rows yet):
+------------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+------------+-------------+------+-----+---------+-------+
| PROF_ID | char(36) | NO | PRI | NULL | |
| NAME | varchar(60) | NO | | NULL | |
| ID | tinyint(4) | NO | MUL | NULL | |
+------------+-------------+------+-----+---------+-------+
PROFILE.ID is the foreign key referenced by DEPLOYMENT.PROF_ID, and I want all of the values of PROFILE.ID to go into DEPLOYMENT.PROF_ID. Then I want the DEPLOYMENT.NAME and DEPLOYMENT.ID fields to be set based on the words found in the PROFILE.NAME field.
The following shows what I want to do as far as the insert statements go, but these failed due to "ERROR 1242 (21000): Subquery returns more than 1 row":
INSERT INTO DEPLOYMENT(PROF_ID,NAME,ID) VALUES((select ID from PROFILE where NAME like '%core%'),'Core','2');
INSERT INTO DEPLOYMENT(PROF_ID,NAME,ID) VALUES((select ID from PROFILE where NAME like '%development%'),'Dev','3');
INSERT INTO DEPLOYMENT(PROF_ID,NAME,ID) VALUES((select ID from PROFILE where NAME not like '%development%' and NAME not like '%core%'),'Default','1');
I'm not sure where to start on the update part of this, but the DEPLOYMENT.ID and DEPLOYMENT.NAME fields should change as above if the text in the PROFILE.NAME field changes to contain any of the words above.
This is the resulting DEPLOYMENT table I am looking for:
+----------+---------------+----+
| PROF_ID  | NAME          | ID |
+----------+---------------+----+
| 0        | Default       | 1  |
| 04731470 | Dev           | 3  |
| 87645420 | Core          | 2  |
| a41401a0 | Core          | 2  |
| ba0e3000 | Dev           | 3  |
| e37fe780 | Default       | 1  |
+----------+---------------+----+
Then I want statements to update DEPLOYMENT if any of the PROFILE.NAME information changes.
Sorry if this is confusing; I wasn't sure how to explain it, and I am still learning mysql. Any help is appreciated.

Just get rid of the values keyword, basically:
INSERT INTO DEPLOYMENT(PROF_ID,NAME,ID)
select ID, 'Core','2'
from PROFILE
where NAME like '%core%';
INSERT INTO DEPLOYMENT(PROF_ID,NAME,ID)
select ID, 'Dev', '3'
from PROFILE
where NAME like '%development%';
INSERT INTO DEPLOYMENT(PROF_ID,NAME,ID)
select ID, 'Default', '1'
from PROFILE
where NAME not like '%development%' and NAME not like '%core%';
By the way, you could combine these into one statement, using conditional expressions:
INSERT INTO DEPLOYMENT(PROF_ID,NAME,ID)
select ID,
(case when NAME like '%core%' then 'Core'
when NAME like '%development%' then 'Dev'
else 'Default'
end),
(case when NAME like '%core%' then '2'
when NAME like '%development%' then '3'
else '1'
end)
from PROFILE;
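For the update half of the question, one possible sketch is a multi-table UPDATE that reapplies the same CASE logic whenever PROFILE.NAME changes (untested; it assumes DEPLOYMENT.PROF_ID already holds the matching PROFILE.ID values):
-- re-derive NAME and ID in DEPLOYMENT from the current PROFILE.NAME
UPDATE DEPLOYMENT d
JOIN PROFILE p ON p.ID = d.PROF_ID
SET d.NAME = (case when p.NAME like '%core%' then 'Core'
                   when p.NAME like '%development%' then 'Dev'
                   else 'Default'
              end),
    d.ID   = (case when p.NAME like '%core%' then '2'
                   when p.NAME like '%development%' then '3'
                   else '1'
              end);
If new PROFILE rows have to be picked up at the same time, INSERT ... ON DUPLICATE KEY UPDATE over the same SELECT is another option.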

Related

merger one row with null values to not null values of another row mysql

I want to merge two rows into one. The format below is what is in the database:
+----+---------+-----------------------+-------------------------+
| id | appid   | photo                 | signature               |
+====+=========+=======================+=========================+
| 1  | 10001   | 10001.photograph.jpg  | NULL                    |
| 2  | 10001   | NULL                  | 10001.signature.jpg     |
+----+---------+-----------------------+-------------------------+
I want a mysql query so that I can fetch data like below:
+--------+------------------------+-------------------------+
| appid  | photo                  | signature               |
+========+========================+=========================+
| 10001  | 10001.photograph.jpg   | 10001.signature.jpg     |
+--------+------------------------+-------------------------+
Kindly suggest...
You can also use the MAX function:
select appid,
max(photo) photo,
max(signature) signature
from test
group by appid
This should do it:
select t1.appid, t1.photo, t2.signature
from mytable t1
join mytable t2 on t1.appid = t2.appid
where t1.id = 1 and t2.id = 2;

MySQL - Select everything from one table, but only first matching value in second table

I'm feeling a little rusty with creating queries in MySQL. I thought I could solve this, but I'm having no luck and searching around doesn't result in anything similar...
Basically, I have two tables. I want to select everything from one table and the matching row from the second table. However, I only want to have the first result from the second table. I hope that makes sense.
The rows in the daily_entries table are unique. There will be one row for each day, but maybe not every day. The second table, notes, contains many rows, each of which is associated with ONE row from daily_entries.
Below are examples of my tables;
Table One
mysql> desc daily_entries;
+----------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------+--------------+------+-----+---------+----------------+
| eid | int(11) | NO | PRI | NULL | auto_increment |
| date | date | NO | | NULL | |
| location | varchar(100) | NO | | NULL | |
+----------+--------------+------+-----+---------+----------------+
Table Two
mysql> desc notes;
+---------+---------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------+---------+------+-----+---------+----------------+
| task_id | int(11) | NO | PRI | NULL | auto_increment |
| eid | int(11) | NO | MUL | NULL | |
| notes | text | YES | | NULL | |
+---------+---------+------+-----+---------+----------------+
What I need to do is select all entries from notes, with only one result from daily_entries.
Below is an example of how I want it to look:
+-------------------------------------------+---------+------------+----------+-----+
| notes                                     | task_id | date       | location | eid |
+-------------------------------------------+---------+------------+----------+-----+
| Another note                              | 3       | 2014-01-02 | Home     | 2   |
| Enter a note.                             | 1       | 2014-01-01 | Away     | 1   |
| This is a test note. To see what happens. | 2       |            | Away     | 1   |
| Testing another note                      | 4       |            | Away     | 1   |
+-------------------------------------------+---------+------------+----------+-----+
4 rows in set (0.00 sec)
Below is the query that I currently have:
SELECT notes.notes, notes.task_id, daily_entries.date, daily_entries.location, daily_entries.eid
FROM daily_entries
LEFT JOIN notes ON daily_entries.eid=notes.eid
ORDER BY daily_entries.date DESC
Below is an example of how it looks with my query:
+-------------------------------------------+---------+------------+----------+-----+
| notes                                     | task_id | date       | location | eid |
+-------------------------------------------+---------+------------+----------+-----+
| Another note                              | 3       | 2014-01-02 | Home     | 2   |
| Enter a note.                             | 1       | 2014-01-01 | Away     | 1   |
| This is a test note. To see what happens. | 2       | 2014-01-01 | Away     | 1   |
| Testing another note                      | 4       | 2014-01-01 | Away     | 1   |
+-------------------------------------------+---------+------------+----------+-----+
4 rows in set (0.00 sec)
At first I thought I could simply GROUP BY daily_entries.date; however, that returned only the first row of each matching set. Can this even be done? I would greatly appreciate any help someone can offer. Using LIMIT at the end of my query obviously limited it to the value that I specified, but applied it to everything, which was to be expected.
Basically, there's nothing wrong with your query. I believe it is exactly what you need because it is returning the data you want. You should not look at it as if it is duplicating your daily_entries; you should be looking at it as returning all notes with their associated daily_entry.
Of course, you can achieve what you described in your question (there's an answer already that solves this issue), but think twice before you do it, because such nested queries will only add noticeable performance overhead on your database server.
I'd recommend keeping your query as simple as possible with one single LEFT JOIN (which is all you need) and then letting the consuming applications manipulate the data and present it the way they need to.
Use MySQL's non-standard GROUP BY functionality:
SELECT n.notes, n.task_id, de.date, de.location, de.eid
FROM notes n
LEFT JOIN (select * from
(select * from daily_entries ORDER BY date DESC) x
group by eid) de ON de.eid = n.eid
You need to do these queries with explicit filtering for the first note per entry. This example uses a join to do this:
SELECT n.notes, n.task_id,
(case when nmin.min_task_id is not null then de.date end) as date,
de.location, de.eid
FROM daily_entries de LEFT JOIN
notes n
ON de.eid = n.eid LEFT JOIN
(select n.eid, min(task_id) as min_task_id
from notes n
group by n.eid
) nmin
on n.task_id = nmin.min_task_id
ORDER BY de.date DESC;

Defining a webservice for usage analytics (desktop application)

Current situation
I have a desktop application (C++ Win32), and I wish to track users' usage analytics anonymously (actions, clicks, usage time, etc.)
The tracking is done via designated web services for specific actions (install, uninstall, click) and everything is written by my team and stored on our DB.
The need
Now we're adding more usage types and events with a variety of data, so we need to define the services.
Instead of having tons of different web services for each action, I want to have a single generic service for all usage types, that is capable of receiving different data types.
For example:
"button_A_click" event, has data with 1 field: {window_name (string)}
"show_notification" event, has data with 3 fields: {source_id (int), user_action (int), index (int)}
Question
I'm looking for an elegant & convenient way to store this sort of diverse data, so later I could query it easily.
The alternatives I can think of:
Storing the different data for each usage type as a single JSON/XML field, but it would be extremely hard to pull data out and write queries against those fields
Having N extra data fields for each record, but it seems very wasteful.
Any ideas for this sort of model? Maybe something like Google Analytics? Please advise...
Technical: The DB is MySQL running under phpMyAdmin.
Disclaimer:
There is a similar post, which brought to my attention services like DeskMetrics and Tracker bird, or trying to embed Google Analytics into a native C++ application, but I'd rather the service be my own, and better understand how to design this sort of model.
Thanks!
This seems like a database normalization problem.
I am also going to assume that you have a table named events where all events will be stored.
Additionally, I am going to assume you have the following data attributes (for simplicity's sake): window_name, source_id, user_action, index
To achieve normalization, we will need the following tables:
events
data_attributes
attribute_types
This is how each of the tables should be structured:
mysql> describe events;
+------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+------------------+------+-----+---------+----------------+
| id | int(11) unsigned | NO | PRI | NULL | auto_increment |
| event_type | varchar(255) | YES | | NULL | |
+------------+------------------+------+-----+---------+----------------+
mysql> describe data_attributes;
+-----------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------------+------------------+------+-----+---------+----------------+
| id | int(11) unsigned | NO | PRI | NULL | auto_increment |
| event_id | int(11) | YES | | NULL | |
| attribute_type | int(11) | YES | | NULL | |
| attribute_name | varchar(255) | YES | | NULL | |
| attribute_value | int(11) | YES | | NULL | |
+-----------------+------------------+------+-----+---------+----------------+
mysql> describe attribute_types;
+-------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+------------------+------+-----+---------+----------------+
| id | int(11) unsigned | NO | PRI | NULL | auto_increment |
| type | varchar(255) | YES | | NULL | |
+-------+------------------+------+-----+---------+----------------+
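For reference, a rough sketch of CREATE TABLE statements matching the structures above (indexes or foreign keys on event_id and attribute_type could be added on top of this, and attribute_value could be widened if non-integer values are needed):
CREATE TABLE events (
  id INT(11) UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  event_type VARCHAR(255)
);
CREATE TABLE attribute_types (
  id INT(11) UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  type VARCHAR(255)
);
CREATE TABLE data_attributes (
  id INT(11) UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  event_id INT(11),
  attribute_type INT(11),
  attribute_name VARCHAR(255),
  attribute_value INT(11)
);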
The idea is that you will have to populate attribute_types with all possible types you can have. Then, for each new event, you will add an entry in the events table and corresponding entries in the data_attributes table to map that event to one or more attribute types with the appropriate values.
Example:
"button_A_click" event, has data with 1 field: {window_name "Dummy Window Name"}
"show_notification" event, has data with 3 fields: {source_id: 99, user_action: 44, index: 78}
would be represented as:
mysql> select * from attribute_types;
+----+-------------+
| id | type        |
+----+-------------+
| 1  | window_name |
| 2  | source_id   |
| 3  | user_action |
| 4  | index       |
+----+-------------+
mysql> select * from events;
+----+-------------------+
| id | event_type        |
+----+-------------------+
| 1  | button_A_click    |
| 2  | show_notification |
+----+-------------------+
mysql> select * from data_attributes;
+----+----------+----------------+-------------------+-----------------+
| id | event_id | attribute_type | attribute_name    | attribute_value |
+----+----------+----------------+-------------------+-----------------+
| 1  | 1        | 1              | Dummy Window Name | NULL            |
| 2  | 2        | 2              | NULL              | 99              |
| 3  | 2        | 3              | NULL              | 44              |
| 4  | 2        | 4              | NULL              | 78              |
+----+----------+----------------+-------------------+-----------------+
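As a sketch, the example rows above could be populated along these lines (the hard-coded event ids assume freshly created, empty tables; in application code LAST_INSERT_ID() would normally be used instead):
INSERT INTO attribute_types (type) VALUES ('window_name'), ('source_id'), ('user_action'), ('index');
-- button_A_click: one string attribute
INSERT INTO events (event_type) VALUES ('button_A_click');
INSERT INTO data_attributes (event_id, attribute_type, attribute_name) VALUES (1, 1, 'Dummy Window Name');
-- show_notification: three integer attributes
INSERT INTO events (event_type) VALUES ('show_notification');
INSERT INTO data_attributes (event_id, attribute_type, attribute_value) VALUES (2, 2, 99), (2, 3, 44), (2, 4, 78);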
To write a query for this data, you can use the COALESCE function in MySQL to get the value for you without having to check which of the columns is NULL.
Here's a quick example I hacked up:
SELECT events.event_type as `event_type`,
attribute_types.type as `attribute_type`,
COALESCE(data_attributes.attribute_name, data_attributes.attribute_value) as `value`
FROM data_attributes,
events,
attribute_types
WHERE data_attributes.event_id = events.id
AND data_attributes.attribute_type = attribute_types.id
Which yields the following output:
+-------------------+----------------+-------------------+
| event_type        | attribute_type | value             |
+-------------------+----------------+-------------------+
| button_A_click    | window_name    | Dummy Window Name |
| show_notification | source_id      | 99                |
| show_notification | user_action    | 44                |
| show_notification | index          | 78                |
+-------------------+----------------+-------------------+
EDIT: Bugger! I read C#, but I see you are using C++. Sorry about that. I leave the answer as-is as its principle could still be useful. Please regard the examples as pseudo-code.
You can define a custom class/structure that you use with an array. Then serialize this data and send to the WebService. For example:
[Serializable()]
public class ActionDefinition {
public string ID;
public ActionType Action; // define an Enum with possible actions
public List<string> Fields; // Or a list of 'some class' if you need more complex fields
}
List<ActionDefinition> AnalyticsCollection = new List<ActionDefinition>();
// ...
SendToWS(Serialize(AnalyticsCollection));
Now you can dynamically add as many events as you want with the needed flexibility.
on server side you can simply parse the data:
List<ActionDefinition> AnalyticsCollection = Deserialize(GetWS());
foreach (ActionDefinition ad in AnalyticsCollection) {
switch (ad.Action) {
//.. check for each action type
}
}
I would suggest adding security mechanisms such as a checksum. I imagine the de/serializer would be pretty custom in C++, so perhaps a simple Base64 encoding can do the trick, and it can be transported as ASCII text.
You could make a table for each event in which you declare what each param means. Then you have a main table in which you only input the event's name and param1 etc. An admin tool would be very easy: you go through all events and describe them using the table where each event is declared. E.g. for your event button_A_click you insert into the description table:
Name             Param1
button_A_Click   WindowTitle
So you can group your events or select only one event.
This is how I would solve it.
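A minimal sketch of that idea in SQL (the table and column names here are made up for illustration):
-- one row per event type, describing what each generic parameter means
CREATE TABLE event_descriptions (
  name   VARCHAR(255) NOT NULL PRIMARY KEY,
  param1 VARCHAR(255),
  param2 VARCHAR(255)
);
-- the main table only stores the event name and the generic parameter values
CREATE TABLE event_log (
  id     INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name   VARCHAR(255) NOT NULL,
  param1 VARCHAR(255),
  param2 VARCHAR(255)
);
INSERT INTO event_descriptions (name, param1) VALUES ('button_A_click', 'WindowTitle');
INSERT INTO event_log (name, param1) VALUES ('button_A_click', 'Main Window');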

Query on two tables for one report (Advanced)

I'm having some trouble with an advanced SQL query, and it's been a long time since I've worked with SQL databases. We use MySQL.
Background:
We will be working with two tables:
"Transactions Table"
table: expire_history
+---------------+-----------------------------+------+-----+-------------------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------------+-----------------------------+------+-----+-------------------+-------+
| m_id | int(11) | NO | PRI | 0 | |
| m_a_ordinal | int(11) | NO | PRI | 0 | |
| a_expired_date| datetime | NO | PRI | | |
| a_state | enum('EXPIRED','UNEXPIRED') | YES | | NULL | |
| t_note | text | YES | | NULL | |
| t_updated_by | varchar(40) | NO | | | |
| t_last_update | timestamp | NO | | CURRENT_TIMESTAMP | |
+---------------+-----------------------------+------+-----+-------------------+-------+
"Information Table"
table: information
+---------------------+---------------+------+-----+---------------------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------------------+---------------+------+-----+---------------------+-------+
| m_id | int(11) | NO | PRI | 0 | |
| m_a_ordinal | int(11) | NO | PRI | 0 | |
| a_type | varchar(15) | YES | MUL | NULL | |
| a_class | varchar(15) | YES | MUL | NULL | |
| a_state | varchar(15) | YES | MUL | NULL | |
| a_publish_date | datetime | YES | | NULL | |
| a_expire_date | date | YES | | NULL | |
| a_updated_by | varchar(20) | NO | | | |
| a_last_update | timestamp | NO | | CURRENT_TIMESTAMP | |
+---------------------+---------------+------+-----+---------------------+-------+
We have a set of fields in one table that describe the record. Each record is composed of an m_id (the person) and an ordinal (a person can have multiple records). So for instance, my m_id could be 1, and I could have multiple ordinals (1, 2, 3, 4, etc.), each with its own individual set of data. The m_id and m_a_ordinal fields comprise a composite key in the "information" table, and the m_id, m_a_ordinal, and a_expired_date fields in the "transactions" table comprise a composite key as well.
Essentially when we expire a record, the a_state field in the information table is updated to expired. At the same time, a record is created in the transactions table with the m_id, m_a_ordinal, and a_expired_date. We've found in the past that people get impatient and can click a button twice, so through some previous help I've managed to narrow down the most recent transaction for each expired record using the following query:
SELECT e1.m_id, e1.m_a_ordinal, e1.a_expired_date, e1.t_note, e1.t_updated_by
FROM expire_history e1
INNER JOIN (SELECT m_id, m_a_ordinal, MAX(a_expired_date) AS a_expired_date
FROM expire_history GROUP BY m_id, m_a_ordinal) e2
ON (e2.m_id = e1.m_id AND e2.m_a_ordinal = e1.m_a_ordinal AND e2.a_expired_date = e1.a_expired_date)
WHERE e2.a_expired_date > '2008-05-15 00:00:00' ORDER BY e1.a_expired_date;
Seems simple enough, right?
Let's add some complexity. Each record in the "information" table has a "natural expiration date" as well. The original developer of our software, however, didn't code it to change the state of the record to "expired" once it has reached its natural expiration date. It also does not write a transaction to the transaction table once it's expired (which I understand because this is only to keep records of ones that were expired by a person, as opposed to automagically). Also, when a record is expired manually, the original expiration date does not change. This is why this is so complicated :P.
Essentially I need to build a report that shows all aspects of expiration, whether it was expired manually, or naturally.
This report should take the data from the query above and combine it with another query on the "information" table that says: if a_expire_date <= CURDATE, show the record, except if the record exists in (query above from expire_history), in which case show the record from (query on expire_history).
A rough structure of the raw logic is as follows:
for x in record_total
if (m_id, m_a_ordinal) exists in expire_history
display (m_id, m_a_ordinal, a_expired_date, a_state)
else if (m_id, m_a_ordinal) exists in information AND a_expire_date <= CURDATE
display (m_id, m_a_ordinal, a_expire_date, a_state)
end if
x++
I hope that this is concise enough.
Thanks for any help you can provide!
SELECT i.m_id, i.m_a_ordinal,
coalesce(e1.a_expired_date, i.a_expire_date) as Expire_DT,
coalesce(e1.t_note, 'insert related item column') as t_note,
coalesce(e1.t_updated_by, i.a_updated_by) as Updated_By
FROM information i
LEFT JOIN expire_history e1
ON e1.m_id = i.m_id
AND i.m_a_ordinal = e1.m_a_ordinal
LEFT JOIN
(SELECT m_id, m_a_ordinal, MAX(a_expired_date) AS a_expired_date
FROM expire_history GROUP BY m_id, m_a_ordinal) e2
ON (e2.m_id = e1.m_id
AND e2.m_a_ordinal = e1.m_a_ordinal
AND e2.a_expired_date = e1.a_expired_date)
WHERE coalesce(e2.a_expired_date, i.a_expire_date) > '2008-05-15 00:00:00'
ORDER BY Expire_DT;
Syntax may be off a bit, I don't have time to test; but you can get the gist of it from this, I hope.
Again, what coalesce does is simply return the first non-NULL value in a series of values. If you're only dealing with two values, IFNULL may work as well.
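For example, with the columns from the query above, the two-value form would be something like:
SELECT IFNULL(e1.a_expired_date, i.a_expire_date) AS Expire_DT
FROM information i
LEFT JOIN expire_history e1
ON e1.m_id = i.m_id AND e1.m_a_ordinal = i.m_a_ordinal;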

Fast complex query to select bookings

I'm trying to write a query to get a courses information and the number of bookings and attendees. Each course can have many bookings and each booking can have many attendees.
We already have a working report, but it uses multiple queries to get the required information. One to get the courses, one to get the bookings, and one to get the number of attendees. This is very slow because of the size that the database has grown to.
There are a number of extra conditions for the reports:
Bookings must be made more than 5 minutes ago, or have been confirmed
The booking must not be canceled
The course must not be marked as deleted
The course's venue and location must be LIKE a search string
Courses with no bookings must appear in the results
This is the table structure: (I've omitted the unneeded information. All fields are not null and have no default)
mysql> DESCRIBE first_aid_courses;
+------------------+--------------+-----+----------------+
| Field | Type | Key | Extra |
+------------------+--------------+-----+----------------+
| id | int(11) | PRI | auto_increment |
| course_date | date | | |
| region_id | int(11) | | |
| location | varchar(255) | | |
| venue | varchar(255) | | |
| number_of_spaces | int(11) | | |
| deleted | tinyint(1) | | |
+------------------+--------------+-----+----------------+
mysql> DESCRIBE first_aid_bookings;
+-----------------------+--------------+-----+----------------+
| Field | Type | Key | Extra |
+-----------------------+--------------+-----+----------------+
| id | int(11) | PRI | auto_increment |
| first_aid_course_id | int(11) | | |
| placed | datetime | | |
| confirmed | tinyint(1) | | |
| cancelled | tinyint(1) | | |
+-----------------------+--------------+-----+----------------+
mysql> DESCRIBE first_aid_attendees;
+----------------------+--------------+-----+----------------+
| Field | Type | Key | Extra |
+----------------------+--------------+-----+----------------+
| id | int(11) | PRI | auto_increment |
| first_aid_booking_id | int(11) | | |
+----------------------+--------------+-----+----------------+
mysql> DESCRIBE regions;
+----------+--------------+-----+----------------+
| Field | Type | Key | Extra |
+----------+--------------+-----+----------------+
| id | int(11) | PRI | auto_increment |
| name | varchar(255) | | |
+----------+--------------+-----+----------------+
I need to select the following:
Course ID: first_aid_courses.id
Date: first_aid_courses.course_date
Region: regions.name
Location: first_aid_courses.location
Bookings: COUNT(first_aid_bookings)
Attendees: COUNT(first_aid_attendees)
Spaces Remaining: COUNT(first_aid_bookings) - COUNT(first_aid_attendees)
This is what I have so far:
SELECT `first_aid_courses`.*,
COUNT(`first_aid_bookings`.`id`) AS `bookings`,
COUNT(`first_aid_attendees`.`id`) AS `attendees`
FROM `first_aid_courses`
LEFT JOIN `first_aid_bookings`
ON `first_aid_courses`.`id` =
`first_aid_bookings`.`first_aid_course_id`
LEFT JOIN `first_aid_attendees`
ON `first_aid_bookings`.`id` =
`first_aid_attendees`.`first_aid_booking_id`
WHERE ( `first_aid_courses`.`location` LIKE '%$search_string%'
OR `first_aid_courses`.`venue` LIKE '%$search_string%' )
AND `first_aid_courses`.`deleted` = 0
AND ( `first_aid_bookings`.`placed` > '$five_minutes_ago'
AND `first_aid_bookings`.`cancelled` = 0
OR `first_aid_bookings`.`confirmed` = 1 )
GROUP BY `first_aid_courses`.`id`
ORDER BY `course_date` DESC
It's not quite working; can anyone help me with writing the correct query? Also, there are 1000s of rows in this database, so any help on making it fast is appreciated (like which fields to index).
OK, I've answered my own question. Sometimes it helps to ask a question in order to figure out the answer.
SELECT `first_aid_courses`.*,
`regions`.`name` AS `region_name`,
COUNT(DISTINCT `first_aid_bookings`.`id`) AS `bookings`,
COUNT(`first_aid_attendees`.`id`) AS `attendees`
FROM `first_aid_courses`
JOIN `regions`
ON `first_aid_courses`.`region_id` = `regions`.`id`
LEFT JOIN `first_aid_bookings`
ON `first_aid_courses`.`id` =
`first_aid_bookings`.`first_aid_course_id`
LEFT JOIN `first_aid_attendees`
ON `first_aid_bookings`.`id` =
`first_aid_attendees`.`first_aid_booking_id`
WHERE ( `first_aid_courses`.`location` LIKE '%$search_string%'
OR `first_aid_courses`.`venue` LIKE '%$search_string%' )
AND `first_aid_courses`.`deleted` = 0
AND ( `first_aid_bookings`.`cancelled` = 0
AND `first_aid_bookings`.`confirmed` = 1 )
GROUP BY `first_aid_courses`.`id`
ORDER BY `course_date` ASC
This is completely untested, but maybe try selecting a count of non-null rows for bookings and attendees, like this:
SUM(IF(`first_aid_bookings`.`id` IS NOT NULL, 1, 0)) AS `bookings`,
SUM(IF(`first_aid_attendees`.`id` IS NOT NULL, 1, 0)) AS `attendees`
Unless you have them but just do not show it, have a good look at indexes; without them you lose an order of magnitude of performance on any query that references anything but the primary key.
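As a rough, untested sketch based only on the joins and filters in the queries above, these are the sort of indexes to look at:
CREATE INDEX idx_bookings_course ON first_aid_bookings (first_aid_course_id);
CREATE INDEX idx_attendees_booking ON first_aid_attendees (first_aid_booking_id);
CREATE INDEX idx_courses_region ON first_aid_courses (region_id);
CREATE INDEX idx_courses_date ON first_aid_courses (course_date);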
Another major performance hit is the LIKE '%nnn%' matching.
Would it be possible to do something with those?
But with some good indexes, this query should be fine if you have the hardware to back it up.
I have queries doing LIKE on tables with millions of rows. It's not a problem if the rest of the query can eliminate any unnecessary matching.
You could go for subqueries to lessen the scope for the LIKE queries.
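For example, one way to narrow the LIKE scope is to filter the courses in a derived table first and only then join the bookings and attendees; a rough sketch (the booking-status conditions are left out for brevity):
-- filter courses first, then join bookings/attendees to the smaller result
SELECT c.*,
COUNT(DISTINCT b.id) AS bookings,
COUNT(a.id) AS attendees
FROM (SELECT * FROM first_aid_courses
WHERE deleted = 0
AND (location LIKE '%$search_string%' OR venue LIKE '%$search_string%')) c
LEFT JOIN first_aid_bookings b ON b.first_aid_course_id = c.id
LEFT JOIN first_aid_attendees a ON a.first_aid_booking_id = b.id
GROUP BY c.id
ORDER BY c.course_date ASC;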