MySQL - Selecting related items in a table - mysql

I'm tracking user visits to course pages on our website. I'm doing this so that for any given course (aka product) I can pull up a list of the top other course pages that users visited, who also visited the current page - just like Amazon's "Customers Who Viewed This Item Also Viewed" feature.
What I have is working, but as the data collected continues to grow, the query times are getting considerably slower and slower. I've now got approx 300k records and the queries are taking 2+ seconds each. We're expecting to start trimming the data when we reach about 2M records, but given the performance problems we're currently facing, I don't think this will be possible. I would like to know if there is a better approach to how I'm doing this.
Here's the gory details...
I've got a simple three column InnoDB table containing the user id, course number and a timestamp. The user id and course number fields are indexed, as is the user id/course number combined. Here's the table schema:
CREATE TABLE IF NOT EXISTS `coursetracker` (
`user` varchar(38) NOT NULL COMMENT 'user guid',
`course` char(8) NOT NULL COMMENT 'subject code and course number',
`visited` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'last visited time',
UNIQUE KEY `ndx_user_course` (`user`,`course`),
KEY `ndx_user` (`user`),
KEY `ndx_course` (`course`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='tracking user visits to courses';
Data in the table looks like this:
user | course | visited
=======================================|==========|====================
{00001A4C-1DE0-C4FB-0770-A758A167B97E} | OFFC2000 | 2013-01-19 23:18:03
{00001FB0-179E-1E28-F499-65451E5C1465} | FSCT8481 | 2013-01-30 13:12:29
{0000582C-5959-EF2B-0637-B5326A504F95} | COMP1409 | 2013-01-13 16:09:42
{0000582C-5959-EF2B-0637-B5326A504F95} | COMP2051 | 2013-01-13 16:20:41
{0000582C-5959-EF2B-0637-B5326A504F95} | COMP2870 | 2013-01-13 16:25:41
{0000582C-5959-EF2B-0637-B5326A504F95} | COMP2920 | 2013-01-13 16:24:40
{00012C64-2CA1-66DD-5DDC-B3714BFC91C3} | COMM0005 | 2013-02-18 21:32:36
{00012C64-2CA1-66DD-5DDC-B3714BFC91C3} | COMM0029 | 2013-02-18 21:34:04
{00012C64-2CA1-66DD-5DDC-B3714BFC91C3} | COMM0030 | 2013-02-18 21:34:50
{00019F46-6664-28DD-BCCD-FA6810B4EBB8} | COMP1409 | 2013-01-16 15:48:49
A sample query that I'm using to get the related courses to any given course (COMP1409 in this example), looks like this:
SELECT `course`,
count(`course`) c
FROM `coursetracker`
WHERE `user` IN
(SELECT `user`
FROM `coursetracker`
WHERE `course` = 'COMP1409')
AND `course` != 'COMP1409'
GROUP BY `course`
ORDER BY c DESC LIMIT 10
The results of this query look like this:
course | c
=========|====
COMP1451 | 470
COMP1002 | 367
COMP2613 | 194
COMP1850 | 158
COMP1630 | 156
COMP2617 | 126
COMP2831 | 119
COMP2614 | 95
COMP1911 | 79
COMP1288 | 76
So, everything above works exactly as I'd like, other than the performance. The table is so simple that there's nothing left to index. The SQL query results in the data that I'm looking for. I'm out of ideas on how to do this faster. I'd appreciate any feedback on the approach.

You could try with a join instead:
SELECT c1.`course`,
count(c1.`course`) as c
FROM `coursetracker` c1
INNER JOIN `coursetracker` c2
ON c1.`user` = c2.`user`
WHERE c2.`course` = 'COMP1409'
AND c1.`course` != 'COMP1409'
GROUP BY c1.`course`
ORDER BY c DESC LIMIT 10
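If the join version is still slow, a composite index on (course, user) might also help, since the side of the self-join that filters on course could then be resolved entirely from the index. This is only a suggested index (the name is made up); check with EXPLAIN whether it is actually used:
ALTER TABLE `coursetracker` ADD KEY `ndx_course_user` (`course`, `user`);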

Hard to tell without seeing your EXPLAIN, but maybe joining the table to itself will be faster?
SELECT c.`course`, count(c.`course`) c
FROM `coursetracker` c
INNER JOIN `coursetracker` c2 ON c.`user` = c2.`user`
WHERE c2.`course` = 'COMP1409'
AND c.`course` != 'COMP1409'
GROUP BY c.`course`
ORDER BY c DESC LIMIT 10
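For reference, you can get that plan by prefixing the statement with EXPLAIN; for example, against the original subquery version from the question:
EXPLAIN SELECT `course`, count(`course`) c
FROM `coursetracker`
WHERE `user` IN
(SELECT `user`
FROM `coursetracker`
WHERE `course` = 'COMP1409')
AND `course` != 'COMP1409'
GROUP BY `course`
ORDER BY c DESC LIMIT 10;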

Related

Need some help optimising an SQL query

My client was given the following code and he uses it daily to count the messages sent to businesses on his website. I have looked at the MySQL slow query log and it has the following stats for this query, which indicate to me that it needs optimising.
Count: 183 Time=44.12s (8073s) Lock=0.00s (0s)
Rows_sent=17337923391683297280.0 (-1), Rows_examined=382885.7
(70068089), Rows_affected=0.0 (0), thewedd1[thewedd1]#localhost
The query is:
SELECT
businesses.name AS BusinessName,
messages.created AS DateSent,
messages.guest_sender AS EnquirersEmail,
strip_tags(messages.message) AS Message,
users.name AS BusinessName
FROM
messages
JOIN users ON messages.from_to = users.id
JOIN businesses ON users.business_id = businesses.id
My SQL is not very good, but would a LEFT JOIN rather than a JOIN help to reduce the number of rows returned? I have run an EXPLAIN query and it seems to make no difference between the LEFT JOIN and the JOIN.
Basically, I think it would be good to reduce the number of rows returned, as it is absurdly large.
Short answer: There is nothing "wrong" with your query, other than the duplicate BusinessName alias.
Long answer: You can add indexes to the foreign/primary keys to speed up searching, which will do more than changing the query.
If you're using SSMS (SQL Server Management Studio), you can right-click on the indexes for a table and use the wizard.
Just don't be tempted to index all the columns, as that may slow down any inserts you do in future; stick to the ids and _ids unless you know what you're doing.
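Since this question is about MySQL rather than SQL Server, the equivalent would be plain ALTER TABLE statements. These are only a sketch: the index names are made up, and they assume the join columns shown in the query are not already indexed:
ALTER TABLE messages ADD INDEX idx_messages_from_to (from_to);
ALTER TABLE users ADD INDEX idx_users_business_id (business_id);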
he uses it daily to count the messages sent to businesses
If this is done per day, why not limit this to messages sent in specific recent days?
As an example: To count messages sent per business per day, for just a few recent days (example: 3 or 4 days), try this:
SELECT businesses.name AS BusinessName
, messages.created AS DateSent
, COUNT(*) AS n
FROM messages
JOIN users ON messages.from_to = users.id
JOIN businesses ON users.business_id = businesses.id
WHERE messages.created BETWEEN current_date - INTERVAL '3' DAY AND current_date
GROUP BY businesses.id
, DateSent
ORDER BY DateSent DESC
, n DESC
, businesses.id
;
Note: businesses.name is functionally dependent on businesses.id (in the GROUP BY terms), which is the primary key of businesses.
Example result:
+--------------+------------+---+
| BusinessName | DateSent | n |
+--------------+------------+---+
| business1 | 2021-09-05 | 3 |
| business2 | 2021-09-05 | 1 |
| business2 | 2021-09-04 | 1 |
| business2 | 2021-09-03 | 1 |
| business3 | 2021-09-02 | 5 |
| business1 | 2021-09-02 | 1 |
| business2 | 2021-09-02 | 1 |
+--------------+------------+---+
7 rows in set
This assumes your basic join logic is correct, which might not be true.
Other data could be returned as aggregated results if necessary, and because the query is now limited to just recent data, the number of rows examined should be much more reasonable.

SQL where not exists with multiple rows and status

I have the following tables (minified for the sake of simplicity):
CREATE TABLE IF NOT EXISTS `product_bundles` (
bundle_id int AUTO_INCREMENT PRIMARY KEY,
-- More columns here for bundle attributes
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS `product_bundle_parts` (
`part_id` int AUTO_INCREMENT PRIMARY KEY,
`bundle_id` int NOT NULL,
`sku` varchar(255) NOT NULL,
-- More columns here for product attributes
KEY `bundle_id` (`bundle_id`),
KEY `sku` (`sku`)
) ENGINE=InnoDB;
CREATE TABLE IF NOT EXISTS `products` (
`product_id` mediumint(8) AUTO_INCREMENT PRIMARY KEY,
`sku` varchar(64) NOT NULL DEFAULT '',
`status` char(1) NOT NULL default 'A',
-- More columns here for product attributes
KEY (`sku`)
) ENGINE=InnoDB;
And I want to show only the 'product bundles' that are currently completely in stock and defined in the database (since these get retrieved from a third party vendor, there is no guarantee the SKU is defined). So I figured I'd need an anti-join to retrieve it accordingly:
SELECT SQL_CALC_FOUND_ROWS *
FROM product_bundles AS bundles
WHERE 1
AND NOT EXISTS (
SELECT *
FROM product_bundle_parts AS parts
LEFT JOIN products AS products ON parts.sku = products.sku
WHERE parts.bundle_id = bundles.bundle_id
AND products.status = 'A'
AND products.product_id IS NULL
)
-- placeholder for other dynamic conditions for e.g. sorting
LIMIT 0, 24
Now, I sincerely thought this would filter out the products by status, however, that seems not to be the case. I then changed one thing up a bit, and the query never finished (although I believe it to be correct):
SELECT SQL_CALC_FOUND_ROWS *
FROM product_bundles AS bundles
WHERE 1
AND NOT EXISTS (
SELECT *
FROM product_bundle_parts AS parts
LEFT JOIN products AS products ON parts.sku = products.sku
AND products.status = 'A'
WHERE parts.bundle_id = bundles.bundle_id
AND products.product_id IS NULL
)
-- placeholder for other dynamic conditions for e.g. sorting
LIMIT 0, 24
Example data:
product_bundles
bundle_id | etc.
1 |
2 |
3 |
product_bundle_parts
part_id | bundle_id | sku
1 | 1 | 'sku11'
2 | 1 | 'sku22'
3 | 1 | 'sku33'
4 | 1 | 'sku44'
5 | 2 | 'sku55'
6 | 2 | 'sku66'
7 | 3 | 'sku77'
8 | 3 | 'sku88'
products
product_id | sku | status
101 | 'sku11' | 'A'
102 | 'sku22' | 'A'
103 | 'sku33' | 'A'
104 | 'sku44' | 'A'
105 | 'sku55' | 'D'
106 | 'sku66' | 'A'
107 | 'sku77' | 'A'
108 | 'sku99' | 'A'
Example result: Since the product status of product #105 is 'D' and 'sku88' from part #8 was not found:
bundle_id | etc.
1 |
I am running Server version: 10.3.25-MariaDB-0ubuntu0.20.04.1 Ubuntu 20.04
So there are a few questions I have.
Why does the first query not filter out products that do not have status 'A'?
Why does the second query not finish?
Are there alternative ways of achieving the same thing in a more efficient manner, as this looks rather cumbersome?
First of all, I've read that SQL_CALC_FOUND_ROWS is much slower than running two separate queries (a COUNT(*) and then the SELECT *, or, if you build your query inside another programming language like PHP, executing the SELECT * and then counting the number of rows in the result set).
Second: your first query returns all the bundles that don't have ANY active products, while you need the bundles with ALL products active.
I'd change it in the following:
SELECT SQL_CALC_FOUND_ROWS *
FROM product_bundles AS bundles
WHERE NOT EXISTS (
SELECT 'x'
FROM product_bundle_parts AS parts
LEFT JOIN products ON (parts.sku = products.sku)
WHERE parts.bundle_id = bundles.bundle_id
AND COALESCE(products.status, 'X') != 'A'
)
-- placeholder for other dynamic conditions for e.g. sorting
LIMIT 0, 24
I changed products.status = 'A' to products.status != 'A': this way the query will return all the bundles that DON'T have inactive products (I also removed the condition AND products.product_id IS NULL because it should have been in OR, but with a loss in performance).
You can see my solution in SQLFiddle.
Finally, to understand why your second query doesn't finish, you should check the structure of your tables and how they are indexed. Running EXPLAIN on the query could help you find any issues with the structure. Just put the keyword EXPLAIN before the SELECT and you'll have your "report" (EXPLAIN SELECT * ....).
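If the query is still slow once it returns the right rows, composite indexes roughly like these might help the subquery. This is only a sketch (the index names are invented), based on the columns shown in the schema above:
ALTER TABLE product_bundle_parts ADD KEY idx_parts_bundle_sku (bundle_id, sku);
ALTER TABLE products ADD KEY idx_products_sku_status (sku, status);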

Finding collaborations in MySQL

I have a database of people and projects. How can I find the names of people who collaborated with a given person, and on how many projects?
For example, I want to find the collaborators of Jimmy from the database:
+----------+--------+
| project | person |
+----------+--------+
| datamax | Jimmy |
| datamax | Ashley |
| datamax | Martin |
| cocoplus | Jimmy |
| cocoplus | Ashley |
| glassbox | Jimmy |
| glassbox | Martin |
| powerbin | Jimmy |
| powerbin | Ashley |
+----------+--------+
The result would look something like this:
Jimmy's collaborations:
+--------+----------------+
| person | collaborations |
+--------+----------------+
| Ashley | 3 |
| Martin | 2 |
+--------+----------------+
Join the table with itself, group by the person field:
SELECT u2.person, COUNT(u1.project) AS collaborations
FROM users u1
JOIN users u2 ON u2.project = u1.project
WHERE u1.person != u2.person AND u1.person = 'Jimmy'
GROUP BY u2.person;
The query selects the projects in which Jimmy participated from u1. The rows from u2 are filtered by the rows from u1. Duplicate entries, where the users from both tables match, are filtered out with the WHERE clause. Finally, the result set is grouped by person, and the COUNT function calculates the number of rows per group.
Performance
Note: an index covering the person and project columns (or two separate indexes) will significantly improve the performance of the query above. The specific index configuration depends on the table structure, although I think the following is quite enough for a table with two varchar fields for person and project, for instance:
ALTER TABLE users ADD INDEX `project` (`project`(10));
ALTER TABLE users ADD INDEX `person` (`person`(10));
Normalization
However, I would rather store persons and projects in separate tables with their numeric IDs. A third table could play the role of connector: person_id - project_id. In other words, I recommend normalization. With normalized tables, you will not need to build bloated indexes for the text fields.
Normalized tables may look as follows:
CREATE TABLE users (
id int unsigned NOT NULL AUTO_INCREMENT,
name varchar(200) NOT NULL DEFAULT '',
PRIMARY KEY(`id`),
-- This index is needed, if you want to fetch users by names
INDEX name (name(8))
);
CREATE TABLE projects (
id int unsigned NOT NULL AUTO_INCREMENT,
name varchar(100) NOT NULL DEFAULT '',
PRIMARY KEY(`id`)
);
CREATE TABLE collaborations (
project_id int unsigned NOT NULL DEFAULT 0,
user_id int unsigned NOT NULL DEFAULT 0,
PRIMARY KEY(`project_id`, `user_id`)
);
The query for the normalized structures will look a little bit more complex:
-- In practice, the user ID is retrieved from the calling process
-- (such as POST/GET HTTP requests, for instance).
SET @user_id := (SELECT id FROM users WHERE name LIKE 'Jimmy');
SELECT u.name person, COUNT(p.id) collaborations
FROM collaborations c
JOIN collaborations c2 USING(project_id)
JOIN users u ON u.id = c2.user_id
JOIN projects p ON p.id = c2.project_id
WHERE c.user_id = @user_id AND c.user_id != c2.user_id
GROUP BY c2.user_id;
But it will be fast, and the space required for the indexes will be significantly smaller, especially for large data sets.
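As a rough sketch of the migration (assuming the original flat table has been renamed to users_old, since the normalized users table reuses the name), the new tables could be populated like this:
INSERT INTO users (name)
SELECT DISTINCT person FROM users_old;

INSERT INTO projects (name)
SELECT DISTINCT project FROM users_old;

INSERT INTO collaborations (project_id, user_id)
SELECT p.id, u.id
FROM users_old o
JOIN projects p ON p.name = o.project
JOIN users u ON u.name = o.person;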
Original answer
To fetch the total number of projects for each person, use the COUNT function with a GROUP BY clause:
SELECT person, COUNT(*) AS collaborations
FROM users
GROUP BY person;

MySQL subquery from same table

I have a database with the tables xxx_facileforms_forms, xxx_facileforms_records and xxx_facileforms_subrecords.
Column headers for xxx_facileforms_subrecords:
id | record | element | title | name | type | value
When filtering records with element = '101', the query returns the proper records, but when I add a subquery to filter an additional element = '4871' from the same table, 0 records are returned.
SELECT
F.id AS form_id,
R.id AS record_id,
PV.value AS prim_val,
COUNT(PV.value) AS count
FROM
xxx_facileforms_forms AS F
INNER JOIN xxx_facileforms_records AS R ON F.id = R.form
INNER JOIN xxx_facileforms_subrecords AS PV ON R.id = PV.record AND PV.element = '101'
WHERE R.id IN (SELECT record FROM xxx_facileforms_records WHERE record = R.id AND element = '4871')
GROUP BY PV.value
Does this look right?
Thank You!
EDIT
Thank you for the support and ideas! Yes, I left a lot to guesswork. Sorry. Some input/output table data might help make it clearer.
_facileforms_form:
id | formname
---+---------
1 | myform
_facileforms_records:
id | form | submitted
----+------+--------------------
163 | 1 | 2014-06-12 14:18:00
164 | 1 | 2014-06-12 14:19:00
165 | 1 | 2014-06-12 14:20:00
_facileforms_subrecords:
id | record | element | title | name|type | value
-----+--------+---------+--------+-------------+--------
5821 | 163 | 101 | ticket | radio group | flight
5822 | 163 | 4871 | status | select list | canceled
5823 | 164 | 101 | ticket | radio group | flight
5824 | 165 | 101 | ticket | radio group | flight
5825 | 165 | 4871 | status | select list | canceled
Successful query result:
form_id | record_id | prim_val | count
1 | 163 | flight | 2
So I have to return the value data (and sum those records) from the records where the _subrecord element 4871 is present (in this case 163 and 165).
And again Thank You!
No, it doesn't look quite right. There's a predicate "R.id IN (subquery)" but that subquery itself has a reference to R.id; it's a correlated subquery. Looks like something is doubled up there. (We're assuming here that id is a UNIQUE or PRIMARY key in each table.)
The subquery references an identifier element... the only other reference we see to that identifier is in the _subrecords table (we don't see any reference to that column in the _records table). If there's no element column in _records, then that's a reference to the element column in PV, and that predicate in the subquery will never be true at the same time the PV.element='101' predicate is true.
Kudos for qualifying the column references with a table alias, that makes the query (and the EXPLAIN output) much easier to read; the reader doesn't need to go digging around in the table definitions to figure out which table does and doesn't contain which columns. But please take that pattern to the next step, and qualify all column references in the query, including column references in the subqueries.
Since the reference to element isn't qualified, we're left to guess whether the _records table contains a column named element.
If the goal is to return only the rows from R with element='4871', we could just do...
WHERE R.element='4871'
But, given that you've gone to the bother of using a subquery, I suspect that's not really what you want.
It's possible you're trying to return all rows from R for a _form, but only for a _form where there's at least one associated _record with element='4871'. We could get that result returned with either an IN (subquery) or an EXISTS (correlated subquery) predicate, or an anti-join pattern. I'd give examples of those query patterns, but I could only take guesses at the specification, and I would only be guessing at what you actually want to return.
But I'm guessing that's not really what you want. I suspect that _records doesn't actually contain a column named element.
(The query is already restricting the rows returned from PV to those that have element='101'.)
This is a case where some example data and the example output would help explain the actual specification; and that would be a basis for developing the required SQL.
FOLLOWUP
I'm just guessing... maybe what you want is something pretty simple. Maybe you want to return rows that have an element value of either '101' or '4871'.
The IN comparison operator is a convenient way of expressing the OR condition that a column be equal to any one of a list of values:
SELECT F.id AS form_id
, R.id AS record_id
, PV.value AS prim_val
, COUNT(PV.value) AS count
FROM xxx_facileforms_forms F
JOIN xxx_facileforms_records R
ON R.form = F.id
JOIN xxx_facileforms_subrecords PV
ON PV.record = R.id
AND PV.element IN ('101','4871')
GROUP BY PV.value
NOTE: This query (like the OP query) is using a non-standard MySQL extension to GROUP BY, which allows non-aggregate expressions (e.g. bare columns) to be returned in the SELECT list.
The values returned for the non-aggregate expressions (in this case, F.id and R.id) will be values from a row included in the "group". But because there can be multiple rows, with different values on those rows, it's not deterministic which of those values will be returned. (Other databases would reject this statement, unless we wrapped those columns in an aggregate function, such as MIN() or MAX().)
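For example, a more portable form of the same query (just a sketch) would wrap those columns in an aggregate:
SELECT MIN(F.id) AS form_id
, MIN(R.id) AS record_id
, PV.value AS prim_val
, COUNT(PV.value) AS count
FROM xxx_facileforms_forms F
JOIN xxx_facileforms_records R
ON R.form = F.id
JOIN xxx_facileforms_subrecords PV
ON PV.record = R.id
AND PV.element IN ('101','4871')
GROUP BY PV.value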
FOLLOWUP
I noticed that you added information about the question into an answer... this information would better be added to the question as an EDIT, since it's not an answer to the question. I took the liberty of copying that, and reformatting.
The example makes it much more clear what you are trying to accomplish.
I think the easiest to understand is to use an EXISTS predicate, to check whether a row meeting some criteria "exists" or not, and exclude rows where such a row does not exist. This uses a correlated subquery against the _subrecords table to check for the existence of a matching row:
SELECT f.id AS form_id
, r.id AS record_id
, pv.value AS prim_val
, COUNT(pv.value) AS count
FROM xxx_facileforms_forms f
JOIN xxx_facileforms_records r
ON r.form = f.id
JOIN xxx_facileforms_subrecords pv
ON pv.record = r.id
AND pv.element = '101'
-- only include rows where there's also a related 4871 subrecord
WHERE EXISTS ( SELECT 1
FROM xxx_facileforms_subrecords sx
WHERE sx.element = '4871'
AND sx.record = r.id
)
--
GROUP BY pv.value
(I'm thinking this is where OP was headed with the idea that a subquery was required.)
Given that there's a GROUP BY in the query, we could actually accomplish an equivalent result with a regular join operation, to a second reference to the _subrecords table.
A join operation is often more efficient than using an EXISTS predicate.
(Note that the existing GROUP BY clause will eliminate any "duplicates" that might otherwise be introduced by a JOIN operation, so this will return an equivalent result.)
SELECT f.id AS form_id
, r.id AS record_id
, pv.value AS prim_val
, COUNT(pv.value) AS count
FROM xxx_facileforms_forms f
JOIN xxx_facileforms_records r
ON r.form = f.id
JOIN xxx_facileforms_subrecords pv
ON pv.record = r.id
AND pv.element = '101'
-- only include rows where there's also a related 4871 subrecord
JOIN xxx_facileforms_subrecords sx
ON sx.record = r.id
AND sx.element = '4871'
--
GROUP BY pv.value
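With either form, a composite index on the _subrecords table should let MySQL resolve both the element='101' lookup and the element='4871' existence check from the index. This is only a suggestion (the index name is invented), assuming no equivalent index already exists:
ALTER TABLE xxx_facileforms_subrecords ADD KEY idx_record_element (record, element);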

MySQL Slow Query Optimisation

I have a database of ~800k records showing ticket purchases. All tables are InnoDB. The slow query is:
SELECT e.id AS id, e.name AS name, e.url AS url, p.action AS action, gk.key AS `key`
FROM event AS e
LEFT JOIN participation AS p ON p.event=e.id
LEFT JOIN goldenkey AS gk ON gk.issuedto=p.person
WHERE p.person='139160'
OR p.person IS NULL;
This query is coming from PDO hence quoting of p.person. All columns used in JOINs and WHERE are indexed. p.event is foreign key constrained to e.id and gk.issuedto and p.person are foreign key constrained to an unmentioned table, person.id. All these are INTs. The table e is small - only 10 rows. Table p is ~500,000 rows and gk is empty at this time.
This query runs on a person's details page. We want to get a list of all events, then if there is a participation row their participation and if there is a golden key row then their golden key.
Slow query log gives:
Query_time: 12.391201 Lock_time: 0.000093 Rows_sent: 2 Rows_examined: 466104
EXPLAIN SELECT gives:
+----+-------------+-------+------+---------------+----------+---------+----------------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------+----------+---------+----------------+------+-------------+
| 1 | SIMPLE | e | ALL | NULL | NULL | NULL | NULL | 10 | |
| 1 | SIMPLE | p | ref | event | event | 4 | msadb.e.id | 727 | Using where |
| 1 | SIMPLE | gk | ref | issuedto | issuedto | 4 | msadb.p.person | 1 | |
+----+-------------+-------+------+---------------+----------+---------+----------------+------+-------------+
This query runs at 7~12 seconds on first run for a given p.person then <0.05s in future. Dropping the OR p.person IS NULL does not improve query time. This query slowed right down when the size of p was increased from ~20k to ~500k (import of old data).
Does anyone have any suggestions on how to improve performance? Remembering overall aim is to retrieve a list of all events, then if there is a participation row their participation and if there is a golden key row then their golden key. If multiple queries will be more efficient I can do that.
If you can do away with p.person IS NULL try the following and see if it helps:
SELECT e.id AS id, e.name AS name, e.url AS url, p.action AS action, gk.key AS `key`
FROM event AS e
LEFT JOIN participation AS p ON (p.event=e.id AND p.person='139160')
LEFT JOIN goldenkey AS gk ON gk.issuedto=p.person
For grins... Add the keyword "STRAIGHT_JOIN" to your select...
SELECT STRAIGHT_JOIN ... rest of query...
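Applied to the query from the question, that would look like the following; STRAIGHT_JOIN simply forces MySQL to join the tables in the order they are written, which may or may not help here:
SELECT STRAIGHT_JOIN e.id AS id, e.name AS name, e.url AS url, p.action AS action, gk.key AS `key`
FROM event AS e
LEFT JOIN participation AS p ON p.event=e.id
LEFT JOIN goldenkey AS gk ON gk.issuedto=p.person
WHERE p.person='139160'
OR p.person IS NULL;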
I'm not sure how many indexes you have or what your table schema looks like, but try to avoid using NULL values by default; they can slow down your queries dramatically.
If you are doing a lookup for one particular person, which I'm guessing you are since you have the person id filter in there, I would try to reverse the query, so you first search the participation rows for that person and then UNION an additional query which gives you all the events.
SELECT
e.id AS id, e.name AS name, e.url AS url,
p.action AS action, gk.key AS `key`
FROM participation AS p
JOIN event AS e ON p.event=e.id
LEFT JOIN goldenkey AS gk ON gk.issuedto=p.person
WHERE p.person='139160'
UNION
SELECT
e.id AS id, e.name AS name, e.url AS url,
NULL, NULL
FROM event AS e
This would obviously mean you have a duplicate event in case the first query matches, but that's easily solved by wrapping a select around the whole thing, or maybe by using a variable and selecting the e.id into it in the first query and using that variable in the second query (not sure if this will work though, haven't tested it, but I can't see why not).
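As a rough sketch of the "wrapping a select around the whole thing" idea (assuming the UNION above, with the person filter applied, is used as a derived table), the duplicate event rows could be collapsed like this:
SELECT id, name, url, MAX(action) AS action, MAX(`key`) AS `key`
FROM (
SELECT e.id AS id, e.name AS name, e.url AS url,
p.action AS action, gk.key AS `key`
FROM participation AS p
JOIN event AS e ON p.event=e.id
LEFT JOIN goldenkey AS gk ON gk.issuedto=p.person
WHERE p.person='139160'
UNION
SELECT e.id, e.name, e.url, NULL, NULL
FROM event AS e
) AS combined
GROUP BY id, name, url;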