I'm trying to update a JSON column as part of a "Kick from Team" function I'm working on. I've tried several solutions I found on Google.
My first attempt was this:
UPDATE table SET aMembers = JSON_REMOVE(aMembers, '$[1]') WHERE id = 1
aMembers (column type: JSON) looks like this:
[1, 2, 8, 99, 12, 233, 819]
That works, in a way: it removes the given index from aMembers. But that's not what I'm after. I need something that removes the value 1 from aMembers.
Alright, then I tried this one:
UPDATE table SET aMembers = JSON_REMOVE(aMembers, JSON_UNQUOTE(JSON_SEARCH(aMembers, 'one', '1'))) WHERE id = 1
This sets my whole column to NULL, which is also not what I'm looking for. Am I doing this wrong, or is this just not possible? Is there a query that'll remove the value 1 from my column, or am I forced to:
1. With js - Get column aMembers
2. Find out at which index ID 1 is at
3. Create a new query that'll remove index X
For DB I am using MariaDB.
For my frontend I am using NextJS.
Backend is NodeJS.
I chose to solve this by using the index. Each time I print the members out, they appear in the same positions as in the JSON column, which means I can simply use the index to remove member X from the column.
So each child, when printed out, gets an index prop, and this prop is sent to my API that kicks them. The API then updates the JSON in the MySQL table row to remove index X from the column.
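For what it's worth, the second attempt most likely fails because JSON_SEARCH only matches string values: searching an array of numbers for '1' returns NULL, and JSON_REMOVE with a NULL path makes the whole expression NULL. The remove-by-value step itself is easy to do in application code before writing the column back. A minimal sketch of that step (plain Python, no database driver; the function name is made up):

```python
import json

def remove_member(a_members_json: str, member_id: int) -> str:
    """Remove every occurrence of member_id from the serialized aMembers array."""
    members = json.loads(a_members_json)
    members = [m for m in members if m != member_id]
    return json.dumps(members)

print(remove_member("[1, 2, 8, 99, 12, 233, 819]", 1))
# [2, 8, 99, 12, 233, 819]
```

The same filter-and-reserialize step works identically in Node.js with JSON.parse/JSON.stringify before the UPDATE.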
As part of a tool I am creating for my team I am connecting to an internal web service via PowerQuery.
The web service returns nested JSON, and I have trouble parsing the JSON data to the format I am looking for. Specifically, I have a problem with extracting the content of records in a column to a comma separated list.
The data
As you can see, the data contains details related to a specific "race" (race_id). What I want to focus on is the information in the driver_codes column, which is a List of Records. The number of records varies from 0 to 4, and each record is structured as id: 50000 (50000 could be any 5-digit number). So it could be:
id: 10000
id: 20000
id: 30000
As requested, an example snippet of the raw data (as XML):
<race>
<race_id>ABC123445</race_id>
<begin_time>2018-03-23T00:00:00Z</begin_time>
<vehicle_id>gokart_11</vehicle_id>
<driver_code>
<id>90200</id>
</driver_code>
<driver_code>
<id>90500</id>
</driver_code>
</race>
I want it to be structured as:
10000,20000,30000
The problem
When I choose "Extract values" on the column with the list, then I get the following message:
Expression.Error: We cannot convert a value of type Record to type Text.
If I instead choose "Expand to new rows", then duplicate rows are created for each unique driver code. I now have several rows per unique race_id, but what I wanted was one row per unique race_id and a concatenated list of driver codes.
What I have tried
I have tried grouping the data by the race_id, but the operations allowed when grouping data do not include concatenating rows.
I have also tried unpivoting the column, but that leaves me with the same problem: I still get multiple rows.
I have googled (and Stack Overflowed) this issue extensively without luck. It might be that I am using the wrong keywords, however, so I apologize if a duplicate exists.
UPDATE: What I have tried based on the answers so far
I tried Alexis Olson's excellent and very detailed method, but I end up with the following error:
Expression.Error: We cannot convert the value "id" to type Number. Details:
Value=id
Type=Type
The error comes from using either of these lines of M code (one with a List.Transform and one without):
= Table.Group(#"Renamed Columns", {"race_id", "begin_time", "vehicle_id"},
{{"DriverCodes", each Text.Combine([driver_code][id], ","), type text}})
= Table.Group(#"Renamed Columns", {"race_id", "begin_time", "vehicle_id"},
{{"DriverCodes", each Text.Combine(List.Transform([driver_code][id], each Number.ToText(_)), ","), type text}})
NB: if I do not write [driver_code][id] but only [id] then I get another error saying that column [id] does not exist.
Here's the JSON equivalent to the XML example you gave:
{
  "race": {
    "race_id": "ABC123445",
    "begin_time": "2018-03-23T00:00:00Z",
    "vehicle_id": "gokart_11",
    "driver_code": [
      { "id": "90200" },
      { "id": "90500" }
    ]
  }
}
If you load this into the query editor, convert it to a table, and expand out the Value record, you'll have a table that looks like this:
At this point, choose Expand to New Rows, and then expand the id column so that your table looks like this:
At this point, you can apply the trick #mccard suggested. Group by the first columns and aggregate over the last using, say, max.
This last step produces M code like this:
= Table.Group(#"Expanded driver_code1",
{"Name", "race_id", "begin_time", "vehicle_id"},
{{"id", each List.Max([id]), type text}})
Instead of this, you want to replace List.Max with Text.Combine as follows:
= Table.Group(#"Changed Type",
{"Name", "race_id", "begin_time", "vehicle_id"},
{{"id", each Text.Combine([id], ","), type text}})
Note that if your id column is not of type text, then this will throw an error. To fix this, insert a step before you group rows using Transform Tab > Data Type: Text to convert the type. Another option is to use List.Transform inside your Text.Combine like this:
Text.Combine(List.Transform([id], each Number.ToText(_)), ",")
Either way, you should end up with this:
An approach would be to use the Advanced Editor and change the operation done when grouping the data directly there in the code.
First, create the grouping using one of the operations available in the menu. For instance, create a column "Sum" using the Sum operation. It will give an error, but we should get the starting code to work on.
Then, open the Advanced Editor and find the code corresponding to the operation. It should be something like:
{{"Sum", each List.Sum([driver_codes]), type text}}
Change it to:
{{"driver_codes", each Text.Combine([driver_codes], ","), type text}}
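Outside of M, the logic of this Table.Group/Text.Combine step is just "group rows by race_id, then join each group's ids with commas". A quick sketch of that logic (Python, with made-up sample rows):

```python
from collections import defaultdict

# Sample rows as they look after "Expand to New Rows": one row per driver id
rows = [
    {"race_id": "ABC123445", "id": "90200"},
    {"race_id": "ABC123445", "id": "90500"},
    {"race_id": "XYZ987", "id": "10000"},
]

# Group by race_id, collecting ids in row order (Table.Group)
grouped = defaultdict(list)
for row in rows:
    grouped[row["race_id"]].append(row["id"])

# Concatenate each group into one comma-separated string (Text.Combine)
driver_codes = {race: ",".join(ids) for race, ids in grouped.items()}
print(driver_codes)
# {'ABC123445': '90200,90500', 'XYZ987': '10000'}
```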
I have a database with 2 tables that look like this:
content
id name
1 Cool Stuff
2 Even Better stuff
--
contentFields
id content label value
5 1 Rating Spectacular
6 1 Info Top Notch
7 2 Rating Poor
As you can see the content column of the contentFields table coincides with the id column of the content table.
I want to write a query that grabs all of the content and stores the applicable content fields with the right content, so that it comes out to this:
[
{
id: 1,
name: 'Cool Stuff',
contentFields: [
{label: 'Rating', value: 'Spectacular'},
{label: 'Info', value: 'Top Notch'}
]
},
{
id: 2,
name: 'Even Better Stuff',
contentFields: [
{label: 'Rating', value: 'Poor'}
]
}
]
I tried an inner join like this:
SELECT * FROM content INNER JOIN contentFields ON content.id = contentFields.content GROUP BY content.id
But that didn't do it.
*Note: I know that I could do this with 2 separate queries, but I want to find out how to do it in one, as that would dramatically improve performance.
What you are trying to achieve is not directly possible with SQL only.
As you have already stated yourself, you are looking for a table within a table. But MySQL does not know about such concepts, and as far as I know, other databases also don't. A result set is always like a table; every row of the result set has the same structure.
So either you leave your GROUP BY content.id in place; then, for every row in the result set, MySQL will select an arbitrary row from the joined table that fits that row's content.id (you can't even rely on it being the same row every time).
Or you remove the GROUP BY; then you will get every row from the joined table, but that is not what you want as well.
When performance is an issue, I would probably choose the second option, adding ORDER BY content.id, and generate the JSON myself. You could do so by looping through the result set and begin a new JSON block every time the content.id changes.
Disclaimer The following is pure speculation.
I don't know anything about node.js and how it transforms result sets into JSON. But I strongly assume that you can configure its behavior; otherwise, it actually would not be of any use in most cases. So there must be a method to tell it how it should group the rows from a result set.
If I am right, you would first have to tell node.js how to group the result set and then let it process the rows from the second option above (i.e. without the GROUP BY).
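The "generate the JSON myself" loop from the second option could look like this (a Python sketch; it assumes the rows come back ordered by content.id, with column names taken from the question's tables):

```python
def build_content(rows):
    """Fold an ordered, joined result set into one object per content row."""
    out = []
    for row in rows:
        # Start a new block whenever the content id changes
        if not out or out[-1]["id"] != row["content"]:
            out.append({"id": row["content"], "name": row["name"], "contentFields": []})
        out[-1]["contentFields"].append({"label": row["label"], "value": row["value"]})
    return out

# Rows as the JOIN without GROUP BY would return them, ordered by content.id
rows = [
    {"content": 1, "name": "Cool Stuff", "label": "Rating", "value": "Spectacular"},
    {"content": 1, "name": "Cool Stuff", "label": "Info", "value": "Top Notch"},
    {"content": 2, "name": "Even Better Stuff", "label": "Rating", "value": "Poor"},
]
result = build_content(rows)
```

The same fold is a few lines in Node.js as well, so the single-query approach stays cheap on the application side.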
Using Rails 4, Ruby 2, MySql
I would like to find all the records in my database which are repeats of another record - but not the original record itself.
This is so I can update_attributes(:duplicate => true) on each of these records and leave the original one not marked as a duplicate.
You could say that I am looking for the opposite of uniq.* I don't want the unique values; I want all the values which are not unique after the fact. And I don't want all values which have a duplicate, as that would include the original.
I don't mind using pure SQL or Ruby for this but I would prefer to use active record to keep it Railsy.
Let's say the table is called "Leads" and we are looking for those where the field "telephone_number" is the same. I would leave record 1 alone and mark 2,3 and 4 as duplicate = true.
* If I wanted the opposite of Uniq I could do something like Find keep duplicates in Ruby hashes
b = a.group_by { |h| h[:telephone_number] }.values.select { |a| a.size > 1 }.flatten
But that is all the records, I want all the duplicated ones other than the original one I'm comparing it to.
I'm assuming your query returns, in the array b, all Leads that share a telephone number. You can then use
b.shift
which removes the first element from the b array in place. (Note that Array#shift returns the removed element, not the remaining array, so don't assign it back with b = b.shift.) Then you can continue with your original thought, update_attributes(:duplicate => true).
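One caveat: since b flattens the groups for every duplicated telephone number together, removing a single leading element only spares the original of the first group. To keep the first record of each group and mark the rest, the logic looks like this (a Python sketch; field names taken from the question):

```python
from collections import defaultdict

def ids_to_mark(leads):
    """Return ids of all duplicates except the first occurrence per number."""
    groups = defaultdict(list)
    for lead in leads:  # leads assumed ordered by id, oldest first
        groups[lead["telephone_number"]].append(lead["id"])
    marked = []
    for ids in groups.values():
        if len(ids) > 1:
            marked.extend(ids[1:])  # keep the original, mark the rest
    return marked

leads = [
    {"id": 1, "telephone_number": "0123"},
    {"id": 2, "telephone_number": "0123"},
    {"id": 3, "telephone_number": "0123"},
    {"id": 4, "telephone_number": "0123"},
    {"id": 5, "telephone_number": "9999"},
]
print(ids_to_mark(leads))  # [2, 3, 4]
```

The Ruby equivalent is to replace the final .flatten in the question's snippet with .flat_map { |g| g.drop(1) }.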
This could be an X/Y problem, so if there's a better approach altogether, I'd love to hear it. The summary of the problem starts at the last code block, so skip to that if you want and come back to the details if needed.
I am building a content manager (if nothing else, for the experience). To get started understanding what data I need, I made a "pages" table with this structure:
id (page id) | path (where it is found) | title | content | (etc.. some other stuff)
So, it's the content area that is the trouble spot. Let me explain the end result: I need a content map that has an object of positions, each of which is an array of content ids that belong in that position. Here's a sample:
{ header: [5], main: [4,1], footer: [2,8,9] }
Those id's will then go to a content table, pull each item (like id 5) and replace the id with the actual content/settings for that item. That isn't really as relevant for now.
I can't just store the json right to the db in the content field of "pages" because if I were to delete content item "5", it would still be in the json for that page. I need to be able to delete content item 5, and it automatically be removed from wherever it is used.
That led me to this:
I create another table that tracks where content items are used and the order. Here's the structure for that table (content_locations):
pageId (what page this content is on) | contentId (which content) | position | order
So, I think that gets me on the right track on being able to delete things... if I delete a page, I believe I can set it up to delete the rows it has in content_locations and also set it up so that removing a content item will remove the content_locations rows for that item. I honestly haven't tried that yet, but I'm pretty sure that's possible. If not, I'm really lost :)
My main issue seems to be the ordering. Consider this set of data:
pageId, contentId, position, order
2, 6, header, 0
2, 1, header, 1
2, 4, header, 2
How could I insert a first item (can't insert before 0) or what if I wanted to insert in between one of those (1 1/2) or what if I deleted item 1? I run into a big problem with reordering. Is this a problem with my idea of how to structure the data, or is there a good solution for dealing with an ordering column such as that?
Other issues notwithstanding, I'll just comment on reordering...
If you need to make space to insert a new row, you can easily move the other rows out of the way by:
UPDATE your_table
SET `order` = `order` + 1
WHERE pageId = ... AND `order` >= 0
(Replace 0 with the position at which you want to insert the new row. Note that ORDER is a reserved word in MySQL, hence the backticks.)
You can do the opposite after delete, or you can just leave a hole - these holes can be easily "collapsed" at the presentation level.
Unless you have a large number of rows per page, this should be reasonably quick. If not, consider leaving holes in advance, and moving elements only if the hole is completely filled.
BTW, to switch two rows, you can do something like this:
UPDATE your_table
SET `order` =
CASE `order`
WHEN 2 THEN 3
WHEN 3 THEN 2
END
WHERE pageId = ... AND `order` IN (2, 3)
(Replace 2 and 3 with the actual positions.)
I think the correct way to do this is to have an "after" column that references another row, to say that this row comes after that one. Although that's going to make for some complex SQL. Alternatively, just start the order off as 0, 1000, 2000, ..., which gives you space to insert things.
One way to do the insert is to multiply every order value by two and add one -- this is a fairly trivial query. Then you have space to insert, so your table becomes:
pageId, contentId, position, order
2, 6, header, 1
2, 1, header, 3
2, 4, header, 5
and then you just insert the new row with an even order value (the desired former position times two), which is guaranteed to be free.
Periodically you'll have to collapse the numbering to stop the values from overflowing, but if you check whether a space exists beforehand, this should be rare. (You can use ROW_NUMBER to do the renumbering.)
I wouldn't embed JSON in the database at all if you can avoid it, parse it and add it to separate tables to make your life easier later.
I am trying to make use of the mobile device lookup data in the WURFL database at http://wurfl.sourceforge.net/smart.php, but I'm having problems getting my head around the MySQL code needed (I use ColdFusion for the server backend). To be honest, it's really doing my head in, but I'm sure there is a straightforward approach to this.
The WURFL is supplied as XML (approx. 15,200 records to date), and I have already written the method that saves the data to a MySQL database. Now I need to get the data back out in a useful way!
Basically it works like this: first, run a SELECT that matches the userAgent data from a CGI pull against a known mobile device (row 1) using LIKE; if found, use the resulting fallback field to look up the default data for the mobile device's 'family root' (row 2). The two rows then need to be combined by overwriting the contents of row 2 with the specific mobile device's features from row 1. Both rows contain NULL entries, and not all the features are present in row 1.
I just need the fully populated row of data returned if a match is found. I hope that makes sense, I would provide what I think the SQL should look like but I will probably confuse things even more.
Really appreciate any assistance!
This would be my shot at it in SQL Server syntax; in MySQL you would need to use IFNULL instead of ISNULL:
SELECT
ISNULL(row1.Feature1, row2.Feature1) AS Feature1
, ISNULL(row1.Feature2, row2.Feature2) AS Feature2
, ISNULL(row1.Feature3, row2.Feature3) AS Feature3
FROM
featureTable row1
LEFT OUTER JOIN featureTable row2 ON row1.fallback = row2.familyroot
WHERE row1.userAgent LIKE '%Some User Agent String%'
This should accomplish the same thing in MySQL:
SELECT
IFNULL(row1.Feature1, row2.Feature1) AS Feature1
, IFNULL(row1.Feature2, row2.Feature2) AS Feature2
, IFNULL(row1.Feature3, row2.Feature3) AS Feature3
FROM
featureTable AS row1
LEFT OUTER JOIN featureTable AS row2 ON row1.fallback = row2.familyroot
WHERE row1.userAgent LIKE '%Some User Agent String%'
So what this does is take your feature table, aliased as row1, to get your specific model's features. We then join it back to itself as row2 to get the family features. Then the ISNULL/IFNULL function says: "if there is no Feature1 value in row1 (it's NULL), then take the Feature1 value from row2".
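Per feature column, the coalescing works like this (a Python sketch; the feature names and values are invented):

```python
def merge_features(device, family):
    """Device-specific value wins unless it is NULL (None); fall back otherwise.
    Mirrors IFNULL(row1.X, row2.X) applied column by column."""
    return {key: device[key] if device.get(key) is not None else family.get(key)
            for key in family}

device = {"Feature1": "yes", "Feature2": None, "Feature3": None}   # row 1: specific model
family = {"Feature1": "no", "Feature2": "240x320", "Feature3": "wml"}  # row 2: family root
print(merge_features(device, family))
# {'Feature1': 'yes', 'Feature2': '240x320', 'Feature3': 'wml'}
```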
Hope that helps.