I would like to know how to correctly format JSON cache rules for MaxScale. I need to store multiple tables for multiple databases and multiple users; how do I format this correctly?
Here, I can store one table from one database and use it for one user:
{
    "store": [
        {
            "attribute": "table",
            "op": "=",
            "value": "databse_alpha.xhtml_content"
        }
    ],
    "use": [
        {
            "attribute": "user",
            "op": "=",
            "value": "'user_databse_1'@'%'"
        }
    ]
}
I need to create a rule to store multiple tables from multiple databases for multiple users, like table1 and table2 being accessed by user1, table3 and table4 being accessed by user2... and so on.
Thanks.
In MaxScale 2.1 it is only possible to give a single pair of store/use values for the cache filter rules.
I took the liberty of opening a feature request for MaxScale on the MariaDB Jira as it appears this functionality was not yet requested.
I think that as a workaround you should be able to create two cache filters, each with a different set of rules, and then in your service use
filters = cache1 | cache2
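For example, a minimal maxscale.cnf sketch of that layout; the section names and rule-file paths here are illustrative, and each rules file would hold one store/use pair like the one in your question:
[cache1]
type=filter
module=cache
rules=/etc/maxscale/cache_rules_user1.json

[cache2]
type=filter
module=cache
rules=/etc/maxscale/cache_rules_user2.json

[My-Service]
type=service
# plus the usual router, servers, user and password parameters
filters=cache1 | cache2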
Note that picking out the table using an exact match, as in your sample above, implies that the statement has to be parsed, which carries a significant performance cost. You'll get much better performance if no matching is needed or if it is performed using regular expressions, as in the sketch below.
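For instance, a rules sketch that matches the statement text with a regular expression instead of naming the table exactly (do verify the attribute and operator names against the cache filter documentation of your MaxScale version):
{
    "store": [
        {
            "attribute": "query",
            "op": "like",
            "value": ".*FROM\\s+databse_alpha\\.xhtml_content.*"
        }
    ]
}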
How can I convert my MySQL table result into a JSON object at the database level?
For example,
SELECT json_array(
    group_concat(json_object('name', name, 'email', email))
)
FROM ....
it will produce the result as
[
    {
        "name": "something",
        "email": "someone@somewhere.net"
    },
    {
        "name": "someone",
        "email": "something@someplace.com"
    }
]
but what I need is to be able to give my own query, which may contain functions, subqueries, etc.,
like in Postgres: select row_to_json(select name,email,getcode(branch) from .....), where I get the whole result as a JSON object.
In MySQL, is there any possibility to do something like this?
select jsonArray(select name,email,getcode(branch) from .....)
I only found in the official MySQL 8 and 5.7 documentation that it supports casting to the JSON type. It includes a JSON_ARRAY function in MySQL 8 and 5.7, and a JSON_ARRAYAGG function in MySQL 8. Please see the full JSON functions reference here.
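For instance, on MySQL 8 (JSON_ARRAYAGG was later also added to 5.7.22), a sketch like the following should produce the array shown above; the users table and its columns are assumptions:
SELECT JSON_ARRAYAGG(
    JSON_OBJECT('name', name, 'email', email)
) AS result
FROM users;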
This means there is no easy built-in MySQL solution to the problem.
Fortunately, our colleagues started a similar discussion here. Maybe you can find your solution there.
For anyone searching for JSON casting of well-defined attributes, the solution is here.
The problem I am trying to solve is to have a MySQL query return the keys within a JSON document whose entries have the value "stage":"Functioning". I've been trying to work this out for a while now but can't find a solution.
It's worth adding as well that there will be several JSON documents within the database table.
One of the JSONs is as follows:
{
    "1493510400": {
        "stage": "Resting",
        "dateAltered": "true"
    },
    "1508716800": {
        "stage": "Functioning",
        "dateAltered": "true"
    },
    "1522713600": {
        "stage": "Functioning",
        "dateAltered": "true",
        "stageAltered": "true"
    },
    "1537315200": {
        "stage": "Functioning",
        "stageAltered": "true"
    },
    "1551916800": {
        "stage": "Resting",
        "stageAltered": "true"
    },
    "1566518400": {
        "stage": "Resting",
        "stageAltered": "true"
    },
    "1581120000": {
        "stage": "Functioning"
    },
    "1595721600": {
        "stage": "Resting"
    }
}
Each level has a timestamp as its key. Within each level, there will always be a value for 'stage'. These are only ever going to be Resting or Functioning. From here, I need to create a query that will return all the keys (timestamps) that have a value "stage":"Functioning".
I've got SELECT JSON_KEYS(field_value) working fine and it returns an array of the timestamps, but when I try adding some WHERE clauses, it fails to return anything.
Some of the queries I've tried are:
SELECT JSON_KEYS(field_value) WHERE JSON_EXTRACT(field_value, '$*.stage') = 'Functioning'
SELECT JSON_KEYS(field_value) WHERE JSON_CONTAINS(field_value, '{"stage":"Functioning"}', '$.stage')
I know I'm probably being an idiot here so apologies if this feels like a waste of time to people but I'm totally stumped right now.
Is what I'm trying to accomplish even possible within MySQL?
My current version of MySQL is 5.7.22
Thanks
I'm afraid your version doesn't support that functionality. If you can upgrade to MySQL 8.0.11, you can use JSON_TABLE, JSON_SEARCH, or something similar, depending on your individual case (honestly, I have no clue what exactly you want to achieve based on your post).
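For illustration, on MySQL 8.0 a sketch along these lines should return the matching timestamps; the table name my_table is an assumption, field_value is taken from your post:
SELECT jt.ts
FROM my_table t,
    -- expand the object's keys (the timestamps) into rows
    JSON_TABLE(
        JSON_KEYS(t.field_value),
        '$[*]' COLUMNS (ts VARCHAR(20) PATH '$')
    ) AS jt
-- keep only the keys whose nested object has stage = 'Functioning'
WHERE JSON_UNQUOTE(JSON_EXTRACT(
    t.field_value,
    CONCAT('$."', jt.ts, '".stage')
)) = 'Functioning';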
I have a Cassandra DB with data that has a TTL of X hours for every column value, and this needs to be pushed to an Elasticsearch cluster in real time.
I have seen past posts on StackOverflow that advise using tools such as Logstash or pushing data directly from the application layer.
However, how can one preserve the TTL of the data once it is imported into ES version >= 5.0?
There was once a field called _ttl, which was deprecated in ES 2.0 and removed in ES 5.0.
As of ES 5, there are now two main ways of preserving the TTL of your data. First, make sure to create a TTL field in your ES documents that is set to the creation date of your row in Cassandra plus the TTL in seconds. So if in Cassandra you have a record like this:
INSERT INTO keyspace.table (userid, creation_date, name)
VALUES (3715e600-2eb0-11e2-81c1-0800200c9a66, '2017-05-24', 'Mary')
USING TTL 86400;
Then you should export the following document to ES:
{
    "userid": "3715e600-2eb0-11e2-81c1-0800200c9a66",
    "name": "Mary",
    "creation_date": "2017-05-24T00:00:00.000Z",
    "ttl_date": "2017-05-25T00:00:00.000Z"
}
Then you can either:
A. Use a cron job that will regularly perform a delete-by-query based on your ttl_date field, i.e. call the following command from your cron:
curl -XPOST localhost:9200/your_index/_delete_by_query -d '{
    "query": {
        "range": {
            "ttl_date": {
                "lt": "now"
            }
        }
    }
}'
B. Or use time-based indices and insert each document into an index matching its ttl_date field. For instance, the above document would be inserted into the index named your_index-2017-05-25. Then, with the Curator tool, you can easily delete indices that have expired.
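For reference, a sketch of a Curator action file (Curator 4/5 YAML format) for that cleanup, assuming daily indices named your_index-YYYY-MM-DD:
# delete_expired.yml - run e.g. daily: curator --config config.yml delete_expired.yml
actions:
  1:
    action: delete_indices
    description: "Delete your_index-* indices older than one day"
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: your_index-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y-%m-%d'
        unit: days
        unit_count: 1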
I have a table with 10 columns. Each row in the table is originally a JSON object that I receive in this format:
{"mainEntity":
"atlasId": 1234567
"calculatedGeography": False
"calculatedIndustry" : False
"geography": "G:6J"
"isPublic" = False
"name" = XYZ, Inc
"permId" = 12345678987
primaryRic=""
type=corporation
}
I am using JDBC and a MySQL driver. The problem is that my insert statements look very long and ugly (see the example below) because of the high number of columns. Is there a way to solve this, or is this the only way? Also, is there a way to insert multiple records at the same time with JDBC?
"INSERT INTO table_name VALUES(1234567, False, False, "G:6J", False, "XYZ, Inc", 12345678987, "", corporation"
Are you only wondering about style, or also about performance? Always use prepared statements when you make inserts; this will unclutter your code and make sure the data types are all correct.
If it is about speed, you might try transactions, or even LOAD DATA INFILE. The LOAD DATA method requires you to make a temporary CSV file that is loaded directly into the database.
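For the multiple-records part: with JDBC you can combine a prepared statement with batching inside a transaction, which keeps the code short and sends many rows per round trip. A minimal sketch, with column names guessed from the JSON above:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchInsert {
    // Column names are assumptions derived from the JSON fields above.
    private static final String SQL =
        "INSERT INTO table_name (atlas_id, calculated_geography, calculated_industry, "
      + "geography, is_public, name, perm_id, primary_ric, type) "
      + "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)";

    public static void insertAll(Connection conn, List<Object[]> rows) throws SQLException {
        conn.setAutoCommit(false);                 // group the whole batch into one transaction
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            for (Object[] row : rows) {
                for (int i = 0; i < row.length; i++) {
                    ps.setObject(i + 1, row[i]);   // JDBC parameters are 1-based
                }
                ps.addBatch();                     // queue this row instead of sending it now
            }
            ps.executeBatch();                     // send all queued rows at once
            conn.commit();
        } catch (SQLException ex) {
            conn.rollback();                       // undo the partial batch on failure
            throw ex;
        }
    }
}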
I'm attempting to write a stored procedure in MySQL that will take a single parameter and then check that parameter for any text that contains 'DROP', 'INSERT', 'UPDATE', 'TRUNCATE', etc., pretty much anything that isn't a SELECT statement. I know it's not ideal, but unfortunately the SELECT statement is being built client-side, and to prevent some kind of man-in-the-middle change, it's just an added level of security on the server.
I've tried several ways of accomplishing it, but it's not working for me. I've come up with things similar to this:
CREATE PROCEDURE `myDatabase`.`execQuery` (IN INC_query TEXT)
BEGIN
    # Check whether the incoming SQL query contains INSERT, DROP, TRUNCATE,
    # UPDATE or SET as an added measure of security.
    # Note: LOCATE(substr, str) takes the needle first, then the haystack.
    IF (LOCATE('drop', LOWER(INC_query)) OR
        LOCATE('truncate', LOWER(INC_query)) OR
        LOCATE('insert', LOWER(INC_query)) OR
        LOCATE('update', LOWER(INC_query)) OR
        LOCATE('set', LOWER(INC_query)))
    THEN
        SELECT * FROM database.otherTable; # generic output meaning the query was rejected; purely for testing, to be removed later
    ELSE
        SET @command = INC_query;
        PREPARE statement FROM @command;
        EXECUTE statement;
        DEALLOCATE PREPARE statement;
    END IF;
END
In my attempts, even if the parameter contains any of my "filterable" words, it still executes the query. Any help would be appreciated, or if there's a better way of doing this, I'm all ears.
What if you have a column called updated_at or settings? You can't possibly expect this to work as you intend. This kind of technique is the reason there are so many references to clbuttic on the web.
You're really going to make a mess of things if you go down this road.
The only reasonable way to approach this is to send in the parameters for the kind of query you want to construct, then construct the query in your application using a vetted white list of allowed terms. An example expressed in JSON:
{
    "select" : {
        "table" : "users",
        "columns" : [ "id", "name", "DROP TABLE users", "SUM(date)", "password_hash" ],
        "joins" : {
            "orders" : [ "users.id", "orders.user_id" ]
        }
    }
}
You just need to create a query constructor that emits this kind of structure, and another that converts it back into a valid query. You might want to list only particular columns as queryable, as certain columns might be secret or internal-only, not to be disclosed, like password_hash in this example.
You could also allow for patterns like (SUM|MIN|MAX|AVG)\((\w+)\) to capture specific grouping operations or JOIN conditions. It depends on how far you want to take this.
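As a sketch of that vetting step (all names here are illustrative, and a real builder would also handle joins and the function patterns mentioned above):
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class QueryBuilder {
    // White list of columns the application is willing to expose, per table.
    private static final Map<String, Set<String>> ALLOWED = Map.of(
        "users", Set.of("id", "name", "email"),
        "orders", Set.of("id", "user_id", "total")
    );

    // e.g. build("users", List.of("id", "name", "DROP TABLE users", "password_hash"))
    // returns "SELECT id, name FROM users"
    public static String build(String table, List<String> columns) {
        Set<String> allowed = ALLOWED.get(table);
        if (allowed == null) {
            throw new IllegalArgumentException("Unknown table: " + table);
        }
        List<String> vetted = new ArrayList<>();
        for (String col : columns) {
            if (allowed.contains(col)) {   // keep only white-listed terms;
                vetted.add(col);           // "DROP TABLE users" and "password_hash"
            }                              // are silently dropped
        }
        if (vetted.isEmpty()) {
            throw new IllegalArgumentException("No permitted columns requested");
        }
        return "SELECT " + String.join(", ", vetted) + " FROM " + table;
    }
}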