I have joined a new company, where I observed the use case below.
Use case: a table has around 500 GB of data. The data is user action events, one for each and every user activity. The purpose is to analyse activity counts for different permutations and combinations over any given date range. So the data is additionally fed into Elastic (and Lucene in a different but similar use case).
My understanding is that for this kind of scenario the DB by itself should be sufficient. But when I query the DB for a specific permutation and combination over a given date range, it is very slow and most of the time it times out.
However, when I fetch the same combination with Elastic (or Lucene), it is much faster. There is no full-text search support required here.
I am not sure what is causing Elastic (or Lucene) to be much faster than a SQL-based DB even for regular (not full-text) search.
What could be the probable reason? I can think of two possibilities:
Elastic (or Lucene) keeps the data in compressed form, so maybe that makes it quicker to search?
Elastic may achieve parallelism by keeping the data in multiple shards by default. But in the Lucene case I do not even see any parallelism.
So I want to create a table in the frontend where I will list every single user. The thing is that the tables are relational and I have to get data from multiple tables in order to fulfill my goal.
Now here comes my question (keep in mind I have a MySQL database) :
Which method is better in the long run:
Generate joined queries that fetch all the data from each table where a user has any information (it outputs ~80 columns per row and only 15 of them are needed)
Fetch the data that I need with multiple queries and then just "stick" the values together and output them (15 columns, all of them needed, but I have to do extra work)
I would suggest you go for a third option:
Generate joined queries that fetch only the 15 necessary columns for your front end. That would be the most efficient way.
If you are facing challenges with joining the tables, you can share the table structures here with sample data, your desired output, and your query. We can try to help you achieve your goal.
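As a rough sketch of that third option (the schema, table names, and columns here are invented for illustration, not taken from the question), a single join that selects only the columns the front end needs might look like this:

```python
import sqlite3

# In-memory demo schema; real table/column names are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT, created_at TEXT);
CREATE TABLE profiles (user_id INTEGER, city TEXT, phone TEXT, bio TEXT);
INSERT INTO users VALUES (1, 'Alice', 'alice@example.com', '2020-01-01');
INSERT INTO profiles VALUES (1, 'Berlin', '555-0100', 'hi');
""")

# Select only the columns the front end actually needs,
# instead of SELECT * across every joined table.
rows = conn.execute("""
    SELECT u.name, u.email, p.city
    FROM users u
    JOIN profiles p ON p.user_id = u.id
""").fetchall()
```

The point is that the column list lives in the query, so the database never ships the ~65 unused columns over the wire.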
This is a bit long for a comment.
I don't understand your first option. Why would you be selecting columns that you don't need? If there are 15 columns that you specifically want, then select those columns and nothing else.
In general, it is faster to have the database do most of the work. It can take advantage of its optimizer to produce the best execution plan that it can.
From experience with a MySQL server on embedded hardware:
If the hardware can do it and has enough resources, you let the database server run its course, as it can use its optimizer.
But if the server hardware lags on some fronts, you transport all the data to the client and let it run JavaScript on all the returned data.
The same goes for the bandwidth of the internet connection: if it is slow, you want to transport fewer rows, because the user will notice the delay. Even old smartphones have more than enough CPU power and can handle with ease everything you throw at them.
In short, there is no simple answer; you have to check the server hardware and the usual bandwidth offered, and then program a solution that works best.
A simple Rule of Thumb:
Fewer round-trips to the database server is usually the faster alternative.
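To make the rule of thumb concrete (the schema below is hypothetical), compare the N+1 pattern of one query per row against a single joined query that lets the database do the aggregation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (user_id INTEGER, total REAL);
INSERT INTO users VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO orders VALUES (1, 10.0), (1, 5.0), (2, 7.5);
""")

# N+1 pattern: one round-trip per user.
totals_slow = {}
for (uid, name) in conn.execute("SELECT id, name FROM users").fetchall():
    (t,) = conn.execute(
        "SELECT SUM(total) FROM orders WHERE user_id = ?", (uid,)
    ).fetchone()
    totals_slow[name] = t

# Single round-trip: the database joins and aggregates in one query.
totals_fast = dict(conn.execute("""
    SELECT u.name, SUM(o.total)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.name
"""))

assert totals_slow == totals_fast  # same result, one query instead of N+1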
I have never used apc_store() before, and I'm also not sure about whether to free query results or not. So I have these questions...
In a MySQL Query Cache article here, it says "The MySQL query cache is a global one shared among the sessions. It caches the select query along with the result set, which enables the identical selects to execute faster as the data fetches from the in memory."
Does using free_result() after a select query negate the caching spoken of above?
Also, if I want to set variables and arrays obtained from the select query for use across pages, should I save the variables in memory via apc_store() for example? (I know that can save arrays too.) And if I do that, does it matter if I free the result of the query? Right now, I am setting these variables and arrays in an included file on most pages, since they are used often. This doesn't seem very efficient, which is why I'm looking for an alternative.
Thanks for any help/advice on the most efficient way to do the above.
MySQL's "Query cache" is internal to MySQL. You still have to perform the SELECT; the result may come back faster if the QC is enabled and usable in the situation.
I don't think the QC is what you are looking for.
The QC is going away in newer versions. Do not plan to use it.
In PHP, consider $_SESSION. I don't know whether it is better than apc_store for your use.
Note also, anything that is directly available in PHP constrains you to a single webserver. (This is fine for small to medium apps, but is not viable for very active apps.)
For scaling, consider storing a small key in a cookie, then looking up that key in a table in the database. This provides for storing arbitrary amounts of data in the database with only a few milliseconds of overhead. The "key" might be something as simple as a "user id" or "session number" or "cart number", etc.
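A minimal sketch of that cookie-key pattern (the table name and key format are assumptions): the cookie holds only a small random key, and the payload of arbitrary size lives in a database table:

```python
import json
import secrets
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (key TEXT PRIMARY KEY, data TEXT)")

def save_session(data: dict) -> str:
    """Store arbitrary data server-side; return the small key for the cookie."""
    key = secrets.token_hex(16)
    conn.execute("INSERT INTO sessions VALUES (?, ?)", (key, json.dumps(data)))
    return key

def load_session(key: str) -> dict:
    """Look the key up in the database; any webserver in the pool can do this."""
    row = conn.execute("SELECT data FROM sessions WHERE key = ?", (key,)).fetchone()
    return json.loads(row[0]) if row else {}

cookie = save_session({"user_id": 42, "cart": [1, 2, 3]})
restored = load_session(cookie)
```

Because the state lives in the shared database rather than in per-process PHP memory, this works across multiple webservers.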
We have a huge Cosmos DB container with billions of rows and almost 300 columns. The data is partitioned and modeled for the way we query it most of the time.
For example: the User table is partitioned by userId, which is why the query below works fine.
SELECT * FROM User WHERE userId = "user01234"
But in some cases we need to query the data differently, in a way that requires sorting.
For example: get data from the User table using the user post and the date of the post.
SELECT * FROM User WHERE userPostId = "P01234" ORDER BY date OFFSET 0 LIMIT 100
This query takes a lot of time because of the size of the data, and the data is not partitioned to match query 2 (user post).
My question is: how can we make query 2 and other similar queries faster when the data is not partitioned accordingly?
Option 1: "Create a separate collection which is partitioned as per query 2."
This will make the query faster, but for any new query we will end up creating a new collection, which means duplicating billions of records. [Costly option]
Option 2: "Build Elasticsearch on top of the DB?" This is a time-consuming option and may be overkill for this slow-query problem.
Is there any other option that can be used? Let me know your thoughts.
Thanks in advance!
Both options are expensive. The key is deciding which is cheaper, including running the cross-partition query. This will require you to cost each of these options out.
For the cross-partition query, capture the RU charge in the response object so you know the cost of it.
For change feed, this will have an upfront cost as you run it over your existing collection, but whether that cost remains high depends on how much data is inserted or updated each month. Calculating the cost to populate your second collection will take some work. You can start by measuring the RU Charge in the response object when doing an insert then multiply by the number of rows. Calculating how much throughput you'll need will be a function of how quickly you want to populate your second collection. It's also a function of how much compute and how many instances you use to read and write the data to the second collection.
Once the second collection is populated, Change Feed will cost 2 RU/s to poll for changes (btw, this is configurable) and 1 RU/s to read each new item. The cost of inserting data into a second collection costs whatever it is when you measured it earlier.
If this second query doesn't get run that often and your data doesn't change that much, then change feed could save you money. If you run this query a lot and your data changes frequently too, change feed could still save you money.
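A back-of-the-envelope sketch of that trade-off (all rates, RU charges, and counts below are hypothetical placeholders, not measured values; measure your own RU charges as described above):

```python
def change_feed_monthly_ru(items_changed, insert_ru, poll_ru_s=2, read_ru_per_item=1):
    """Rough RU total for keeping a second collection in sync for one month."""
    seconds_per_month = 30 * 24 * 3600
    polling = poll_ru_s * seconds_per_month          # continuous polling cost
    reads = read_ru_per_item * items_changed          # reading each changed item
    writes = insert_ru * items_changed                # writing it to the 2nd collection
    return polling + reads + writes

def cross_partition_monthly_ru(query_ru, queries_per_month):
    """Cost of just keeping the slow cross-partition query."""
    return query_ru * queries_per_month

# Hypothetical numbers: 1M changed items/month at 10 RU per insert,
# vs. a 5,000 RU cross-partition query run 2,000 times a month.
sync_cost = change_feed_monthly_ru(1_000_000, insert_ru=10)
query_cost = cross_partition_monthly_ru(5_000, 2_000)
```

Whichever number comes out smaller for your real measured charges tells you which option is cheaper.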
With regards to Elastic Search or Azure Search, I generally find this can be more expensive than keeping the cross-partition query or change feed. Especially if you're doing it to just answer a second query. Generally this is a better option when you need true free text query capabilities.
A third option you might explore is using Azure Synapse Link and then run both queries using SQL Serverless or Spark.
Some other observations.
Unless you need all 300 properties in these queries you run, you may want to consider shredding these items into separate documents and storing as separate rows. Especially if you have highly asymmetric update patterns where only a small number of properties get frequently updated. This will save you a ton of money on updates because the smaller the item you update, the cheaper (and faster) it will be.
The other thing I would suggest is to look at your index policy and exclude every property that is not used in the where clause for your queries and include properties that are. This will have a dramatic impact on RU consumption for inserts. Also take a look at composite index for your date property as this has a dramatic impact on queries that use order by.
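As a sketch of that index-policy advice (the property paths come from the queries in the question; treat the exact policy as an assumption to adapt), an indexing policy that includes only the queried properties and adds a composite index to support the ORDER BY might look like:

```json
{
  "indexingMode": "consistent",
  "includedPaths": [
    { "path": "/userId/?" },
    { "path": "/userPostId/?" },
    { "path": "/date/?" }
  ],
  "excludedPaths": [
    { "path": "/*" }
  ],
  "compositeIndexes": [
    [
      { "path": "/userPostId", "order": "ascending" },
      { "path": "/date", "order": "descending" }
    ]
  ]
}
```

Excluding the other ~295 properties means each insert only pays to index the three paths you actually filter and sort on.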
I use MySQL as my main database and I sync some data to Elasticsearch to make use of features like fuzzy search and aggregations. However, this problem can apply to any pairing of relational and non-relational databases.
When a user searches for something, I query Elastic, get ids (primary keys in MySQL) and make another query to the MySQL database, where I filter by the ids that were returned from Elastic. I use this approach because you often need to load additional data from the relational database, and it would be hell to maintain these relations inside document-based Elastic (e.g. loading a user with a comment).
The problem is that the same filters will not be applied to the Elastic query and the MySQL query. In the above example, what if you need to filter comments by some user param? That filter will be applied to the MySQL query, but not to Elastic. If the same filters aren't applied, pagination will mismatch: the 2nd page in MySQL can be the 4th in Elastic. If I take all of the ids from Elastic (no pagination), I am afraid of long response times and clusters failing, plus you can't get more than 10K records from Elastic without the scroll API.
I need a conceptual solution here, not actual query examples. Feel free to suggest a totally different approach altogether. Also, I don't need a perfect pagination match, since MySQL will do the pagination anyway. If Elastic needs to fetch more records, that's fine; I just don't want to cause too heavy a load.
I'm afraid there is no general solution for the problem you are describing. It varies with your response-time expectations, size of data, etc.
For example,
If you can ensure that one side of the join data will be much smaller, you could reverse the join direction: first run the query on MySQL and then do an id-based terms search in ES.
Consider using a database's embedded search (like Postgres full-text search), depending on how complex your queries are and which other ES features you are leveraging.
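A conceptual sketch of that reversed join direction (client libraries and the field names are assumptions; this only shows the shape of the second query): apply the relational filter in MySQL first, then restrict the ES search to the surviving ids with a terms filter, so both stores agree on the candidate set and pagination happens in one place:

```python
# Step 1 (MySQL, result stubbed here): apply the relational filter first
# and collect the matching primary keys, e.g.
# SELECT id FROM comments WHERE <user param filter>
ids_from_mysql = [101, 205, 317]

def build_es_query(ids, text):
    """Build an ES request body: full-text match, filtered to the MySQL ids."""
    return {
        "query": {
            "bool": {
                "must": [{"match": {"body": text}}],
                "filter": [{"terms": {"_id": [str(i) for i in ids]}}],
            }
        }
    }

# Step 2 (Elasticsearch): this body would be sent via your ES client's
# search call; ES then paginates over an already-consistent candidate set.
q = build_es_query(ids_from_mysql, "great product")
```

This only stays cheap while the MySQL side of the filter yields a modest number of ids; for huge id sets the original concern about load still applies.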
When setting up a MySQL / ElasticSearch combo, is it better to:
Completely sync all model information to ES (even the non-search data), so that when a result is found, I have all its information handy.
Only sync the searchable fields, and then when I get the results back, use the id field to find the actual data in the MySQL database?
The Elasticsearch model of data prefers non-normalized data, usually. Depending on the use case (large amount of data, underpowered machines, too few nodes etc) keeping relationships in ES (parent-child) to mimic the inner joins and the like from the RDB world is expensive.
Your question is very open-ended and the answer depends on the use-case. Generally speaking:
avoid mimicking the exact DB tables as ES indices, along with their relationships
the advantage of keeping everything in ES is that you don't need to update both mechanisms at the same time
if your search-able data is very small compared to the overall amount of data, I don't see why you couldn't synchronize just the search-able data with ES
try to flatten the data in ES and resist any impulse of using parent/child just because this is how it's done in MySQL
I'm not saying you cannot use parent/child. You can, but make sure you test this before adopting this approach and make sure you are ok with the response times. This is, anyway, a valid advice for any kind of approach you choose.
Elasticsearch is a search engine. I would advise you not to use it as a database system. I suggest you only index the search data plus a unique id from your database, so that you can retrieve the results from MySQL using the unique key returned by Elasticsearch.
This way you'll be using both applications for what they're intended. Elasticsearch is not the best for querying relations, and you'll have to write a lot more code to operate on related data than if you simply use MySQL for it.
Also, you don't want to tie up your persistence layer with search layer. These should be as independent as possible, and change in one should not affect the other, as much as possible. Otherwise, you'll have to update both your systems if either has to change.
Querying MySQL on some IDs is very fast, so you can use it for that and leave the slow part (querying on full text) to Elasticsearch.
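A minimal sketch of that division of labour (the product schema and hit ids are invented): Elasticsearch returns only ranked ids, and the database returns the full rows for those ids by primary key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL, stock INTEGER);
INSERT INTO products VALUES
    (1, 'red shoe', 49.9, 3),
    (2, 'blue shoe', 59.9, 0),
    (3, 'hat', 19.9, 7);
""")

# Pretend these ids came back from an Elasticsearch full-text query,
# already in relevance order.
hit_ids = [3, 1]

# Fetch the authoritative data by primary key (fast in MySQL/SQLite)...
placeholders = ",".join("?" * len(hit_ids))
rows = conn.execute(
    f"SELECT id, name, price, stock FROM products WHERE id IN ({placeholders})",
    hit_ids,
).fetchall()

# ...then re-apply the search engine's ranking, since SQL's IN
# does not preserve the order of the id list.
by_id = {r[0]: r for r in rows}
ordered = [by_id[i] for i in hit_ids]
```

The re-ordering step matters: without it the page would show results in storage order rather than relevance order.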
Although it depends on the situation, I would suggest you go with #2:
Faster indexing: we only fetch the searchable data from the DB and index that into ES, compared to fetching and indexing everything
Smaller storage size: since the indexed data is smaller than in #1, it's easier to back up, restore, recover, and upgrade your ES in production. It will also keep your storage size small as your data grows, and you can consider using SSDs to enhance performance at lower cost.
In general, a search app searches on some fields but shows the user all the relevant data. E.g. searching for products but showing pricing/stock info on the result page, which is only available in the DB. So it's natural to have a second step that queries the DB for the extra info and combines it with the search results for display.
Hope it helps.