I am wondering if it is possible to get a random set of IDs from a table while guaranteeing that one particular ID is included.
So say I have 200 rows and I limit my script's output to 20; one of those rows must be the row with id 2 (for example).
Not sure if this is possible; I would appreciate any help.
SELECT id, IF(id = 2, -1, RAND()) AS sort FROM my_table ORDER BY sort LIMIT 20;
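The IF() expression forces id 2 to sort first, since -1 sorts before any RAND() value (which lies in [0, 1)), while the remaining rows come back in random order. An equivalent variant pins the required row with a UNION, which makes the guarantee explicit and lets you swap the random part for one of the faster sampling techniques from the thread linked below; the table name and the LIMIT of 19 (20 rows minus the pinned one) come from the question's numbers:

-- Guarantee the row with id 2, then fill up with 19 random other rows.
(SELECT id FROM my_table WHERE id = 2)
UNION ALL
(SELECT id FROM my_table WHERE id <> 2 ORDER BY RAND() LIMIT 19);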
Not the final solution, but maybe this thread helps you:
MySQL select 10 random rows from 600K rows fast
By the way: I'd handle the randomization within the script (e.g. PHP) with cached (e.g. Memcached) data sets. But that depends on your goal.
I am doing some programming practice and going through dynamic programming theory. We always come across two properties:
Optimal Substructure (OSS)
Overlapping Subproblems (OSP)
Any optimization problem with these two characteristics can be solved using DP techniques (memoization or tabulation).
But we know it takes a lot of practice to identify which kind of problem we are facing.
Let's say we have 4 types:

TYPE 1  | TYPE 2  | PROBLEMS
OSS     | OSP     | all DP problems
NON-OSS | OSP     | ?
OSS     | NON-OSP | ?
NON-OSS | NON-OSP | ?
For example: is there any problem that appears to have non-overlapping subproblems but still has the optimal substructure characteristic?
I need your help listing problems of each type. This will help me, and whoever reads this, get better at identifying and then solving such problems.
If you have come across any problem (LeetCode, CodeChef, SPOJ, etc.) that you think fits a '?' category, please comment.
Also, please share any link/source for learning more about these types based on OSS/OSP.
It might look like a simple question already answered countless times, but I could not find the optimal way (using some DB).
I have a list of a few thousand keywords (let's say abusive words). Whenever someone posts a message (a long sentence or a paragraph), I want to check whether the given sentence contains any of the keywords, so that I can block the user or take other actions.
I am looking for a DB/schema that can solve the above problem and respond within a few milliseconds (<15 ms).
There are many DBs that solve the reverse of the above problem: given the keywords, find documents containing them (text search).
Try ClickHouse for your workload.
According to docs:
multiMatchAny(...) returns 0 if none of the regular expressions are matched and 1 if any of the patterns matches. It uses the hyperscan library. For patterns that search for substrings in a string, it is better to use multiSearchAny since it works much faster.
The length of any of the haystack strings must be less than 2^32 bytes.
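A minimal sketch of how that check could look in ClickHouse; the table posts, the column message, and the keyword list are assumptions for illustration, not from the question:

-- Returns 1 if the message contains any of the keywords, 0 otherwise.
SELECT multiSearchAny(lowerUTF8(message), ['badword1', 'badword2', 'badword3']) AS is_abusive
FROM posts;

In practice you would generate the array argument from your list of a few thousand keywords and run the check per incoming message.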
I am trying to decide which of two structures will be more efficient for MySQL to handle my data. I can either:
1: Create around 200 tables, each having around 30 rows & 5 columns.
2: Create 1 table, having around 6000 rows & 5 columns.
I am using Laravel for this project and Eloquent will be handling this. Does anybody have any opinions on this matter? I appreciate any/all responses.
Option 2.
For such low row counts, the overhead of creating and joining 200(!) tables, both in programming effort and computation, far outweighs any benefit over the single-table approach. Additionally, MySQL will attempt to cache the entire 6000-row table in RAM, assuming you're not storing massive BLOBs.
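As a hedged illustration of option 2, the 200 logical tables collapse into one table with a discriminator column; every name here is a placeholder, since the question doesn't describe the actual columns:

CREATE TABLE items (
  id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  group_id SMALLINT UNSIGNED NOT NULL, -- which of the ~200 groups this row belongs to
  col_a VARCHAR(255),
  col_b VARCHAR(255),
  col_c VARCHAR(255),
  INDEX idx_group (group_id)
);

-- Fetching one group's ~30 rows is then a cheap indexed lookup:
SELECT * FROM items WHERE group_id = 42;

Eloquent can scope queries the same way with a where('group_id', ...) clause on a single model, instead of 200 model classes.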
I am new to KNIME and I have a question about the GroupBy node.
I have a data set representing a Shopping Cart, with the following columns
Session Number (integer)
CustomerID (String)
Start Hour
Duration
ClickedProducts
AgeAddress
LastOrder
Payments
CustomerScore
Order
where Order is a char meaning Y = purchase or N = non-purchase.
I saw in my data set that a Session Number can have more than one row, so I used the GroupBy node and grouped by Session Number, but in the resulting table I only see the column I grouped by.
I would like some advice on whether I have to aggregate the other columns with another node.
Thank you
What exactly is your question? Whether there is any KNIME example similar to this problem? I don't know of any.
The grouping and the prediction can of course be done in KNIME. Use the GroupBy node to group by CustomerID and Session Number; values of the other fields can be aggregated in various ways. Then use the Partitioning node to split your data into a training and a test set. Next, use a learner, e.g. the Decision Tree Learner node, to train a model on the training data, and the Decision Tree Predictor node to apply the trained model to the test data. Finally, use the Scorer node to calculate accuracy and other quality measures. Of course you can also do cross-validation in KNIME to score your models.
Hope this helps.
I am new to SQL query optimization and I would like to know if anyone can suggest a profiling and optimization tool that I can use. I am trying to optimize queries running on MySQL.
Thanks for any help.
Learn to use and understand the EXPLAIN command.
Turn on the slow query log, and log queries not using an index.
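For reference, a sketch of enabling this at runtime (these are real MySQL system variables; the one-second threshold is just an example):

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1; -- log queries taking longer than 1 second
SET GLOBAL log_queries_not_using_indexes = 'ON';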
Well, the first thing one should do is have MySQL describe your queries through the DESC command. This will allow you to see a detailed execution plan for the query. You should especially be interested in the columns describing what keys are used, as proper key usage can help a lot.
The way to describe a query is to simply prefix it with the DESC keyword. As an example:
DESC SELECT * FROM user WHERE name = 'foo';
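If the plan shows no usable key for the lookup on name, adding an index is the usual next step (assuming name is a plain column on the user table):

ALTER TABLE user ADD INDEX idx_user_name (name);

Re-running the DESC statement afterwards should show the new index in the possible_keys and key columns of the plan.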