I have several already computed benchmark results (10-fold CV each) in which the same learners were applied to the same tasks. I would like to merge these in the sense of a 5-times repeated 10-fold CV and then analyze the resulting average performances. At first I thought I could use mergeBenchmarkResults, but this mlr function does not accept identical learner-task combinations. Can you think of another convenient method to average my CVs? I would like to avoid recalculating with a RepCV resampling strategy because of the long computation time.
Best regards and thanks a lot,
Christian
Use mergeBenchmarkResults() with different IDs for the same learner-task combinations. These IDs need to be set when the learners are created, i.e. they are the learner IDs.
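A minimal sketch of that approach, assuming the repetitions are run as separate benchmark() calls; the task, learner, and ID names below are only examples. The key point is that each repetition's learner carries its own id, so mergeBenchmarkResults() no longer sees identical learner-task combinations (already computed results can only be merged if their learner IDs already differ in this way).

```r
library(mlr)

# Same underlying learner, but a distinct ID per repetition.
lrn.rep1 <- makeLearner("classif.rpart", id = "rpart.rep1")
lrn.rep2 <- makeLearner("classif.rpart", id = "rpart.rep2")

# One 10-fold CV per repetition (each call uses different random splits).
bmr1 <- benchmark(lrn.rep1, iris.task, cv10)
bmr2 <- benchmark(lrn.rep2, iris.task, cv10)

# Merge and inspect the aggregated performances of both repetitions;
# averaging over the repetitions can then be done on this data frame.
merged <- mergeBenchmarkResults(list(bmr1, bmr2))
getBMRAggrPerformances(merged, as.df = TRUE)
```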
I am doing a regression on the Big Five personality traits and how birth order affects those traits. First I am trying to build 5 variables, based on surveys, that capture those traits. I have thought about creating dummies for each question in a category (trait) and then taking the average, but some of the questions are highly correlated, so the weights would be wrong.
I have run a principal components analysis, which gives me four components with an eigenvalue over one. The problem is that none of them accounts for more than 40% of the variance.
Is there some way that I can merge the four into one variable? It is the dependent variable, so there can only be one.
Otherwise do you have another idea of how the index can be constructed?
It doesn't really make sense to merge your principal components as they are by definition orthogonal/uncorrelated.
I can't advise on how an index could be constructed as I think this requires subject matter expertise, but you might want to consider using a multivariate technique that allows for multiple response variables. See this answer for a possible approach (assuming your response variables are ordinal).
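As a quick illustration of the orthogonality point (using simulated data, not the survey data): the principal component scores returned by a PCA are pairwise uncorrelated by construction, so averaging them does not yield a coherent single scale.

```r
# Simulated stand-in for 10 survey items; the PC scores come out uncorrelated.
set.seed(1)
x  <- matrix(rnorm(200 * 10), ncol = 10)
pc <- prcomp(x, scale. = TRUE)
round(cor(pc$x[, 1:4]), 3)   # off-diagonal correlations are ~0
```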
In the physics lab I work in, at the beginning of each experiment (of which we run several per minute) we perform a series of INSERTs into a MariaDB database. One of the tables has a few hundred columns - each corresponding to a named variable - and serves as a log of the parameters used during that run. For example, one variable is the laser power used during a particular step of the experiment.
Over time, experimenters add new variables to parametrize new steps of the experiment. Originally my code handled this by simply adding new columns to the table - but as the number of rows in the table grew beyond around 60,000, the time it took to add a column became unusably long (over a minute).
For now, I have circumvented the problem by pre-defining a few hundred extra columns which can be renamed as new variables are needed. At the rate at which variables are added, however, this will only last our lab (or the other labs that use this software) a few years before table maintenance is required. I am wondering whether anyone can recommend a different architecture or different platform that would provide a natural solution to this "number of columns" problem.
I am guessing you are running various different types of experiments, and that is why you need an ever-increasing number of variables? If that is the case, you may want to consider one of the following:
a separate table for each type of experiment,
separate tables that hold each experiment type's parameter values (and reference the experiment in the primary table),
a simpler "experiment parameters" table with three (or more, depending on the complexity of the values) columns: the experiment, the parameter, and the parameter value.
My preference would be one of the first two options; the third tends to make the data more complicated to analyze and maintain than the flexibility is worth. A rough sketch of the second option is shown below.
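A minimal sketch of the second option, assuming MariaDB; all table and column names (experiment, cooling_step_params, laser_power_mw, etc.) are made up for illustration.

```sql
-- Hypothetical schema for the second option; names are illustrative only.
CREATE TABLE experiment (
    experiment_id   BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    experiment_type VARCHAR(64) NOT NULL,
    started_at      DATETIME    NOT NULL
);

-- One narrow parameter table per experiment type, referencing the main table.
-- A new variable for one experiment type only touches that type's (small) table.
CREATE TABLE cooling_step_params (
    experiment_id  BIGINT UNSIGNED PRIMARY KEY,
    laser_power_mw DOUBLE,
    detuning_mhz   DOUBLE,
    FOREIGN KEY (experiment_id) REFERENCES experiment (experiment_id)
);
```

Because each per-type table stays comparatively small and narrow, the occasional ALTER TABLE to add a genuinely new column should also remain fast.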
It would seem that EAV is best for your scenario. I would normally steer away from it, but in this case it seems to make sense. I would keep the last n experiments' data in the main table(s) and move the older ones off to an archive table. Archiving away data you don't need at the moment speeds things up, while it remains available via joins to the larger tables.
For an introduction to EAV, see the web document by Rick James (a Stack Overflow user). Also, have a look at the EAV questions already asked on this site.
Every time I look at EAV I wonder why in the world anyone would use it to program against. But imagining the academic/experimental/ad-hoc environment that you must work in, I can't help but think it might be the best fit for you. For a high-level exploratory discussion, see the question entitled Should I use EAV model?
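For concreteness, a minimal EAV sketch along those lines; the table and column names are assumptions, not taken from the original schema.

```sql
-- Hypothetical EAV table: one row per (experiment, parameter)
-- instead of one column per parameter.
CREATE TABLE experiment_parameter (
    experiment_id BIGINT UNSIGNED NOT NULL,
    param_name    VARCHAR(64)     NOT NULL,
    param_value   DOUBLE          NULL,   -- or VARCHAR if values are mixed types
    PRIMARY KEY (experiment_id, param_name)
);

-- A brand-new variable is just another row; no ALTER TABLE is needed.
INSERT INTO experiment_parameter (experiment_id, param_name, param_value)
VALUES (12345, 'laser_power_step3_mw', 42.5);

-- Reading back one run's parameters:
SELECT param_name, param_value
FROM experiment_parameter
WHERE experiment_id = 12345;
```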
Suppose you have a function/method that uses two metrics to return a value, essentially giving a 2D matrix of possible values. Is it better to use logic (nested if/switch statements) to choose the right value, or to just build that matrix (as an Array/Hash/Dictionary/whatever) so that the return value becomes simply a matter of performing a lookup?
My gut feeling says that for an M⨉N matrix, relatively small values of both M and N (say ≤ 3) make logic acceptable, but for larger values it would be more efficient to just build the matrix.
What are general best practices for this? What about for an N-dimensional matrix?
The decision depends on multiple factors, including:
Which option makes the code more readable and hence easier to maintain
Which option performs faster, especially if the lookup happens squillions of times
How often do the values in the matrix change? If the answer is "often", then it is probably better to externalise the values out of the code and put them in a matrix stored in a way that can be edited simply.
Not only how big is the matrix but how sparse is it?
My rule of thumb is that about nine conditions is the limit for an if/else ladder or a switch. So for a 2D cell you can reasonably hard-code up, down, the diagonals, and so on. If you go to three dimensions you have 27 cases, which is too much, but it's OK if you're restricted to the six cube faces.
Once you've got a lot of conditions, start coding via look-up tables.
But there's no real answer. For example Windows message loops need to deal with a lot of different messages, and you can't sensibly encode the handling code in look-up tables.
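As a small illustration in PHP (the metric names and values here are invented, not taken from the question), the same 2x2 decision written once as nested logic and once as a lookup table:

```php
<?php
// Hedged sketch: the two "metrics" (size, speed) and the rates are made up.

// Logic version: fine while the matrix is tiny.
// (Note: an unknown $speed silently falls back to the standard rate here.)
function shippingCostIf(string $size, string $speed): float {
    if ($size === 'small') {
        return $speed === 'express' ? 10.0 : 4.0;
    }
    if ($size === 'large') {
        return $speed === 'express' ? 25.0 : 12.0;
    }
    throw new InvalidArgumentException("unknown size: $size");
}

// Lookup-table version: the 2D matrix becomes data instead of control flow,
// so adding a row or column means editing the table, not the branching.
function shippingCostLookup(string $size, string $speed): float {
    static $table = [
        'small' => ['standard' => 4.0,  'express' => 10.0],
        'large' => ['standard' => 12.0, 'express' => 25.0],
    ];
    if (!isset($table[$size][$speed])) {
        throw new InvalidArgumentException("no entry for ($size, $speed)");
    }
    return $table[$size][$speed];
}
```

With the lookup version, growing the matrix to more sizes or speeds only means adding entries to the table; the branching code never changes.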
Okay, so first of all let me tell you a little about what I'm trying to do. Basically, during my studies I wrote a little web service in PHP that calculates how similar movies are to each other based on some measurable attributes like length, actors, directors, writers, genres, etc. The data I used for this was basically a collection of data acquired from omdbapi.com.
I still have that database, but it is technically just a SINGLE table that contains all the information for each movie. This means that for each movie, all the above-mentioned parameters are stored as comma-separated strings. Therefore I have so far used a query that covers all these things using LIKE statements. The query can become quite large, as I pretty much query for every parameter within the table - sometimes 5 different LIKE statements for different actors, and the same for directors and writers. Back when I last used this, it took about 30 to 60 seconds to enter a single movie and receive a list of 15 similar ones.
Now I have started my first job, and to teach myself in my free time I want to work on my own website. Because I have no real concept of what I want to do with it, I thought I'd get out my old "movie finder" again and use it differently this time.
Now, to challenge myself, I want the whole thing to be faster. Understand that the data is NEVER changed, only read. It is also not "really" relational, as actor names and such are just strings and have no real entry anywhere else - which essentially means that entries with the same name will be treated as the same actor.
Now here comes my actual question:
Assuming I want my SELECT queries to run faster, would it make sense to run a script that splits the comma-divided strings into extra tables (these are n-to-m relations, see my attempt below) and then JOIN all these tables (there will be 8 or more), or will using LIKE as I currently do be about the same speed? The ONLY thing I am trying to achieve is faster SELECT queries, as there is nothing else to really do with the data.
This is what I currently have. Keep in mind that I would still have to create tables for the relation between movies and each of these tables. After doing that, I could remove the columns from the movie table and would end up having to join a lot of tables with EACH query. The only real advantage I can see here is that it would be easier to create an index on individual tables, rather than one (or a few) covering the one big movie table.
I hope all of this makes sense to you. I appreciate any answer, short or long; like I said, this is mostly for self-study, and as such I don't have/need a real business model.
I don't quite understand what you currently have: it seems that you only showed the size of the tables but not their internal structure. You need to separate the data into separate tables using normalization rules and then add the correct indexes; indexes will make your queries very fast. What does the sizing above your query mean? Have you ever run EXPLAIN ANALYZE on your queries? Please also post the query itself - I cannot guess it from the result alone. There are also a lot of optimization videos on YouTube.
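To make the normalization suggestion concrete, here is a minimal sketch of the n:m split for actors (the same pattern would apply to directors, writers, and genres); the table and column names are assumptions, not the real schema.

```sql
-- Hypothetical junction-table layout; names are illustrative only.
CREATE TABLE actor (
    actor_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name     VARCHAR(255) NOT NULL,
    UNIQUE KEY (name)              -- same name == same actor, as described above
);

CREATE TABLE movie_actor (
    movie_id INT UNSIGNED NOT NULL,
    actor_id INT UNSIGNED NOT NULL,
    PRIMARY KEY (movie_id, actor_id),
    KEY (actor_id, movie_id)       -- lets "movies featuring actor X" use an index
);

-- "Movies sharing at least one actor with movie 42", ranked by overlap:
SELECT ma2.movie_id, COUNT(*) AS shared_actors
FROM movie_actor AS ma1
JOIN movie_actor AS ma2
  ON ma2.actor_id = ma1.actor_id AND ma2.movie_id <> ma1.movie_id
WHERE ma1.movie_id = 42
GROUP BY ma2.movie_id
ORDER BY shared_actors DESC
LIMIT 15;
```

Index lookups like these should be much faster than LIKE '%...%' scans over comma-separated strings, since a leading-wildcard LIKE cannot use an index at all.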
I'm designing a central search function in a PHP web application. It is focused around a single table and each result is exactly one unique ID out of that table. Unfortunately there are a few dozen tables related to this central one, most of them being 1:n relations. Even more unfortunate, I need to join quite a few of them. A couple to gather the necessary data for displaying the results, and a couple to filter according to the search criteria.
I have been mainly relying on a single query to do this. It has a lot of joins, and since there should be exactly one result displayed per ID, it also works with rather complex subqueries and GROUP BY clauses. The result also gets sorted according to a user-set sort method, and there's pagination in play as well, done via LIMIT.
Anyways, this query has become insanely complex and while I nicely build it up in PHP it is a PITA to change or debug. I have thus been considering another approach, and I'm wondering just how bad (or not?) this is for performance before I actually develop it. The idea is as follows:
run one less complex query that only filters according to the search parameters. This means fewer joins, and I can completely ignore GROUP BY and similar constructs; I will just "SELECT DISTINCT item_id" and get a list of IDs
then run another query, this time only joining in the tables I need to display the results (only about a quarter of the current joins), using ... WHERE item_id IN (....), passing in the list of "valid" IDs gathered in the first query.
Note: obviously the IN () could actually contain the first query in full, instead of relying on PHP to build up a comma-separated list. A sketch of the two steps follows.
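A minimal sketch of what that two-step approach could look like; item, item_tag, category and all column names here are placeholders, not the real schema.

```sql
-- Step 1: filter only - no display joins, no GROUP BY.
SELECT DISTINCT i.item_id
FROM item AS i
JOIN item_tag AS t ON t.item_id = i.item_id
WHERE t.tag = 'foo'
  AND i.status = 'active';

-- Step 2: fetch display data only for the surviving IDs
-- (list built in PHP, or the first query inlined as a subquery, as noted above).
SELECT i.item_id, i.title, c.name AS category
FROM item AS i
JOIN category AS c ON c.category_id = i.category_id
WHERE i.item_id IN (101, 102, 103)   -- or: IN (SELECT ... first query ...)
ORDER BY i.title
LIMIT 20 OFFSET 0;
```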
How bad will the IN be performance-wise? And how much will it possibly hurt me that I can not LIMIT the first query at all? I'm also wondering if this is a common approach to this or if there are more intelligent ways to do it. I'd be thankful for any input on this :)
Note to clarify: we're not talking about a few simple joins here. There is even (simple) hierarchical data in there, where I need to compare the search parameter not only against the item's own data but also against its parent's data. In no other project I've ever worked on have I encountered a query close to this complexity. And before you even say it: yes, the data itself has this inherent complexity, which is why the data model is complex too.
My experience has shown that the WHERE IN (...) approach tends to be slower. I'd go with the joins, but make sure you're joining on the smallest dataset possible first: reduce the simple main table down, then join onto that. Make sure your most complex joins are saved for last, to minimize the rows that need to be searched. Try to join on indexes wherever possible to improve speed, and ditch wildcards in JOINs where possible.
But I agree with Andomar: if you have the time, build both and measure.