combining many drives into one partition - partitioning

So, I wanted to combine several DRIVES (not partitions) into one big partition, but I don't know how to do it. I spent a lot of time trying to find any way to just combine 2 pen drives, but I could not find anything. Maybe there is some software that allows you to do this, or do I need to use a different OS? I want it to be on a different PC than my main one, so I can use Linux for this. I also need to share it over Wi-Fi. Does anyone know how to do this? (I might not respond right after I post this, because I live in the UTC+1 time zone.)

Related

How to cache queries for a time interval and send them all together when the interval expires?

I am sure there are lots of tutorials on this kind of topic, but I can't find what I want because I don't know the jargon for it. So I'm asking Stack Overflow.
Here is the example:
People can Like or Dislike videos on YouTube, and the database should update the Like or Dislike counts. However, it's impractical, especially for sites like YouTube, to update the database every time a user clicks the Like / Dislike button.
How can we cache the queries / count numbers for a time interval, and when the interval expires, send all the queries / update the database at one time? Or is there any similar technique for this kind of situation?
So what you're observing is the time delay between something happening and being able to view the results of what happened.
And you're on the right path to only update periodically.
But you're on the wrong path as far as where to do the periodic updates.
The thing is, you WANT to update the "database" every time, ASAP (namely the database(s) responsible for writes - choose your missing corner of the CAP triangle), to capture everything quickly; but for your visitors/viewers, you give them a slightly-behind view (a few seconds to maybe a day, depending on the situation) of the write database(s).
You do NOT want to store this on the browser and potentially lose what the user did should the request fail, the internet go down, etc.
Slightly off topic - you typically should not "prematurely optimize" without data showing how much you're actually going to save by caching, buffering, etc. Optimizations like that add complexity, and you will stay sane longer if you keep things simple for as long as possible. Keep your design simple and optimize your bottlenecks once you know what they are.
Slightly more off topic - I'd recommend reading on distributed computing, specifically as it pertains to databases and then some design. You'll realize these highly focused abstract problems all have "solutions" with various advantages and disadvantages.

MySQL multi-select and grouping results. How to do it properly?

I have a website written in PHP with a search form.
This site has lots of sections and MySQL tables.
The client was complaining about the results because they were not grouped, and wanted them grouped according to the site section, like below:
Results in "About" Page:
<the results>
Results in "Blog":
<the results>
... and so on.
I had to implement a quick solution for it, so I made several queries and ran them separately... and used a foreach to iterate over the results and print them.
Well, it works, but I'm not happy about it, because it is quite slow and I wonder if I'll have performance issues in the future.
I'm not a MySQL genius myself and I just started my backend programming career, so I wanted someone to give me an idea of how I could handle this in a more professional way.
I was thinking of using a join, but I don't know how I can group the results using this approach.
Any help would be very appreciated.
I really doubt a join would help you in any way. Since you said that each section uses a different table, there is no way you can join them while still making any sense. The best you can do is combine these queries into one call and fetch all the needed information in one go, saving the round-trip time between PHP and MySQL, since you will be executing once and returning once. Take a look here to see how you can do that. I really don't think there is anything better you could do to improve it :)
Response: Clearly, the more requests you make, the longer your script will take. Have you ever tried running `ping google.com` in cmd? Even when you send a very small amount of data, you cannot get a response faster than 30 ms or so. That is the price you pay for any request, and the amount of data sent adds to the time as well. So executing the queries one by one makes many unnecessary calls. You can always try it yourself; it is not difficult to write it either way, output the time spent on the task, and repeat a few times. If the time spent sending data is insignificant, you can just leave it the easy way. But keep in mind that if your application grows bigger, every lost millisecond will add up: multiply it by a thousand requests and you could lose a minute of your time or even more. Anyway, definitely don't go the UNION route, because you will most likely lose the most time untangling the data you received.
And about that function: I myself have never needed to use it, so whether it's me or you, we would both be reading it from scratch; the only difference is that I knew such a function existed :) Weirdly enough, there is very little information on this. C# has DataSets, which are very good and make it easy to handle the data; PHP is lagging behind in my opinion :/ And I am all in on C# now, so I tend to know less and less PHP. If I were you, I would just copy-paste the example from the link I gave and create a reusable class for later. Hope I helped at least a little bit :)
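As a hedged sketch of the "several queries in one call" idea from this answer: the real mechanism would be PHP's `mysqli::multi_query`, but the client-side shape of it can be mimicked in Python. sqlite3 is in-process, so there is no actual network round trip to save here; the `multi_query` helper, the `about`/`blog` tables, and the search term are all made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE about (id INTEGER, body TEXT);
    CREATE TABLE blog  (id INTEGER, body TEXT);
    INSERT INTO about VALUES (1, 'our team loves widgets');
    INSERT INTO blog  VALUES (1, 'new widget released');
    INSERT INTO blog  VALUES (2, 'site maintenance notice');
""")

def multi_query(conn, sql):
    """Toy stand-in for mysqli::multi_query: run several semicolon-separated
    SELECTs from one string and return one result set per statement.
    (Naive split on ';' - fine for this sketch, not for real SQL.)"""
    statements = [s.strip() for s in sql.split(";") if s.strip()]
    return [conn.execute(s).fetchall() for s in statements]

# One call in, one result set per section out - then print them grouped:
about_rows, blog_rows = multi_query(conn, """
    SELECT id, body FROM about WHERE body LIKE '%widget%';
    SELECT id, body FROM blog  WHERE body LIKE '%widget%'
""")

for section, rows in [("About", about_rows), ("Blog", blog_rows)]:
    print(f'Results in "{section}":')
    for _id, body in rows:
        print("  ", body)
```

With a networked MySQL server, the win is that all statements travel in a single request instead of one round trip per section.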

best storage solution for data samples

I'm developing a system that will collect samples of user activity (opened a window, scrolled, entered a page, left a page, etc.), and I'm looking for the best way to store these samples and query them.
I'd prefer something smart where I can run SQL-like GROUP BY queries (for example, give me all the window-open events grouped by date and hour), and of course something flexible enough in case I need to add columns in the future.
I'm trying to avoid having to anticipate every query I might need and saving only a pre-aggregated version of the data by time, since I'd like to do drill-downs (for example, count all the window-open events by date and hour, then see every event in each time frame, or switch to grouping by unique userId).
Thanks.
PS - I currently use MySQL for this task, but the data is expected to grow rapidly. I've experimented with MongoDB as well.
I believe MongoDB can be a good solution. First of all, it's designed to hold big data, and it's really easy to use and scale (replica sets or sharding). The expression language is also solid; it's not as powerful as SQL, but still good enough. Here is a good link about mapping SQL commands to MongoDB.
There are other alternatives, but I think they are either too complex or their expression language is not powerful enough.
Have a look at this link too, which can help you find the right solution for you.
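As a sketch of the drill-down style of query the question describes, assuming a simple flat events table (the schema and the `event`/`user_id`/`ts` column names are made up for illustration), the SQL side looks like this; sqlite3 stands in for MySQL, which would use `DATE_FORMAT` where SQLite uses `strftime`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event TEXT, user_id TEXT, ts TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [
        ("window_open", "u1", "2024-01-05 09:12:00"),
        ("window_open", "u2", "2024-01-05 09:47:00"),
        ("window_open", "u1", "2024-01-05 10:03:00"),
        ("scroll",      "u1", "2024-01-05 09:20:00"),
    ],
)

# All window-open events grouped by date and hour:
rows = conn.execute("""
    SELECT strftime('%Y-%m-%d %H', ts) AS hour_bucket, COUNT(*)
    FROM events
    WHERE event = 'window_open'
    GROUP BY hour_bucket
    ORDER BY hour_bucket
""").fetchall()
print(rows)  # [('2024-01-05 09', 2), ('2024-01-05 10', 1)]
```

Swapping `COUNT(*)` for `COUNT(DISTINCT user_id)`, or dropping the `WHERE` clause to list raw events in one bucket, gives the drill-down variants without any pre-aggregation.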

Concurrent Users in MS Access (2007)

I am supposed to make our MS Access application work in parallel. Basically, there will always be at most 3 people needing concurrent access (so from what I read, this should not be too much of a problem traffic-wise).
Mostly we will all need to work on the same table (well, it's actually 3 tables, but with this access tool you can always open the sub-tables directly by clicking on the +).
I am having a hard time finding information on how to do this, so any pointers to good articles would be welcome.
Also I would like to be able to see who changed what... So implement some sort of logging.
At the moment the database lives somewhere, we download it (marking it as in use), make changes, and upload it back. It's a stone-age solution and I need to change this ASAP.
Any help is greatly appreciated!
The easiest way is to stick the mdb/accdb file on a network drive, and make people open it from there, rather than copying it locally first. 3 concurrent users probably won't crash it too often, but make sure you take regular backups.
As for logging, well, it's easy enough to audit changes made via forms, but not so much with tables. Have a look at this thread http://forums.devarticles.com/microsoft-access-development-49/creating-audit-trail-of-all-edits-to-database-22382.html

database optimization: to use WordPress' built in tables or add my own?

I'm building a part onto a WP site that is similar to a "bidding" board. Basically it will have items with price, time, expiration, etc. Not too many fields.
Before I begin, I'm curious about the pros and cons of building these bid "items" as custom post types, allowing them to be viewable through the WP backend - but then they are all in the wp_posts table, meaning they are mixed with everything else.
Is there a big speed hit with this? In other words, should I create a separate wp_bids table and store ONLY bids in there? The con would be that the end user wouldn't have a built-in way to see the bids through the backend (I'd have to build that system, which would take me a long time)... Can someone offer some insight on this? Thanks!
Yes, there is going to be a performance difference. Doing it through WordPress' tables and interface will most likely be slower. How much slower depends on many things, and how slow it's allowed to be is only something you can decide. You may never notice the difference.
The only ways to find out would be to do it both ways and compare, or make some representative examples and see how they perform.
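A minimal sketch of the "do it both ways and compare" approach, using sqlite3 as a stand-in for MySQL. The schemas only mimic the shape of the problem (a mixed wp_posts-like table versus a dedicated wp_bids-like table), and the timings here say nothing about a real WordPress install; only profiling the actual site would:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER, post_type TEXT, title TEXT)")
conn.execute("CREATE TABLE bids  (id INTEGER, title TEXT)")

# 50,000 mixed rows, of which every 10th is a "bid".
rows = [(i, "bid" if i % 10 == 0 else "post", f"item {i}")
        for i in range(50_000)]
conn.executemany("INSERT INTO posts VALUES (?, ?, ?)", rows)
conn.executemany("INSERT INTO bids VALUES (?, ?)",
                 [(i, t) for i, p, t in rows if p == "bid"])

def timed(query):
    """Run the query 100 times and return total elapsed seconds."""
    start = time.perf_counter()
    for _ in range(100):
        conn.execute(query).fetchall()
    return time.perf_counter() - start

mixed = timed("SELECT id, title FROM posts WHERE post_type = 'bid'")
dedicated = timed("SELECT id, title FROM bids")
print(f"mixed table: {mixed:.3f}s, dedicated table: {dedicated:.3f}s")
```

The same idea scaled up to realistic row counts, run against the real database with real queries, is what answers the "is the hit big enough to matter?" question.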