I have a Laravel project that is supposed to run on localhost.
However, we need to add the ability to make some modifications while the user of the app is away from the PC that hosts it.
I know I could deploy the whole project to an online server, but so far that is not an option.
We only have a weak online server (it is a lot slower than localhost),
so we can use this weak online server for those situations when the user wants to make some modifications remotely, which happens from time to time, maybe two or three times a day, while localhost carries the heavy work for the rest of the day, which may be more than three or four hundred processes a day.
I can't put the whole load on the online server when it is that slow, and we don't need the online benefits much, just for those two or three remote modifications that the user may or may not need, so I can't trade off localhost speed for online benefits I need only two or three times a day.
What solution is there?
I know about master-slave and master-master replication, but that is not an option either.
Any ideas? Thank you in advance.
-------------- About the two environments (local and online) --------------
Local is Windows running a XAMPP stack (Apache, MySQL, PHP).
The server is Linux (I don't actually know which distro, but in any case I can't install any tools there ... just PHP packages with Composer).
I had the same problem when uploading my Laravel project.
Just use FileZilla to upload your project; even with the worst internet speed you can do it,
and save yourself the trouble.
To answer your question: if I were you, I would create a sync_data table in the application. That table would be responsible for recording the changes made to the various entities.
For example, if you change customer data on localhost, you save an entry to sync_data like type=customer, id=10, action=update, synced=No. Using a cron job you can then push these updates (fetching the customer record by the saved id) to your online server at regular intervals, which will not keep your online server busy. On top of that, your online users will at least have the latest data.
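A rough sketch of that push as a Laravel console command is below. The table layout, column names, endpoint URL and credentials are only illustrative (nothing here comes from your project), and the Http facade assumes a reasonably recent Laravel; on older versions you would call Guzzle directly.

    <?php
    // app/Console/Commands/PushSyncData.php -- illustrative sketch only

    namespace App\Console\Commands;

    use Illuminate\Console\Command;
    use Illuminate\Support\Facades\DB;
    use Illuminate\Support\Facades\Http; // Laravel 7+; use Guzzle directly on older versions

    class PushSyncData extends Command
    {
        protected $signature = 'sync:push';
        protected $description = 'Push unsynced local changes to the online server';

        public function handle()
        {
            // Every change that has not been pushed yet
            $pending = DB::table('sync_data')->where('synced', 'No')->get();

            foreach ($pending as $change) {
                // Only the customer example is handled here; map other types to their tables as needed.
                // record_id holds the id that was saved when the change was logged (the "id=10" above).
                $record = DB::table('customers')->find($change->record_id);

                // Push the current state of the record to the online server (URL is a placeholder)
                $response = Http::post('https://online.example.com/api/sync', [
                    'type'   => $change->type,
                    'action' => $change->action,
                    'data'   => (array) $record,
                ]);

                if ($response->successful()) {
                    DB::table('sync_data')->where('id', $change->id)->update(['synced' => 'Yes']);
                }
            }
        }
    }

You would then schedule it in app/Console/Kernel.php with something like $schedule->command('sync:push')->everyFifteenMinutes(); plus the single "php artisan schedule:run" cron entry that the Laravel scheduler needs.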
I am trying to build a way to log all the times my friends and I are on our video game server. I found an api which returns a JSON file of online players given a server’s IP address.
I plan to receive a new JSON file every 30 seconds and then log player gaming sessions by figuring out when they come online and when they are no longer online.
The problem is that this is my first time using a database like this for my websites. I want to use (and will use) Golang to retrieve the JSON file and update my MySQL database of player logs.
PROBLEM: I have no freaking clue how to have my Golang file run every 30 seconds to update my database. I can get a simple program to grab data and update a local database easily, but I’m lost on how to get this to run on my website and to have it run every 30ish seconds 24/7. I’m used to CRUD with simple html forms and other things related to user input, but I’ve never thought of changing my database separately from website interactions
Is there a standard solution to my problem? Am I thinking about this all wrong? Do I make sense? Does God really exist!!?
I’m using BlueHost
I insist on Golang for the experience
first time using stackoverflow idk if my question was too long
You need the UNIX/Linux facility known as cron to do this. cron lets you rig a program to run at a specific interval on your machine.
Bluehost's cron support lets you run PHP scripts. I don't believe they support running other kinds of programs.
If you have some other always-on machine that can run your Golang program, you can run it there. But, to do that, you will have to configure Bluehost to allow an external connection to your MySQL server so that program can connect. Ask their customer support people about that.
Pro tip: every 30 seconds may be too high an update frequency to use on a shared service like Bluehost. That kind of frequency is tricky to manage even if you completely control the server.
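If you do end up falling back to Bluehost's PHP-only cron, a very rough sketch of the fetch-and-log script and its cron entry could look like the following. The API URL, the shape of the JSON, the database credentials and the table name are all placeholders, and note that cron cannot run anything more often than once a minute, so a true 30-second interval would need either two staggered runs with a sleep or a long-running process on a machine you control.

    <?php
    // fetch_players.php -- illustrative only; every name and URL here is a placeholder.
    // Cron entry (cPanel "Cron Jobs"), once a minute:
    //   * * * * * /usr/bin/php /home/youruser/fetch_players.php

    $serverIp = '203.0.113.10';   // your game server's IP
    $json = file_get_contents('https://api.example.com/players?ip=' . $serverIp);
    if ($json === false) {
        exit(1);                  // network error; just try again on the next run
    }

    // Assumes the API returns an array of players, each with a "name" field
    $players = json_decode($json, true);

    $db = new PDO('mysql:host=localhost;dbname=gamelog', 'dbuser', 'dbpass');
    $stmt = $db->prepare('INSERT INTO player_sightings (player_name, seen_at) VALUES (?, NOW())');

    // Log everyone currently online; session start/end can be derived later from gaps between sightings
    foreach ($players as $player) {
        $stmt->execute([$player['name']]);
    }

Deriving the actual sessions from the sightings table afterwards is then just a query looking for gaps larger than your polling interval.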
I want to upgrade my AppServ MySQL installation from 5.0.x to 5.x.
I have some tables and views in it that are related to various web projects and VB.net applications.
Can anybody help me do that without data loss?
(Putting this in an answer as it's too long for a comment)
NB - I've not used AppServ so this answer is generic
The versions of the software within AppServ appear to be old. Very old. MySQL 5.0.51b, PHP 5.2.6 and Apache 2.2.8 are way behind with regards to security and features. The best thing you can do is to replace the whole stack with a newer one
If you do a quick Google search for WAMP installer, a plethora of available stacks are listed. The first one in the list uses MySQL 5.6.17, PHP 5.5.12, Apache 2.4.9. Again, not the newest, but much more recent and feature rich. It's also available in 32 and 64 bit versions
The first thing to do is to download a virtual machine system. (VirtualBox is a pretty simple one to get to grips with and runs on a variety of platforms). This is so that you can practise.
Spool up an instance of Windows (which is as close as possible to your live setup) and install your current version of AppServ and your applications which use it, take a snapshot (so you can roll back) and then work out slowly how to update to a new stack. Take lots of snapshots as you go.
You need to make note of your MySQL data directories and back up your Apache, MySQL and PHP configurations
It will take time to iron out the bugs and problems you find along the way. Do not be downhearted.
Once you have worked out how to update your stack without data loss, try your applications on the virtual machine. There is no point in upgrading your stack if your software is going to bomb out the second it starts to run.
Once you're satisfied you know all the steps you need, roll back to the snapshot you took at the start and go through all the steps again. Then again. And keep on restoring/upgrading until you are confident that you can do the update with the minimum of fuss and panic on the live system
I would recommend doing your update over two sessions. For both sessions, choose a quiet time to do it. Essentially, out of office hours is best; early morning (after a good sleep) is even better.
During the first session (SESSION-1), take the server offline, back up everything, then bring the server back online. And when I say "backup everything", I mean EVERYTHING! Take this backup and restore it to a virtual machine. Go through the steps you worked out before on this restored version to make sure everything is going to work. Make a note of anything that is different from the steps you worked out earlier.
When you've done your testing, you can do session two (SESSION-2). Again, take the server offline, run a differential backup of the system and a full backup of the MySQL databases. Update your WAMP stack (using the steps you worked out in SESSION-1) and bring it back online. Check that all your URLs and code still work.
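For the MySQL side of those backups, a plain logical dump is usually enough at this size. Something along these lines, where the credentials and output path are placeholders and the path to mysqldump depends on where AppServ (or your new stack) lives:

    REM full dump of every database, including stored routines
    mysqldump -u root -p --all-databases --routines > C:\backups\all-databases.sql

Restoring into the new stack is the reverse: mysql -u root -p < C:\backups\all-databases.sql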
After you've completed your checks, send triumphant emails to whoever needs to know, put a smug smile on your face for a job well done, pour yourself a large glass of whiskey (other drinks are available) and relax - you've earned it
Sorry that I can't give you definitive steps but I use Linux for all my PHP stacks so these steps are what I would do if I was upgrading them. I spent 3 months practising upgrading my servers then did all of them in a single night (I have separate MySQL servers so it was only the Apache/PHP side I was updating - much easier and quicker)
Hopefully some of this helps. Good luck
So, we have a web application that has 10-12 pages with many POST/GET DB calls. We usually have an Apache crash or some other problem when site traffic reaches 1000 or so concurrent users, which is a very small number; we have upgraded the server with good RAM and resources. Our sysadmin did load testing with blitz.io and other custom scripts, and he is suggesting we move away from Apache. Some things do not make sense to me, e.g. Apache should not be too bad at handling a few thousand concurrent users, considering we have Cloudflare for caching. Here is what he suggested:
1. Replace Apache+mod_fcgi with Nginx+php-fpm, which can make the server handle many more users, and then test it.
or
2. For testing: we need 10-20 servers to run a scenario from. Basically, what is needed is a more complex blitz.io analogue: create one server (which takes all those hours), then just clone it in the cloud and pay for about 1 hour of testing multiplied by the number of servers needed.
Once again, there are many DB calls and .htaccess use. Also, what makes Nginx better than Apache in this case?
I would check this comparison first. Basically, nginx is event-based, so it's able to handle more requests concurrently. However, as the MySQL DB seems to be the choke point here, it's very possible that nginx wouldn't solve all your problems. Perhaps moving to a NoSQL kind of database that's better at scaling horizontally would help (if that's feasible).
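To give a feel for what option 1 looks like in practice, a minimal Nginx server block that hands PHP off to php-fpm is roughly the following; the server name, document root and php-fpm listen address are assumptions that depend on your distro and pool configuration.

    server {
        listen 80;
        server_name example.com;            # placeholder
        root /var/www/html;                 # your document root
        index index.php index.html;

        location / {
            # serve static files directly, fall back to the PHP front controller
            try_files $uri $uri/ /index.php?$query_string;
        }

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9000;    # or unix:/run/php-fpm/www.sock, per your pool config
        }
    }

One thing to plan for in that migration: Nginx does not read .htaccess files, so any rewrite rules living there have to be translated into the server block.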
I have a website using cPanel on a dedicated account. I would like to be able to automatically sync the website to a second hosting company, or perhaps to a local (in-house) server.
Basically this is a type of replication. The website is database-driven (MySQL), so ideally it would sync everything (content, database, emails, etc.), but most important is syncing the website files and the database.
I'm not so much looking for a fail-over solution as an automatic replication solution, so that if the primary site (server) goes offline, I can manually bring up the replicated site quickly.
I'm familiar with tools like unison and rsync, but most of these only sync files and do not cope well with open database connections.
Don't use one tool when two are better: use rsync for the files, but use replication for MySQL.
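For the file half, the rsync run can be a single cron'd command on the primary; the paths, user and host below are placeholders:

    # push the web root to the standby machine over ssh; --delete mirrors removals as well
    rsync -avz --delete -e ssh /home/site/public_html/ backupuser@standby.example.com:/home/site/public_html/

The MySQL half is the standard master/slave (or master/master) replication described in the MySQL manual.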
If, for some reason, you don't want to use replication, you might want to consider using DRBD. This is of course only applicable if you're running Linux. DRBD is now part of the mainline kernel (since version 2.6.33).
And yes - I am aware of at least one large enterprise deployment of DRBD which is used, among other things, to store MySQL database files. In fact, the MySQL website even has a relevant page on this topic.
You might also want to Google for articles against the DRBD/MySQL combination; I remember reading a few such posts.
I'm working on a SaaS project and MySQL is our main database. Our application is written in C# .NET and runs on a Windows 2003 server.
Considering maintenance, cost, options and performance, which server platform should I choose for MySQL hosting: Windows or Unix/Linux/Ubuntu/Debian?
The scenario is as follows:
The server I run today has a moderate transaction volume. The databases grow by 5 MB daily, we expect that to grow to 50 MB in a couple of months, and it is mission critical.
I don't know how big the database is going to get. We rent a VPS to host the application and the database server.
Most of our queries are simple, but our ORM tool makes constant use of subqueries. We also run reports, both simple and heavy ones. Some of them run after a user click, but most run from a queue.
Buying extra co-lo space would be nice as we get more clients. It's a SaaS project, after all.
When developing, you can use your Windows box to also run a MySQL server. If and when you want to have your DBMS on a separate server, it can be either a Windows or a Linux server.
MySQL and the supporting tools for backup, etc. probably offer more choices on Linux.
There are also 3rd-party suppliers who will host your MySQL database on their servers. The benefit is that they will handle backups, maintenance, etc.
Also: look into phpMyAdmin for use as a great admin tool.
Larry
I think you need more information to make an informed decision. It's hard to just pull out a "best" answer based on no specific information.
What is your expected transaction volume?
How big will the database get?
How complex are your queries, ie are they long running or relatively quick?
Are you hosting the application on your own server at your own location? If you have to buy extra co-lo space maybe an extra server isn't the best option.
How "mission critical" is this database? Ie maybe you need replicated servers to ensure stability.
There is a server sizing tool online at http://www.sizinglounge.com/, so you should check that out. It sounds like your server could be smaller than their smallest tier, but it should be a good place to start.
If this is a mission critical application you need to do some kind of replication to an extra server in case the primary one fails, so you are definitely looking at two systems. This has to be in addition to a good backup plan.
Given that you are uncertain about how big it could get, you might just continue renting a server. For your backup, one idea would be to look at running MySQL on an Amazon EC2 instance. BTW, it is important to have a remote replicated server: if you have two systems next to each other and an environmental problem comes up, they could both be out of commission at the same time, but with a remote copy your options are open to potentially working around it.
If you run a lot of read-only queries locally and have your site hosted somewhere, it might make sense to set up a local replicated database copy to query against. That could potentially improve both your website and local performance quite a bit. Plus it would give you some good peace of mind having a local copy under your control.
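If you do go down the replication road, the classic MySQL master/slave recipe is short. Everything below (host, user, password, log file and position) is a placeholder, and the real log file and position come from SHOW MASTER STATUS on the primary:

    # my.cnf on the primary (give every server a distinct server-id), then restart MySQL
    [mysqld]
    server-id = 1
    log-bin   = mysql-bin

    -- on the primary: an account the replica connects with
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';

    -- on the replica (server-id = 2 in its my.cnf), then start replication
    CHANGE MASTER TO
      MASTER_HOST='primary.example.com',
      MASTER_USER='repl',
      MASTER_PASSWORD='secret',
      MASTER_LOG_FILE='mysql-bin.000001',
      MASTER_LOG_POS=107;
    START SLAVE;

You would seed the replica with a dump of the primary first, then let replication keep it current.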
HTH,
Brandon