I have a number of devices that process images from cameras. Several cameras can be connected to a single device, and each device has a unique serial number.
Each image is stored on the local filesystem, and its associated metadata is written to a local database (including the path to the file).
I'd like to move data from each device to an external server: files are moved to a media server, and new database rows are moved to an external database.
That external database should aggregate data from many local ones.
I'd also like to synchronize the two replications: a new row must not appear in the external database before the corresponding file is accessible on the media server.
Are there any solutions that allow for such synchronized replication, or do I need to write a service that does that?
I am open to suggestions on the file transfer protocol, but the database is MySQL.
Before sending the data to the external server, first encode the images as Base64 for transfer. On the server, decode and save each image to its physical path, then validate that the file exists, and only then save the row to the database.
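A minimal sketch of that file-first ordering (paths, table, and column names here are hypothetical; `conn` is assumed to be a MySQL DB-API connection, e.g. from mysql-connector-python):

```python
import base64
import os

MEDIA_ROOT = "/var/media"  # hypothetical media-server storage root

def ingest(serial, filename, payload_b64, conn):
    """Persist the file first, verify it, then insert the metadata row."""
    path = os.path.join(MEDIA_ROOT, serial, filename)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(base64.b64decode(payload_b64))
    if not os.path.isfile(path):  # validate before touching the database
        raise IOError("file was not persisted: %s" % path)
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO images (device_serial, path) VALUES (%s, %s)",
        (serial, path),
    )
    conn.commit()  # the row becomes visible only after the file exists
```

Because the INSERT runs only after the file check passes, the external database never references a file the media server cannot serve.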
Suppose a user uploads a PDF to my website, which is live on the internet. Is there a way for those uploaded files to be stored directly in the MySQL database on my own system (laptop)?
To refine the question: does it matter whether the MySQL database used to store the data is on a local system (localhost) or on the live website? Will the database fail to store data if the website is hosted online?
If any part of the question is unclear, please say so.
Thank you.
There are a lot of nuances to your question, and I'll try to address as many of them as I can.
I would not store files directly in the database. You certainly can, but in general you'll get better performance and other ancillary benefits from storing each file as a file in the file system. Store the metadata in the database, including at the very least the file name and path on disk (perhaps more, like the uploader's account information, the size, and a long-form text description, but at least the path and filename). Then, in your application, fetch the path from the database and serve the file from disk instead of a database BLOB. One reason is that MySQL performance can really suffer when tables are bloated with large BLOBs.
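For illustration, a minimal sketch of that serve-from-disk pattern (Flask and sqlite3 are stand-ins here for your real stack, and the `uploads` table is hypothetical):

```python
from flask import Flask, abort, send_file
import sqlite3  # stand-in; use your MySQL driver in practice

app = Flask(__name__)

@app.route("/files/<int:file_id>")
def serve_file(file_id):
    conn = sqlite3.connect("metadata.db")
    row = conn.execute(
        "SELECT path FROM uploads WHERE id = ?", (file_id,)
    ).fetchone()
    conn.close()
    if row is None:
        abort(404)
    # The database held only the path; the bytes come from the filesystem.
    return send_file(row[0])
```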
Let's say you decide to defy my suggestion and store the file as BLOB data in your database; how can you replicate that to your laptop? Your laptop isn't going to be powered on and connected to the internet all the time, and even if you had a server at home running 24 hours a day, your hosting provider should still have better uptime than your home does. What should happen to an upload if you were hosting the database on your laptop, but the laptop was off (or rebooting for system updates)? So you should host the database at the hosting provider and sync it to your local machine somehow. MySQL provides several methods for this: replication, export and import of .sql files, or shipping binary logs. Each has tradeoffs that you'll want to consider depending on your needs.
But remember how I said you can get other ancillary benefits from storing the file on the file system directly? One of those is that you can rely on file transfer techniques to get the file to your local machine. SFTP, SCP, SyncThing, WebDAV, and any other way you can imagine transferring files can be used to get the remote file to your local system. You wouldn't automatically get the database metadata, but that didn't seem like much of a requirement from your question, so you'd have easy access to the file as uploaded, as quickly as you want.
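For example, a sketch of pulling a remote upload down over SFTP with the paramiko library (host, credentials, and paths are placeholders):

```python
import paramiko  # third-party SSH/SFTP library

# Placeholder host and credentials; use your hosting provider's details.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("example.com", username="me", password="secret")

sftp = client.open_sftp()
# Copy the remote upload to the local machine.
sftp.get("/var/www/uploads/report.pdf", "/home/me/report.pdf")
sftp.close()
client.close()
```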
So there are plenty of ways to accomplish this, and without more details on your question it's tough to recommend a solution, but you have plenty of options available.
I have approximately 4 GB (34,000) of JPEG files that I need to store in a MySQL table. Each image has a different date, ranging from 1-Jan-1961 to 31-Dec-2007. How should I store these files so that when I enter a specific date within this interval, the corresponding image appears on my localhost server? The MySQL table has the following schema: ID, date (entered by the user), file name, type, size. Is there any way I can upload these files (images) in bulk rather than one by one?
Always use a MySQL client to do bulk uploads; you can use the native mysql client or a PHP client. That said, in all these years I haven't had to save an image in MySQL: images are hard to manage there and have a bad effect on DB performance.
I recommend keeping only a file URL in the database and storing the files elsewhere; that can be local storage or some other image host. With this approach, though, you need to take care of some things yourself:
Backing up the images separately when you take a MySQL backup, since the images are no longer in the DB
Handling transactions and rollbacks
Handling deletes
If you can manage this in your code, I suggest moving your images out of the database.
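If you need to migrate existing BLOBs out, a rough sketch of the move (the `images` table with a `data` BLOB column, the credentials, and the destination path are all hypothetical; uses the mysql-connector-python driver):

```python
import os
import mysql.connector  # assumed driver

FILES_ROOT = "/srv/images"  # hypothetical destination for extracted files
os.makedirs(FILES_ROOT, exist_ok=True)

conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="gallery"
)
cur = conn.cursor()
cur.execute("SELECT id, data FROM images")  # batch with fetchmany() at 4 GB
for image_id, blob in cur.fetchall():
    path = os.path.join(FILES_ROOT, "%d.jpg" % image_id)
    with open(path, "wb") as f:
        f.write(blob)
    # Replace the BLOB with a reference to the file on disk.
    upd = conn.cursor()
    upd.execute(
        "UPDATE images SET url = %s, data = NULL WHERE id = %s",
        (path, image_id),
    )
conn.commit()
```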
First, phpMyAdmin is not a good choice for an end-user GUI; it's better used as a web SQL client for DB developers and administrators.
Second, you should not use MySQL to store images: SQL manipulation of big data blobs is quite inefficient compared to direct filesystem access. This point has been debated many times on this site:
Can I store images in MySQL
Images in MySQL
Storing images in MySQL
Storing Images in DB - Yea or Nay?
etc.
You should instead:
use any file transfer command to upload all your files to your web server: rsync, scp, ftp, etc. I'd recommend rsync for further updates and syncing.
use any server-side scripting language (PHP, Python, etc.) to scan your uploaded files and reference them in a metadata-only table (see the sketch after this list)
build a simple HTML GUI with the same language to give access to the wanted images.
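As a rough illustration of the second step, a Python sketch (table name, columns, and credentials are hypothetical) that walks the upload directory and registers each file in a metadata-only table:

```python
import os
import mysql.connector  # assumed driver

UPLOAD_ROOT = "/var/www/uploads"  # where rsync/scp/ftp dropped the files

conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="gallery"
)
cur = conn.cursor()
for dirpath, _dirs, files in os.walk(UPLOAD_ROOT):
    for name in files:
        path = os.path.join(dirpath, name)
        size = os.path.getsize(path)
        # Hypothetical metadata-only table: no BLOBs, just references.
        cur.execute(
            "INSERT INTO images (file_name, path, size) VALUES (%s, %s, %s)",
            (name, path, size),
        )
conn.commit()
```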
Hope it helps.
I want to build a desktop application somewhat like a POS. The user can enter data into it and save the data to a local database instead of accessing a remote database on the server. I want to do this to reduce traffic and make my application more responsive, since it avoids the overhead of accessing a remote database.
I want to deploy at least 5 clients of this desktop application, each with its own local database. Alongside these clients, I will set up a server database that I will use for reports or for online access displaying the status and data of all my clients.
For example, when a specific user works on a client machine, all of his data is stored in the local database before it is transported to the server database during synchronization. It may seem like this system doesn't give real-time data updates to the server, but that is what I need. Since the server database is only used for reporting purposes, no information on the server is manipulated by a client.
You may be looking for the JumpMind product called SymmetricDS. It syncs data from branch offices to a central office, shares data with all offices, and can sync subsets of data with specific offices.
I am creating a WP8 app.
I have created a SQLite database in isolated storage.
Now my data keeps updating and I want to regularly download the latest data from the server database and update the local database.
The database on the WP8 device cannot be changed on the client side, so the data merge is one-way only.
What is the best way and service to use?
If you do not work with a large database, you might prefer to replace the device database and not worry about merging. This can be as simple as making an export of the server database, transferring it to the device and then importing it into the device database. The appropriate method of dumping the database on the server side is dependent on the type of database (e.g. mysqldump in the case of MySQL).
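For example, a sketch of the server-side export step using Python's subprocess module (database name and credentials are placeholders):

```python
import subprocess

# Placeholder names; adjust for your server database.
DUMP_FILE = "app_data.sql"

# Export the server-side MySQL database to a flat .sql file.
with open(DUMP_FILE, "w") as out:
    subprocess.run(
        ["mysqldump", "--user=app", "--password=secret", "app_db"],
        stdout=out,
        check=True,
    )
# The dump can now be transferred to the device (e.g. over HTTPS)
# and imported there, replacing the local database wholesale.
```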
If you do work with a large database, or if you are struggling with bandwidth issues on the device, you might want to use a technique that detects differences. One of the easiest methods is change tracking in the database: every modification is logged with a changed_at timestamp. The device then remembers the last modification it contains, fetches the newer entries, and replicates the changes locally (for a more in-depth explanation, please provide more information about the server environment and data structure).
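A minimal sketch of that delta fetch, assuming a hypothetical `items` table with a `changed_at` column and a MySQL DB-API connection (e.g. mysql-connector-python):

```python
def fetch_changes(conn, last_sync):
    """Return all rows modified since the device's last sync point."""
    cur = conn.cursor()
    cur.execute(
        "SELECT id, payload, changed_at FROM items "
        "WHERE changed_at > %s ORDER BY changed_at",
        (last_sync,),
    )
    return cur.fetchall()

# The device remembers the newest changed_at it has applied and sends it
# on the next request, so only the delta travels over the network.
```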
I know this is something of a "classic question", but does the MySQL/Grails combination (deployed on Tomcat) put a new spin on how to approach storage of users' uploaded files?
I like using the database for everything (simpler architecture; scaling is just scaling the database). But using the filesystem means we don't lard up MySQL with binary files. Some might also argue that Apache (httpd) is faster than Tomcat at serving binary files, although I've seen numbers showing that just putting Tomcat on the front of your site can be faster than using an Apache (httpd) proxy.
How should I choose where to place user's uploaded files?
Thanks for your consideration, time and thought.
I don't know if one can make general observations about this kind of decision, since it's really down to what you are trying to do and how high up the priority list NFRs like performance and response time are to your application.
If you have lots of users uploading lots of binary files, and a system serving large numbers of those uploaded files, then the costs of storing files in the database include:
Large binary data bloating tables and backups
Costly queries
The benefits are:
Atomic commits
Scaling comes with the database (though with MySQL there are some issues with multi-node setups, etc.)
Less fiddly and complicated code to manage file systems, etc.
Given the same user load, if you store to the filesystem you will need to address:
Scaling
File name management (a user uploads a file with the same name twice, etc.; see the sketch after this list)
Creating corresponding records in DB to map to the files on disk (and the code surrounding all that)
Looking after your apache configs so they serve from the filesystem
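As one illustration of the file-name point above, a small sketch that derives a collision-proof name for each upload (the storage root is hypothetical):

```python
import os
import uuid

UPLOAD_ROOT = "/srv/uploads"  # hypothetical storage root

def store_upload(original_name, data):
    """Store bytes under a unique name; keep the original name in the DB."""
    # A random UUID prevents collisions when two users upload "photo.jpg".
    unique_name = "%s_%s" % (uuid.uuid4().hex, original_name)
    path = os.path.join(UPLOAD_ROOT, unique_name)
    with open(path, "wb") as f:
        f.write(data)
    return path  # persist this path in the metadata table
```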
We had a similar problem to solve as this for our Grails site where the content editors are uploading hundreds of pictures a day. We knew that driving all that demand through the application when it could be better used doing other processing was wasteful (given that the expected demand for pages was going to be in the millions per week we definitely didn't want images to cripple us).
We ended up creating an upload -> filesystem solution. For each uploaded file a DB metadata record was created and managed in tandem with the upload process (and, conversely, that record was read when generating the GSP content link to the image). We served requests off disk through Apache directly, based on the link requested by the browser. But, and there is always a but, remember that with filesystems the content exists only per machine.
We had the headache of making sure images got re-synchronised onto every server, since unlike a DB, which sits behind the cluster and enables the cluster to behave uniformly, files are bound to physical locations on a server.
Another problem you might run up against with filesystems is folder content size. When you start having folders where there are literally tens of thousands of files in them, the folder scan at the OS level starts to really drag. To avert this problem we had to write code which managed image uploads into yyyy/MM/dd/image.name.jpg folder structures, so that no one folder accumulated hundreds of thousands of images.
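A minimal sketch of that date-based sharding (the root path is a placeholder):

```python
import os
from datetime import date

UPLOAD_ROOT = "/srv/images"  # placeholder root

def sharded_path(filename, when=None):
    """Spread uploads across yyyy/MM/dd folders to keep directories small."""
    when = when or date.today()
    folder = os.path.join(
        UPLOAD_ROOT, "%04d" % when.year, "%02d" % when.month, "%02d" % when.day
    )
    os.makedirs(folder, exist_ok=True)
    return os.path.join(folder, filename)

# sharded_path("image.name.jpg") -> /srv/images/<yyyy>/<MM>/<dd>/image.name.jpg
```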
What I'm implying is that while we got the performance we wanted by not using the DB for BLOB storage, that comes at the cost of development overhead and systems management.
Just as an additional suggestion: JCR (e.g. Apache Jackrabbit), a Java Content Repository. It has several benefits when you deal with a lot of binary content. The Grails plugin isn't stable yet, but you can use Jackrabbit through its plain API.
Another thing to keep in mind is that if your site ever grows beyond one application server, you need to access the same files from all app servers. Now all app servers have access to the database, either because that's a single server or because you have a cluster. Now if you store things in the file system, you have to share that, too - maybe NFS.
Even if you upload files to the filesystem, all the files get the same permissions, so any logged-in user can access another user's file just by entering its URL. If you instead plan to give each user a directory, the files still carry the Apache user's permissions (that is, whatever the server process has). You would have to su to root, create a system user, and upload files into per-user directories; accessing those files could then require adding each user's group to the server's group. If I choose the filesystem to store binary files, is there an easier solution than this? How do you manage access to those files per user and maintain the permissions? Does Spring's ACL help, or do we have to create a permission group for each user? I am totally fine with filesystem URLs; my only concern is starting a separate process (chmod and the like) with something like ProcessBuilder to run operating-system commands (or is there a better solution?). And what about permissions?
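One common alternative that sidesteps OS-level permissions entirely: keep the files readable only by the server process and enforce per-user access in the application. A sketch of that idea, using Flask and a hypothetical `uploads` table as stand-ins for the actual Grails/Spring stack:

```python
# App-level access control: the OS never grants users direct file access,
# so no chmod/ProcessBuilder gymnastics are needed.
from flask import Flask, abort, send_file, session
import sqlite3  # stand-in for your real database driver

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder

@app.route("/files/<int:file_id>")
def serve(file_id):
    conn = sqlite3.connect("metadata.db")
    row = conn.execute(
        "SELECT path, owner_id FROM uploads WHERE id = ?", (file_id,)
    ).fetchone()
    conn.close()
    if row is None:
        abort(404)
    path, owner_id = row
    # Ownership check in the application replaces per-user OS permissions.
    if session.get("user_id") != owner_id:
        abort(403)
    return send_file(path)
```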