Ruby process to index files on a Linux filesystem - MySQL

I'm developing a small photo sharing Rails app which will read and display photos from a library of photos on the local filesystem.
In order to avoid scanning the filesystem every time the user loads the page, I want to set up an hourly cron job that indexes all files and stores the metadata in a local MySQL table.
What's the best way to scan the local filesystem and store metadata about local files (e.g. size, file type, modified date, etc.)? Is there a convenient Ruby-based library? I'd also like to be able to "watch" the filesystem to know when files have disappeared since the last scan so that they can be deleted from my table.
Thanks!

You will want to look into inotify.
https://github.com/nex3/rb-inotify
You can set a watch (register a callback in the Linux kernel) on a file or a directory, and every time something changes in that file/directory, the kernel will notify you immediately with a list of what has changed.
Common events are listed here: https://en.wikipedia.org/wiki/Inotify
You will notice that IN_CREATE + IN_DELETE are the events you are looking for.
Side note: IN_CREATE fires as soon as the file is created (it's still empty); you will need to wait until IN_CLOSE_WRITE is fired to know that data has finished being written to the file.
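For completeness, here is a rough Ruby sketch of both halves: the hourly scan with Dir.glob/File.stat, and an rb-inotify watch for changes between scans. The photo directory and the table-update steps are placeholders, and note that an inotify watch on a directory is not recursive, so you may need one watch per subdirectory (or check whether your rb-inotify version supports a recursive option).

require 'rb-inotify'   # gem install rb-inotify

PHOTO_DIR = "/srv/photos"   # placeholder for your photo library path

# Hourly cron pass: walk the tree and collect metadata to upsert into MySQL.
Dir.glob(File.join(PHOTO_DIR, "**", "*")).each do |path|
  next unless File.file?(path)
  stat = File.stat(path)
  record = { path: path, size: stat.size, mtime: stat.mtime, type: File.extname(path) }
  # upsert `record` into your photos table here
end

# Live watch: react to files appearing or disappearing between scans.
notifier = INotify::Notifier.new
notifier.watch(PHOTO_DIR, :close_write, :moved_to, :delete, :moved_from) do |event|
  if event.flags.include?(:delete) || event.flags.include?(:moved_from)
    # delete the row for event.absolute_name
  else
    # insert or update the row for event.absolute_name (data is fully written)
  end
end
notifier.run   # blocks; run this in its own process or thread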

Using Consul for dynamic configuration management

I am working on designing a little project where I need to use Consul to manage application configuration dynamically, so that all my app machines can get the configuration at the same time without any inconsistency issues. We are already using Consul for service discovery, so I was reading more about it, and it looks like it has a Key/Value store which I can use to manage my configurations.
All our configuration is in JSON files, so we make a zip file containing all our JSON config files and store a reference from which this zip file can be downloaded under a particular key in the Consul Key/Value store. All our app machines then need to download this zip file from that reference (mentioned in the Consul key) and store it on disk. Now I need all app machines to switch to the new config at approximately the same time to avoid any inconsistency.
Let's say I have 10 app machines, and all 10 machines need to download the zip file that has all my configs and then switch to the new configs at the same time, atomically, to avoid any inconsistency (since they are taking traffic). Below are the steps I came up with, but I am confused about how loading the new files into memory and switching to the new configs will work:
All 10 machines are already up and running with the default config files, which are also on disk.
Some outside process will update the key in my Consul key/value store with the latest zip file reference.
All 10 machines have a watch on that key, so once someone updates the value of the key, the watch will be triggered and all 10 machines will download the zip file onto disk and uncompress it to get all the config files.
(..)
(..)
(..)
Now this is where I am confused about how the remaining steps should work.
How should the apps load these config files into memory and then all switch at the same time?
Do I need to use leader election with Consul, or anything else, to achieve any of this?
What would the logic around this be, given that all 10 apps are already running with the default configs in memory (which are also stored on disk)? Do we need two separate directories, one for the default and one for the new configs, and then work with those two directories?
Let's say this is the node I have in Consul, just a rough design (could be wrong here):
{"path":"path-to-new-config", "machines":"ip1:ip2:ip3:ip4:ip5:ip6:ip7:ip8:ip9:ip10", ...}
where path holds the new zip file reference and machines could be a key where I keep a list of all machines, so each machine can add its IP address to that key as soon as it has downloaded the file successfully. Once the machines list has a size of 10, I can say we are ready to switch. If so, how can I atomically update the machines key in that node? Maybe this logic is wrong, but I just wanted to throw something out there. I would also need to clean up that machines list after the switch, since I need to do a similar exercise for the next config update.
Can someone outline the logic for how I can efficiently manage configuration on all my app machines dynamically while also avoiding inconsistency? Maybe I need one more node, a status node, with details about each machine's config: when it was downloaded, when it switched, and so on?
I can think of several possible solutions, depending on your scenario.
The simplest solution is not to store your config in memory and files at all, but to store the config directly in the Consul KV store. And I'm not talking about a single key that maps to the entire JSON (I'm assuming your JSON is big, otherwise you wouldn't zip it), but about extracting smaller key/value sets from the JSON (this way you won't need to pull the whole thing every time you make a query to Consul).
If you get the config directly from Consul, your consistency guarantees match Consul's consistency guarantees. I'm guessing you're worried about performance if you lose your in-memory config; that's something you need to measure. If you can tolerate the performance loss, though, this will save you a lot of pain.
If performance is a problem here, a variation on this might be to use fsconsul. With this, you'll still extract your JSON into multiple key/value sets in Consul, and fsconsul will map those to files for your apps.
If that's off the table, then the question is how much inconsistency you are willing to tolerate.
If you can stand a few seconds of inconsistency, your best bet might be to put a TTL (time-to-live) on your in-memory config. You'd still have the watch on Consul, but you combine it with evicting your in-memory cache every few seconds, as a fallback in case the watch fails (or stalls) for some reason. This gives you a worst case of a few seconds of inconsistency (depending on the value you set for your TTL), but the normal case should (I think) be fast.
If that's not acceptable (does downloading the zip take a long time, maybe?), you can go down the route you mentioned. To update a value atomically you can use Consul's cas (check-and-set) operation. It will refuse the write if an update happened between the time you read the value and the time Consul tried to apply your change. In that case you pull the list of machines again, re-apply your change, and retry until it succeeds.
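As a rough illustration of that cas flow against Consul's HTTP KV API (a sketch only; the key name and the colon-separated IP format are just taken from the question):

require 'net/http'
require 'json'
require 'base64'

CONSUL = "http://localhost:8500"       # local Consul agent (assumption)
KEY    = "deploys/current/machines"    # hypothetical key holding the IP list

# Returns [current_value, modify_index]; index 0 means "create only if absent".
def read_key
  res = Net::HTTP.get_response(URI("#{CONSUL}/v1/kv/#{KEY}"))
  return ["", 0] if res.code == "404"
  entry = JSON.parse(res.body).first
  [Base64.decode64(entry["Value"].to_s), entry["ModifyIndex"]]
end

# Add this machine's IP to the list, retrying if another machine updated the
# key between our read and our write.
def register(my_ip)
  loop do
    value, index = read_key
    ips = value.split(":").reject(&:empty?)
    return if ips.include?(my_ip)
    ips << my_ip

    uri = URI("#{CONSUL}/v1/kv/#{KEY}?cas=#{index}")
    res = Net::HTTP.start(uri.host, uri.port) do |http|
      http.send_request("PUT", uri.request_uri, ips.join(":"))
    end
    return if res.body.strip == "true"   # "false" means we lost the race; try again
  end
end

The important part is passing the ModifyIndex you read back as the cas parameter; Consul returns false instead of applying the write if anyone changed the key in between.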
I don't see why you would need two directories, but maybe I'm misunderstanding the question: when your app starts, before you do anything else, you check whether there's a new config, and if there is, you download it and load it into memory. So you shouldn't have a "default config" if you want to be consistent. After you've downloaded the config on startup, you're up and alive. When your watch signals a key change, you download the config and directly overwrite your old config. This assumes you're running the watch-triggered code on a single thread, so you're not downloading the file multiple times in parallel. If the download fails, it's not like you're going to load the corrupt file into memory. And if you crash mid-download, you'll download again on startup, so you should be fine.
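A sketch of that watch loop using Consul's blocking queries over plain HTTP (the key name and the download/reload step are placeholders); the first iteration returns immediately, which doubles as the check-on-startup described above:

require 'net/http'
require 'json'

CONSUL = "http://localhost:8500"   # local Consul agent (assumption)
KEY    = "deploys/current"         # hypothetical key holding the zip reference

# Long-poll the key: Consul holds the GET open until the value changes (or the
# wait time elapses), so changes are picked up almost immediately.
index = 0
loop do
  uri = URI("#{CONSUL}/v1/kv/#{KEY}?index=#{index}&wait=5m")
  res = Net::HTTP.start(uri.host, uri.port, read_timeout: 330) { |http| http.get(uri.request_uri) }
  unless res.code == "200"
    sleep 1   # back off on errors so we don't spin
    next
  end
  new_index = res["X-Consul-Index"].to_i
  next if new_index == index   # wait timed out with no change
  index = new_index
  entry = JSON.parse(res.body).first
  # entry["Value"] is base64-encoded; decode it, download the zip it points to,
  # unpack it, and swap the in-memory config here (all on this single thread).
end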

Frequently updating a large JSON file on Amazon S3 and potential write conflict

I first want to give a little overview of what I'm trying to tackle. My service frequently fetches posts from various sources such as Instagram, Twitter, etc., and I want to store the posts in one large JSON file on S3. The file name would be something like: {slideshowId}_feed.json
My website will display the posts in a slideshow, and the slideshow will simply poll the S3 file every minute or so to get the latest data. It might even poll another file, such as {slideshowId}_meta.json, that has a timestamp of when the large file last changed, in order to save bandwidth.
The reason I want to keep the posts in a single JSON file is mainly to save cost. I could have each source as its own file, e.g. {slideshowId}_twitter.json, {slideshowId}_instagram.json, etc., but then the slideshow would need to send a GET request for every source file every minute, thus increasing the cost. We're talking about thousands of slideshows running at once, so the cost needs to scale well.
Now back to the question. There may be more than one instance of the service running that checks Instagram and other sources for new posts, depending on how much I need to scale out. The problem with that is the risk of one service overwriting the S3 file while another one might already be writing to it.
Each service that needs to save posts to the JSON file would first have to GET the file, process it, check that the new posts are not duplicated in the JSON file, and then store the new or updated posts.
Could I have each service write the data to a queue like the Simple Queue Service (SQS) and then have some worker that takes care of writing the posts to the S3 file?
I thought about using AWS Kinesis, but it just processes the data from the sources and dumps it to S3. I need to process what has been written to the large JSON file as well, to do some bookkeeping.
I had the idea of using DynamoDB to store the posts (basically to do the bookkeeping), and then I would simply have the service query all the data needed for a single slideshow from DynamoDB and store it to S3. That way the services would simply send the posts to DynamoDB.
There must be some clever way to solve this problem.
OK, for your use case:
there are many users for a single large S3 file
the file is updated often
the file path (ideally) should be consistent, to make it easier to get and cache
the S3 file is generated by a process on an EC2 instance and updated once per minute
If the GET rate is less than 800 per second then AWS is happy with it. If not then you'll have to talk to them and maybe find another way. See http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
The file updates will be atomic so there are no issues with locking etc. See http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
Presumably, if a user makes a request "during" an update, they will see the old version. This behaviour is transparent to both parties.
File updates are "eventually" consistent. As you want to keep the URL the same, you will be updating the same object path in S3.
If you are serving across regions then the time it takes to become consistent might be an issue. For the same region it seems to take a few seconds. AWS doesn't seem to be very open about this, so it's probably best to test it for your use case. As your file is small and the updates are every 60 seconds, I would imagine it would be OK. You might have to state in your API description that updates actually take effect over a somewhat longer window than 60 seconds, to take this into account.
As EC2 and S3 run on different parts of the AWS infrastructure (EC2 in a VPC and S3 behind a public HTTPS endpoint), you will pay transfer costs from EC2 to S3.
I would imagine that you will be serving the S3 file via the S3 "pretend to be a website" feature (static website hosting). You will have to configure this too, but that is trivial.
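For the generator side, a minimal sketch with the aws-sdk-s3 gem (bucket name, region, and the feed contents are placeholders); each PUT replaces the object in a single atomic step, so pollers of {slideshowId}_feed.json only ever see a complete old or new version:

require 'aws-sdk-s3'   # gem install aws-sdk-s3
require 'json'
require 'time'

s3 = Aws::S3::Client.new(region: "us-east-1")   # region is an assumption

# Build the merged feed however you like; this is just a stand-in.
feed = { "updatedAt" => Time.now.utc.iso8601, "posts" => [] }

# One PUT per minute to the same key.
s3.put_object(
  bucket:        "my-slideshow-bucket",   # hypothetical bucket
  key:           "123_feed.json",         # {slideshowId}_feed.json
  body:          JSON.generate(feed),
  content_type:  "application/json",
  cache_control: "max-age=60"             # keep caching aligned with the 60-second poll interval
)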
This is what I would do:
The Kinesis stream would need to have enough capacity to handle writes from all your feed producers. For about $25/month you get to do 2,000 writes per second.
The Lambda would simply be fired whenever there are enough new items on your stream. You can configure the trigger to wait for 1,000 new items and then run the Lambda to read all the new items from the stream, process them, and write them to Redis (ElastiCache). Your bill for that should be well under $10/month.
Smart key selection would take care of duplicate items. You can also set the items to expire if you need to. From your description, your items should definitely fit into memory, and you can add instances if you need more capacity for reading and/or reliability. Running two Redis instances with enough memory to handle your data would cost around $26/month.
Your service would use Redis instead of S3, so you would only pay for data transfer, and only if your service is not on AWS (<$10/month?).
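A sketch of what that Lambda might look like on the Ruby runtime (the Redis endpoint and the post fields used to build the key are assumptions about your payload; the key choice is what gives you de-duplication):

require 'json'
require 'base64'
require 'redis'   # packaged with the function (gem install redis)

REDIS = Redis.new(host: ENV.fetch("REDIS_HOST", "localhost"))

# Handler for a Kinesis-triggered Lambda.
def handler(event:, context:)
  event["Records"].each do |record|
    post = JSON.parse(Base64.decode64(record["kinesis"]["data"]))
    # Keying by slideshow + source + post id makes re-delivered duplicates a harmless overwrite.
    key = "slideshow:#{post["slideshowId"]}:#{post["source"]}:#{post["id"]}"
    REDIS.set(key, post.to_json, ex: 7 * 24 * 3600)   # optional expiry
  end
end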

EC2 suitability for synching large CSV files from an FTP

I have to execute a task twice a week. The task consists of fetching a 1.4 GB CSV file from a public FTP server, processing it (applying some filters, discarding some rows, making some calculations), and then syncing it to a Postgres database hosted on AWS RDS. For each row I have to retrieve a SKU entry from the database and determine whether it needs an update or not.
My question is whether EC2 could work as a solution for me. My main concern is memory. I have looked at some solutions, such as https://github.com/goodby/csv, which handle this issue by fetching row by row instead of pulling it all into memory, but they do not work if I try to read the .csv directly from the FTP server.
Can anyone provide some insight? Is AWS EC2 a good platform to solve this problem? How would you deal with the CSV size and the memory limitations?
You won't be able to stream the file directly from FTP; instead, you are going to copy the entire file and store it locally. Using the curl or ftp command is likely the most efficient way to do this.
Once you do that, you will need to write some kind of program that reads the file a line at a time, or several lines at a time if you can parallelize the work. There are ETL tools available that will make this easy. Using PHP can work, but it's not a very efficient choice for this type of work and your parallelization options are limited.
Of course you can do this on an EC2 instance (you can do almost anything you can supply the code for in EC2), but if you only need to run the task twice a week, the EC2 instance will be sitting idle, eating money, the rest of the time, unless you manually stop and start it for each task run.
A scheduled AWS Lambda function may be more cost-effective and appropriate here. You are slightly more limited in your code options, but you can give the Lambda function the same IAM privileges to access RDS, and it only runs when it's scheduled or invoked.
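As a sketch of that approach (download first, then stream line by line) in Ruby with the standard library; the host, paths, and filter are placeholders:

require 'net/ftp'
require 'csv'

# Download the whole file first, then stream it row by row so the
# 1.4 GB never has to fit in memory.
Net::FTP.open("ftp.example.com") do |ftp|
  ftp.login                                  # anonymous; pass user/password if required
  ftp.getbinaryfile("/pub/products.csv", "/tmp/products.csv")
end

CSV.foreach("/tmp/products.csv", headers: true) do |row|
  next if row["price"].to_f <= 0             # example filter; apply your real rules here
  # look up the SKU in the RDS Postgres database and insert/update as needed
end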
The FTP protocol doesn't do "streaming". You cannot read a file from FTP chunk by chunk.
Honestly, downloading the file and spinning up a bigger instance is not a big deal if you only run it twice a week. Just choose an r3.large (it costs less than $0.20/hour), execute ASAP, and stop it. The internal SSD disk should give you the best possible I/O compared to EBS.
Just make sure your OS and code are deployed on EBS for future reuse (unless you have an automated code deployment mechanism). And you must make sure RDS can handle the burst I/O, otherwise it will become the bottleneck.
Even better: using an r3.large instance, you can split the CSV file into smaller chunks, load them in parallel, then shut down the instance after everything finishes. You just need to pay the minimal root EBS storage cost afterwards.
I would not suggest Lambda if the process is lengthy, since Lambda is only meant for short, fast processing (it will terminate after 300 seconds).
(update):
If you open up a file, the simple way to parse it is to read it sequentially, which may not put the CPU to full use. You can split up the CSV file following the answer referenced here.
Then, using the same script, you can process the parts simultaneously by sending some of them to background processes; the example below shows putting Python processes in the background under Linux.
parse_csvfile.py csv1 &
parse_csvfile.py csv2 &
parse_csvfile.py csv3 &
So instead of single-file sequential I/O, it will make use of multiple files. In addition, splitting the file should be a snap on an SSD.
So I made it work like this.
I used Python and two great libraries. First of all, I wrote Python code to request and download the CSV file from the FTP server so I could load it into memory. The first package is Pandas, a tool for analyzing large amounts of data that includes methods for reading CSV files easily. I used its built-in features to filter and sort: I filtered the large CSV by a field and created about 25 new, smaller CSV files, which allowed me to deal with the memory issue. I also used Eloquent, which is a library inspired by Laravel's ORM. It lets you create a connection using the AWS public DNS, database name, username and password, and make queries using simple methods, without writing a single Postgres query. Finally, I created a T2 micro AWS instance, installed Pandas and Eloquent, updated my code, and that was it.

Storing an image in the database vs the file system (is this a valid use case?)

I have an application where every user gets their own database but runs from the same filesystem folder (the database is determined by subdomain).
Storing images in the filesystem could lead to conflicts. I'd imagine the uploaded images would be small (I would scale them down before storing them).
Is it OK in this case to store them in the database?
(I know this has been asked a lot.)
I also want to make my application easy to install, and creating a writable folder is hard for some people.
To take the contrary view from Nathanial -- I find it easier to use the database to store opaque data like images. When you back up the database, you automatically get a backup of the images. Also, you can retrieve, update, or delete the image along with all the other data in integrated SQL queries; keeping the files separate means writing much more complex code that has to go out to the filesystem to maintain data integrity every time you issue certain SQL queries. Locking can be a big problem, and transaction processing (especially rollback) an even bigger one.
Seems like you've already sort of talked yourself into it, but in my experience it's better to store files in a filesystem and data in a database. Use GUIDs for the file names if you are worried about conflicts.
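A tiny Ruby sketch of the GUID idea (the helper name is made up):

require 'securerandom'

# Keep the original extension but store the file under a UUID, so two users
# uploading "photo.jpg" never collide.
def storage_name(original_filename)
  "#{SecureRandom.uuid}#{File.extname(original_filename).downcase}"
end

storage_name("photo.jpg")   # => e.g. "6f1c2c9e-8c1e-4b9a-9a47-0c2d5a1e7f44.jpg"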
Pasting my answer from a similar post: I have implemented both solutions (file system and database-persisted images) in previous projects. In my opinion, you should store images in your database. Here's why:
File system storage is more complicated when your app servers are clustered. You have to have shared storage. Even if your current environment is not clustered, this makes it more difficult to scale up when you need to.
You should be using a CDN for your static content anyways, and set your app up as the origin. This means that your app will only be hit once for a given image, then it will be cached on the CDN. CloudFront is dirt cheap and simple to set up...there's no reason not to use it. Save your bandwidth for your dynamic content.
It's much quicker (and thus cheaper) to develop database-persisted images.
You get referential integrity with database-persisted images. If you're storing images on the file system, you will inevitably have orphan files with no matching database records, or you'll have database records with broken file links. This WILL happen...it's just a matter of time. You'll have to write something to clean these up.
Anyways, my two cents.
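If you do go the database route, here's a minimal sketch with the mysql2 gem, assuming a hypothetical images table with a MEDIUMBLOB column (schema, credentials, and file names are placeholders):

require 'mysql2'   # gem install mysql2

# Hypothetical schema: images(id, user_id, content_type, data MEDIUMBLOB)
client = Mysql2::Client.new(host: "localhost", username: "app",
                            password: "secret", database: "photos")

# Store a scaled-down image as a blob alongside its metadata.
data = File.binread("avatar_small.jpg")
client.prepare("INSERT INTO images (user_id, content_type, data) VALUES (?, ?, ?)")
      .execute(42, "image/jpeg", data)

# Read it back through the same access path (and backups) as the rest of the user's data.
row = client.prepare("SELECT content_type, data FROM images WHERE user_id = ?").execute(42).first
File.binwrite("out.jpg", row["data"]) if row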

Uploaded files - database vs filesystem, when using Grails and MySQL

I know this is something of a "classic question", but does the MySQL/Grails stack (deployed on Tomcat) put a new spin on how to approach storage of users' uploaded files?
I like using the database for everything (simpler architecture; scaling is just scaling the database). But using the filesystem means we don't lard up MySQL with binary files. Some might also argue that Apache (httpd) is faster than Tomcat at serving binary files, although I've seen numbers showing that just putting Tomcat at the front of your site can be faster than using an Apache (httpd) proxy.
How should I choose where to place user's uploaded files?
Thanks for your consideration, time and thought.
I don't know if one can make general observations about this kind of decision, since it really comes down to what you are trying to do and how high up the priority list NFRs (non-functional requirements) like performance and response time are for your application.
If you have lots of users uploading lots of binary files, and a system serving large numbers of those uploaded binary files, then the costs of storing the files in the database include:
Large binary files in the database
Costly queries
The benefits are:
Atomic commits
Scaling comes with the database (though with MySQL there are some issues with multi-node setups, etc.)
Less fiddly and complicated code than managing the filesystem
Given the same user load, if you store to the filesystem you will need to address:
Scaling
File name management (a user uploads the same file name twice, etc.)
Creating corresponding records in the DB to map to the files on disk (and the code surrounding all that)
Looking after your Apache configs so they serve from the filesystem
We had a similar problem to solve for our Grails site, where the content editors upload hundreds of pictures a day. We knew that driving all that demand through the application, when the application could be better used doing other processing, was wasteful (given that the expected demand for pages was going to be in the millions per week, we definitely didn't want images to cripple us).
We ended up creating an upload -> filesystem solution. For each uploaded file a DB metadata record was created and managed in tandem with the upload process (and conversely, that record was read when generating the GSP content link to the image). We served requests off disk through Apache directly, based on the link requested by the browser. But, and there is always a but, remember that with things like filesystems the content is local to each machine.
We had the headache of making sure images got re-synchronised onto every server, since unlike a DB, which sits behind the cluster and enables the cluster to behave uniformly, files are bound to physical locations on a server.
Another problem you might run up against with filesystems is folder size. When you start having folders containing literally tens of thousands of files, the folder scan at the OS level starts to really drag. To avoid this we had to write code that managed image uploads into yyyy/MM/dd/image.name.jpg folder structures, so that no one folder accumulated hundreds of thousands of images.
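The date-based layout is simple to implement; here's a sketch of the idea in Ruby (paths and the metadata step are placeholders, and the Groovy equivalent in Grails looks much the same):

require 'fileutils'

UPLOAD_ROOT = "/var/uploads"   # placeholder base directory served by Apache

# Spread uploads across yyyy/MM/dd folders so no single directory ends up
# holding hundreds of thousands of files.
def store_upload(original_name, data)
  dir = File.join(UPLOAD_ROOT, Time.now.strftime("%Y/%m/%d"))
  FileUtils.mkdir_p(dir)
  path = File.join(dir, original_name)   # in practice, also de-duplicate/sanitise the name
  File.binwrite(path, data)
  # create the corresponding DB metadata record (path, original name, size, mime type) here
  path
end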
What I'm implying is that while we got the performance we wanted by not using the DB for BLOB storage, that comes at the cost of development overhead and systems management.
Just as an additional suggestion: JCR (e.g. Jackrabbit), a Java Content Repository. It has several benefits when you deal with a lot of binary content. The Grails plugin isn't stable yet, but you can use Jackrabbit with the plain API.
Another thing to keep in mind is that if your site ever grows beyond one application server, you need to access the same files from all app servers. All app servers already have access to the database, either because it's a single server or because you have a cluster. But if you store things in the file system, you have to share that too - maybe via NFS.
Even if you upload files to the filesystem, all the files get the same permissions, so any logged-in user can access any other user's file just by entering the URL (since they all have the same permissions). If you instead plan to give each user a directory, those directories get the Apache user's permissions (i.e. whatever the server has). You would have to su to root, create a user, and upload files into those directories; and then accessing those files could end up requiring you to add the user's group to the server's group. If I choose to use the filesystem to store binary files, is there an easier solution than this? How do you manage access to those files per user and maintain the permissions? Does Spring's ACL help, or do we have to create a permission group for each user? I am totally fine with the filesystem URL approach. My only concern is with starting a separate process (chmod and the like), using something like ProcessBuilder to run operating system commands (or is there a better solution?). And what about permissions?