Ruby on Rails - how to update my Rails database (MySQL) with values from a CSV file on a daily basis?

I have a CSV file that contains very detailed data on the products my company sells, and it gets updated daily.
I want my Rails app to import the data from the CSV file and then update my MySQL database whenever new changes are found.
What's the best way to achieve this? Some people have mentioned MySQL for Excel. Would this be the way to go about it?
I would appreciate any guidance on this. Thank you.

I'm not going to walk through the details, mainly because you haven't provided any details or shown anything you've attempted, so I'll stick to an overview.
From a systems point of view, assuming your Rails app is deployed and not just running locally, I would:
have the CSV file live in a place where you (or whoever needs to) can update it and the application can also fetch it from (Dropbox, an S3 bucket, your own server, whatever).
have a daily cron rake task that downloads the CSV file.
parse the CSV file and decide what to update.
The trickiest part will be deciding what to update from the CSV, and that will depend on how the file itself can change: whether only new lines can be added, whether lines can be removed, whether columns within lines can change, and so on.
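To make that overview concrete, here is a minimal sketch of such a rake task in Ruby. The CSV URL (read from a hypothetical PRODUCTS_CSV_URL environment variable), the Product model, its sku column, and the CSV headers are all assumptions; adapt them to your actual schema and file.

    # lib/tasks/import_products.rake
    # Minimal sketch: download the daily CSV and upsert rows into an assumed
    # Product model keyed by an assumed `sku` column. Names are placeholders.
    require "csv"
    require "open-uri"

    namespace :products do
      desc "Download the daily CSV and update the products table"
      task import: :environment do
        csv_url  = ENV.fetch("PRODUCTS_CSV_URL") # hypothetical: wherever the file lives
        csv_data = URI.open(csv_url).read

        CSV.parse(csv_data, headers: true) do |row|
          # Assumed headers: sku, name, price
          product = Product.find_or_initialize_by(sku: row["sku"])
          product.assign_attributes(name: row["name"], price: row["price"])
          product.save! if product.changed? # only hit the database when something changed
        end
      end
    end

You would then schedule rake products:import daily with cron (or the whenever gem). Handling removed lines, e.g. deleting or flagging products that no longer appear in the file, depends on how the CSV can change, as noted above.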

Related

How to Import a CSV file into an entity automatically?

Is there a way to import a CSV file into a CRM record automatically, say when the CSV file is created?
The plan is that this CSV file would contain some cost center hours and a job number that corresponds to a record already created in CRM.
Uploading this CSV would then update that record.
Please help me solve this problem.
You can import data in D365 using both the UI and code.
There are also plenty of tools that solve exactly this problem: KingswaySoft SSIS, Scribe, etc.
But buying third-party software might be overkill in your scenario. You can use Windows Task Scheduler and write a few PowerShell scripts to implement it.
Where to start:
https://learn.microsoft.com/en-us/dynamics365/customerengagement/on-premises/developer/import-data
https://learn.microsoft.com/en-us/dynamics365/customerengagement/on-premises/developer/sample-import-data-complex-data-map
https://github.com/seanmcne/Microsoft.Xrm.Data.PowerShell
https://learn.microsoft.com/en-us/dynamics365/customerengagement/on-premises/developer/define-alternate-keys-entity

How to save new Django database entries to JSON?

The git repo for my Django app includes several .tsv files which contain the initial entries to populate my app's database. During app setup, these items are imported into the app's SQLite database. The SQLite database is not stored in the app's git repo.
During normal app usage, I plan to add more items to the database using the admin panel. However, I also want to get these entries saved as fixtures in the app repo. I was thinking that a JSON file might be ideal for this purpose, since it is text-based and so works with git version control. These files would then become additional fixtures for the app, which would be imported upon initial configuration.
How can I configure my app so that any time I add a new entry through the admin panel, a copy of that entry is saved to a JSON file as well?
I know that you can use the manage.py dumpdata command to dump the entire database to JSON, but I do not want the entire database, I just want JSON for new entries of specific database tables/models.
I was thinking that I could try to hack the save method on the model to try and write a JSON representation of the item to file, but I am not sure if this is ideal.
Is there a better way to do this?
Overriding the save method for something that can go wrong or take longer than it should is not recommended. You usually override save when the changes are simple and important.
You could use signals, but in your case that's too much work. You could instead write a function that does this for you, though not necessarily right after you save the data to the database. You can do it right away, but that's a lot of extra processing unless it's really important for your file to stay up to date.
I recommend using something like Celery to run a function in the background, separate from all of your Django functions. You can call it on every data update, or every hour for example, and update your backup file. You can even create a table to monitor the update process.
Which solution is best depends heavily on you and on how important the data is. Keep in mind that editing a file can be a heavy process too, so creating a backup once a day, for example, might be a better idea anyway.

What is the best way to routinely import a CSV or XML file into an MS Access database?

I have an Access database that keeps track of many different aspects of my company's performance, and I would like to add functionality to keep track of the hours the employees are working.
The hours are all tracked on a website called timetracker. It has a few reporting options, including XML and CSV files, and a favorite report feature to get the same data in the format I want every week.
What I would like to do is find the best process for getting the data from this website into a table in my database that I can reference.
I will not be the one executing whatever process I come up with, and I would really like it to be as easy as possible for whoever does have to do it.
Right now I have a linked table that is an XML file in our SharePoint folder. I was thinking that maybe we could just run the report and download the file every week, then save it over the old file with the correct sheet names, and the linked table should update.
What I am wondering is whether anyone can come up with an easier process for doing this, one that takes the least amount of time and is easiest to write down instructions for so that anyone could execute it.
(Would it maybe be possible to create some sort of macro to actually download the report automatically?)

Insert CSV file into MySQL with user ID

I'm working on a membership site where users are able to upload a CSV file containing sales data. The file will then be read and parsed, and the data will be charted, which will allow me to create charts dynamically.
My question is how to handle this CSV upload. Should it be uploaded to a folder and stored for later, or should it be inserted directly into a MySQL table?
Depends on how much processing needs to be done, I'd say. If it's "short" data and processing is quick, then your upload-handling script should be able to take care of it.
If it's a large file and you'd rather not tie up the user's browser/session while the data's parsed, then do the upload-now-and-deal-with-it-later option.
It depends on how you think the users will use this site.
What do you estimate the size of the files for these users to be?
How often (if ever) would they upload a file twice? Can they download the charts?
If the files are small and mostly for one-off use, you could upload them and process them on the fly; if they require repeated access and analysis, then you will save the users time by importing the data into the database.
The LOAD DATA INFILE command in MySQL handles uploads like that really nicely. If you create the table you want to load into and then use that command, it works great and is super quick; I've loaded several thousand rows of data in under 5 seconds using it.
http://dev.mysql.com/doc/refman/5.5/en/load-data.html
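As a rough illustration only (the question doesn't say which language the site is built in), here is a sketch of running LOAD DATA LOCAL INFILE from Ruby with the mysql2 gem. The sales table, its columns, the CSV layout, and the connection details are all assumptions, and local_infile must be enabled on both the client and the server.

    require "mysql2"

    # Sketch only: table name, columns, and CSV layout are assumptions.
    # LOAD DATA LOCAL INFILE needs local_infile: true on the client and
    # the local_infile option enabled on the MySQL server.
    client = Mysql2::Client.new(
      host: "localhost", username: "app", password: "secret",
      database: "membership", local_infile: true
    )

    csv_path = "/tmp/sales_upload.csv" # hypothetical uploaded file
    user_id  = 42                      # the member who uploaded it

    client.query(<<~SQL)
      LOAD DATA LOCAL INFILE '#{client.escape(csv_path)}'
      INTO TABLE sales
      FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
      IGNORE 1 LINES
      (product, quantity, amount, sold_on)
      SET user_id = #{user_id.to_i}
    SQL

The SET clause stamps every imported row with the uploading member's ID, which is what ties the file to the user.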

What's the best way of backing up a rails app data?

I need to make a backup system for my Rails app, but it has to be a little special: it doesn't have to back up all the database info and files into a single file or folder; rather, it has to back up the database info and attachment files per user. I mean, every one of these backups should be able to regenerate all the information and files for one single user.
My questions are:
Is this possible? What's the best way to do it? And, if it's impossible or a bad idea at all, why is it?
Note: The database is a MySQL one.
Note 2: I used Paperclip for the users' uploads.
I'm guessing you have an app that backs up data when a user clicks on something, right? I'm thinking: get all the info connected to the user (it depends on how you built your user model, so maybe you should have a get_all_info method), then write it out in SQL format to a file, which you save as .sql (using either File.new or Logger.new).
I would dump the entire user object and its related objects into a single XML file. As you go through the creation of the XML, grab all the files and write the XML plus all the files into one directory, then compress it.
I think there are definitely use cases for a feature like this, but be sure to have it run in a background process, and only when needed, so it doesn't bog down the web server. Take a look at http://github.com/tobi/delayed_job or http://github.com/defunkt/resque.
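As a rough sketch of that background-process idea, here is what a per-user backup job could look like. The model names, the has_many :uploads association, the Paperclip attachment named document, and the output layout are all assumptions; in practice you would enqueue the job via delayed_job or resque as suggested above.

    # Rough sketch of a per-user backup job; enqueue it with delayed_job/resque.
    # Model names, the uploads association, and the Paperclip attachment
    # (`document`) are assumptions; adapt them to your app.
    require "fileutils"
    require "json"

    class UserBackupJob
      def self.perform(user_id)
        user = User.find(user_id)
        dir  = Rails.root.join("backups", "user_#{user.id}_#{Time.now.to_i}")
        FileUtils.mkdir_p(dir)

        # 1. Dump the user's rows (JSON here; SQL or XML would work just as well).
        data = {
          user:    user.attributes,
          uploads: user.uploads.map(&:attributes) # assumed has_many :uploads
        }
        File.write(dir.join("data.json"), JSON.pretty_generate(data))

        # 2. Copy the user's Paperclip files next to the dump.
        user.uploads.each do |upload|
          path = upload.document.path # assumed Paperclip attachment `document`
          FileUtils.cp(path, dir) if path && File.exist?(path)
        end

        # 3. Compress everything into a single per-user archive.
        system("tar", "-czf", "#{dir}.tar.gz", "-C", dir.dirname.to_s, dir.basename.to_s)
      end
    end

    # e.g. UserBackupJob.delay.perform(user.id) with delayed_job

Each archive then contains everything needed to restore one user's data and attachments on its own.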