Is it a good idea to manage a database with CSV files? - mysql

I'm working on a feature that imports users into a MySQL database.
The initial goal of this feature was to add new users from CSV files.
Gradually my client wants more from this tool. For instance, if the CSV file contains a row for a user that already exists, that user's data should be updated (the feature overwrites the old data with the new). So we implemented it.
After that, he wanted to be able to update users (i.e. remove data, add data, et cetera), but he already has a user interface to do that.
I feel we're going about this the wrong way.
What do you think about this? Is it a good idea to manage a database with CSV files?

I can see no problem using CSV to do that. You need to define a clear file format which specifies the object type and the action, for example:
<object type>;<action>;<value1>;<value2>;etc…
So you can have
user;add;Bob;Stone;fr
then
user;update;Bob;Stone;uk
then
user;del;Bob;Stone
etc…
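For illustration, here is a minimal sketch of how such a file could be parsed and dispatched. It is written in TypeScript on Node purely as an assumption about your stack, and addUser/updateUser/deleteUser are hypothetical placeholders for whatever persistence layer (MySQL or otherwise) you already have:
import { readFileSync } from "fs";

// Hypothetical persistence helpers -- wire these to your real data layer.
function addUser(first: string, last: string, country: string): void { /* INSERT ... */ }
function updateUser(first: string, last: string, country: string): void { /* UPDATE ... */ }
function deleteUser(first: string, last: string): void { /* DELETE ... */ }

// Read the import file and dispatch each row of the form <object type>;<action>;<values...>
const lines = readFileSync("import.csv", "utf8").split(/\r?\n/).filter(l => l.trim() !== "");
for (const line of lines) {
  const [objectType, action, ...values] = line.split(";");
  if (objectType !== "user") continue; // only user rows are handled in this sketch
  switch (action) {
    case "add":    addUser(values[0], values[1], values[2]); break;
    case "update": updateUser(values[0], values[1], values[2]); break;
    case "del":    deleteUser(values[0], values[1]); break;
    default: console.warn(`Unknown action "${action}": ${line}`);
  }
}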

Related

Save HTML table data locally with Electron

I'm new to coding and it's a lot of trial and error. Now I'm struggling with HTML tables.
For explanation: I am building an Electron desktop application for stocks. I am able to input values via the GUI into an HTML table, and also export this as an Excel file. But every time I reload the app, all data from the table is gone. It would be great to save this data permanently, and simply add new data to the existing table after an application restart.
What's the best way to achieve this?
In my mind, the best way would be to overwrite the existing Excel file with the new work (old and new data from the table), because it would make it easy to install the tool on a new PC and simply import the Excel file to have all the data there. I don't have access to a web server, so I think a local Excel file would be better than a PHP solution.
Thank you.
<table class="table" id="tblData">
  <tr>
    <th>Teilenummer</th>
    <th>Hersteller</th>
    <th>Beschreibung</th>
  </tr>
</table>
This is the actual table markup.
Your question has two parts, it seems to me.
1. data representation and manipulation
2. data persistence
For #1, I'd suggest taking a look at Tabulator, in particular its methods of importing and exporting data. In my projects, I use the JSON format with Tabulator and save the data locally so it persists between sessions.
So for #2, how and where to save the data? Electron has built-in methods for getting the paths to common user directories. See app.getPath(name). Since it sounds like you have just one file to save, which does not need to be directly accessible to the user, appData is probably a good place to store it.
As for the "how" to store it: you can just write a file to that path using Node's fs module, though I like fs-jetpack too. Tabulator can save data as well.
Another way to store data is with electron-store. It works very well, though I've only used it with small amounts of data.
So the gist is that when your app starts, it loads the data, and when the app quits, it saves the data along with any changes that have been made, though I'd suggest saving after every change.
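To make that load-on-start / save-on-change idea concrete, here is a minimal sketch using Electron's app.getPath and Node's fs; the file name stock-table.json is just an assumed example, and you would swap the row type for whatever Tabulator gives you:
import { app } from "electron";
import * as fs from "fs";
import * as path from "path";

// Hypothetical file name; appData is the per-user application data directory.
const dataFile = path.join(app.getPath("appData"), "stock-table.json");

// Load previously saved rows (returns [] on first run or if the file is missing).
export function loadRows(): unknown[] {
  try {
    return JSON.parse(fs.readFileSync(dataFile, "utf8"));
  } catch {
    return [];
  }
}

// Persist the current table rows; call this after every change (or on app quit).
export function saveRows(rows: unknown[]): void {
  fs.writeFileSync(dataFile, JSON.stringify(rows, null, 2), "utf8");
}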
So, there are lots of options depending on your needs.

"POST" form data to XLS/CSV

So I'm trying to essentially "POST" data from a form in an offline webpage to an Excel spreadsheet, a CSV, or even just a TXT file. Now I have seen this to be possible using ActiveX in Internet Explorer; however, the methods I saw were pretty specific to those users' code, so, being a beginner, I got a bit lost. Also, some recommended using an offline database via JS, but I'm not sure where to begin with that.
Can anyone offer some insight on this? Is it possible? What would be the best route to take?
There are many ways to accomplish this. The best solution will be the one that suits your specific requirements. Obviously, creating a text/CSV file is easier than creating an XLS. That said, the basic pseudo code is as follows:
Collect form data
Create an (in-memory or temporary) file from the collected form data.
Return the file as a download to the client, or just save it to some location, OR (best option) insert a row into a database. A client-side sketch of the first two steps follows.
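As a concrete example of doing the first two steps entirely client-side (no ActiveX, no server), here is a small TypeScript sketch that collects a form's fields and offers them back to the user as a CSV download; the form id "entryForm" and the file name are assumptions:
// Collect the form's fields and offer them back as a CSV download (works offline).
function downloadFormAsCsv(form: HTMLFormElement, fileName = "form-data.csv"): void {
  const data = new FormData(form);
  const headers: string[] = [];
  const values: string[] = [];
  data.forEach((value, key) => {
    headers.push(key);
    // Quote values so commas or quotes inside them don't break the CSV.
    values.push(`"${String(value).replace(/"/g, '""')}"`);
  });
  const csv = headers.join(",") + "\n" + values.join(",");
  const blob = new Blob([csv], { type: "text/csv" });
  const link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = fileName;
  link.click();
  URL.revokeObjectURL(link.href);
}

// Example usage, assuming a form with id "entryForm" exists on the page.
downloadFormAsCsv(document.querySelector("#entryForm") as HTMLFormElement);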

import csv file

I need to pull data from a CSV file into a SQL Server table. Which control task should I use? Is it Flat File? What is the correct method to pull the data?
The problem is that I have used the Flat File task for pulling the CSV file, but the CSV file I have contains a heading as the first row, then the column names on the third row, and the data starting from the fifth row.
Another problem is that in this file the column details come again after every 1000 rows of data, i.e. the column names appear in more than one row. Is it possible to pull the data? If so, how?
While Valentino's suggestion should work, I suggest that you first work with the provider of the file to get them to provide the data in a better format. When we get stuff like this we almost always push it back and ask for properly formatted data, and we get it about 90% of the time. It will save you work if they fix their own dreck. In our case, the customers providing the data are paying for our programming services, and when they understand how substantially it increases the cost to them, they are usually more than willing to accommodate our needs.
I believe you'll first have to transform your file into a proper CSV file so that the SSIS Flat File Source component (Data Flow) can read it. If the source system cannot produce a real CSV file, we usually create custom .NET applications for the cleanup/conversion task.
An Execute Process task (Control Flow) that executes the custom app can then be called prior to the Data Flow.
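The answer above suggests a custom .NET application for the cleanup; purely to illustrate what that cleanup step has to do (any language works), here is a short sketch in TypeScript/Node based on the layout described in the question, with assumed input and output file names:
import { readFileSync, writeFileSync } from "fs";

// Assumed layout (from the question): row 1 = report heading, row 3 = column names,
// data starts at row 5, and the column-name row repeats after every 1000 data rows.
const lines = readFileSync("raw-export.csv", "utf8").split(/\r?\n/);
const columnRow = lines[2];                          // third row holds the column names
const dataRows = lines
  .slice(4)                                          // data starts on the fifth row
  .filter(l => l.trim() !== "" && l !== columnRow);  // drop blanks and repeated headers

// Write a proper CSV: one header row followed by data only.
writeFileSync("clean.csv", [columnRow, ...dataRows].join("\n"), "utf8");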

csv file upload

In my Grails app, I would like admin users to be able to upload a CSV file that contains data such as:
List of users to be added to system
List of groups to be added to system
Assignment of users to groups
I have no idea how the user will generate these CSV files - most likely from Excel, Access or similar, and therefore I've no way of knowing which column will contain which data. So I'm planning to allow the user to specify which column contains users, groups, etc.
I'm wondering if there's a JavaScript component that could help with this. Ideally I'd like to implement the following:
User uploads file
In browser, user is shown first N lines of uploaded file and prompted to select the column that contains the users, groups, etc.
Column information is uploaded to server
Is there a client/server side component that could help with this, or an entirely different approach which would be superior to that outlined above?
I should emphasise that the users of this system will not be technically gifted, so expecting them to provide an XML/JSON file instead is out of the question (and you can definitely forget about asking them to call a Web Service instead of uploading a file).
Thanks,
Don
I like your solution so far, given that the users are non-technical, and that you want to be able to accept this data as a file upload, rather than have the users enter it directly into your application.
I would simply suggest that when the user uploads the file, the server returns the first five (or so) lines back to the client as an HTML table. Then you can have <select> drop-downs as the headers for each column, with the pre-set options you're looking for. You can validate that the user has assigned all available options to each column (use JS to remove options from the selects as they use them, but be sure to provide a way to undo and change selections), and allow some columns to remain unlabeled (which the server will just ignore when parsing the file).
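A rough TypeScript sketch of that idea follows. It only builds the preview table with a dropdown per column; removing already-used options and the validation step are left out, and the label names and the shape of previewRows are assumptions (in practice the rows would come from the first few lines the server sends back):
// Build a preview table from the first few parsed CSV rows, with a <select> above each
// column so the user can say what that column contains.
function buildMappingTable(previewRows: string[][], labels: string[]): HTMLTableElement {
  const table = document.createElement("table");
  const headerRow = table.insertRow();
  const columnCount = previewRows[0]?.length ?? 0;
  for (let c = 0; c < columnCount; c++) {
    const th = document.createElement("th");
    const select = document.createElement("select");
    for (const label of ["-- ignore --", ...labels]) {
      const option = document.createElement("option");
      option.value = label;
      option.textContent = label;
      select.appendChild(option);
    }
    th.appendChild(select);
    headerRow.appendChild(th);
  }
  // Render the preview rows beneath the selects.
  for (const row of previewRows) {
    const tr = table.insertRow();
    for (const cell of row) {
      tr.insertCell().textContent = cell;
    }
  }
  return table;
}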
If possible, also illustrate (perhaps in a graph format or just an example sentence, if applicable) how their label choices will apply to the relationships. For example, "New user ABC will be a member of new group XYZ." If ABC and XYZ are unexpectedly backwards, the user will recognize they made a mistake.
Also, some users will inevitably upload a file where they used rows as columns and columns as rows. Either provide a GUI function to reverse this ("rotate" the table), or let them choose which axis to label.
I would also suggest providing your users with a collection of example files in various formats (Excel, Access, etc), and give them explicit instructions for how to enter the data they want, and step by step instructions to export as CSV and upload.
I have no idea how the user will generate these CSV files - most likely from Excel, Access or similar, and therefore I've no way of knowing which column will contain which data.
I should emphasize that the users of this system will not be technically gifted
With these two things in mind, are you sure that CSV import is the best way to handle bulk user creation? It's a great technical solution, but the question is, will your users be able to take advantage of it?
It may be worth implementing an alternative bulk create option for those who don't get CSV or are scared off by Excel. Perhaps a JS grid that has the required fields where they could manually enter the data for each field and enter as many as they need at once, with a link to upload a CSV file as an option for those who would use it.
For the CSV option, since your users are not technically minded, it would be better to give them instructions on how to create the CSV files, specifying the order the fields should be in, along with a screenshot and a sample file.
Another option is to require that the field names be the first row of the document, and require that they use specific labels for the fields. If you do that, you could figure out from the first row what order the data is in. You could also put in a check that looks for the titles in the first row, and if they're not found, tell the user they need to add the field names to the CSV and re-upload.
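A minimal sketch of such a check in TypeScript (the required field names here are made up; use whatever labels your import actually expects):
// Check that the first row of the uploaded CSV contains the required field names.
const REQUIRED_FIELDS = ["username", "group", "email"];

function validateHeader(firstLine: string): { ok: boolean; missing: string[] } {
  const headers = firstLine.split(",").map(h => h.trim().toLowerCase());
  const missing = REQUIRED_FIELDS.filter(f => !headers.includes(f));
  return { ok: missing.length === 0, missing };
}

// Example: tell the user which field names to add before re-uploading.
const result = validateHeader("Username,Email,Department");
if (!result.ok) {
  console.log(`Please add these column names to the first row and re-upload: ${result.missing.join(", ")}`);
}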

Updating an imported .csv in Hyperion v8.3

I have a CSV imported into my Hyperion v8.3 BQY file. I have some custom columns and a pivot already created. I just want to refresh the data. In the past, I would hit Process Current and it would direct me to my computer, where I could select the CSV file to update from. Now it will not do that. It doesn't go to my computer at all.
Any ideas?
Eric,
I'm not a power user, but I accomplish the same thing by ensuring that there is a file with the same name in the same location from which the original CSV was imported, and then using "Process All". This allows me to update the data, import it into my BQY, and automatically update any reporting based on the CSV.
Don't know if this will help or not.
Dennis
dennis.van.camp#vanderlande.com
When you import a file as a Section into Hyperion, it maintains a link to the existing file's exact path and file name. The only time it will prompt you for a new file is when that link is broken and that section gets a Process or Refresh command. Otherwise, it will refresh the data from the existing file.
So, if you want to force it to prompt you for a new file, you have to move or rename the old file.
But you're looking for the Pivot and Computed columns to refresh. Two things on that:
Computed Columns: You don't have to re-import the file if the rest of your data is current. Each column can be refreshed individually by right-clicking it, choosing Modify, then clicking OK. You don't have to change any of the code; just hit OK instead of Cancel.
Pivot: In the menu, you have to set your pivot options appropriately to update when you Process (when the underlying data is Processed, really), or Manually (when you Process that section).