File data propagation between controller actions in Spring MVC - MySQL

I have to process an Excel file in order to insert some data into a MySQL database in a web-based application. I am using Spring MVC as the architecture.
The problem is that I need to include a mid-step in which the user can review the data to be inserted before the actual insertion. So the typical process would be for the user to upload the file, then the application would show another webpage with the processed information and an "Apply changes" button that would take all this information and store it in my database.
At first, in order to propagate the data in the file throughout the three steps (the first page where you upload, the mid-step page, and the final controller action), I just used a form with hidden fields to store the data, so as to avoid having to process the file twice (once for the result presentation in the mid-step, and again for the actual data storing). The problem is the Excel file has so much information that my mid-step page gets overloaded and takes too long to render (and too long to process in the controller when retrieving the parameters).
So I thought about using either a temporary file or a temporary relation in my database to store the data between steps. Are there any other ways to propagate this data between controller actions? I wouldn't like to leave garbage data behind or process the file again (since it takes quite some time), so what would be the best approach for this?

I would go for a temporary table in this case, provided the processing doesn't take too long. You can use the same sequences as in the final table, and all you need to do is issue an "insert into ... select ... from" statement once the user clicks OK on the second screen. That would be a simple solution to implement, I'd say, and all the processing is already done.
If the processing part is huge and the user could check certain things before this big task, a solution could be to split the processing into two parts, store the first results in a temporary file, and build the review screen from that. After the user clicks OK, you can launch an async task that does the heavy lifting and cleans up the file after it's done.

Access linked Backend tables is slow to open for 1st table, then fast once record-locking file is created - can I keep this lock file open?

I have been working successfully so far with a database that has a split backend. During development the frontend and backend have been on the same drive, but now the backend is on a network drive in preparation for multiple users.
My issue is that whenever I now open a linked table (or a form that reads data from it) from the frontend, the first table is always slow to load. And if I then close that table and open another table, that one is also slow to load. But if I open any table and then open a second table, the second table and any others thereafter are fast.
It seems to be related to the record-locking file having to be created, and then removed when the table closes, meaning there's a delay while that file is created again for the next table.
Is there a way to create the record-locking file on application launch so that it remains available throughout a user's session?
Thank you
Yes, this is a widely known problem, and for years the suggested fix has been to adopt what is called a persistent connection.
If you are doing development, then often what one can do is simply click on a table (any linked table) to open it; now you can work, design forms, and not experience huge delays during the development process.
While the above fixes the delays while developing, the SAME concept and approach can be used when you run your application.
Like every application, you no doubt have some startup code. This code can go in the first form that you launch on application startup.
So, you can create a standard code module. Say it is Module1. In that module, you can place this code:
Option Compare Database
Option Explicit

Public rstPersist As DAO.Recordset

Sub MyOpenPersist()
    ' opens a back end table to force and keep
    ' open the connection for greatly improved speed
    Set rstPersist = CurrentDb.OpenRecordset("tblDefaults")
End Sub
And then in the FIRST form you launch on startup, in the on-load event, you can include this code:
Call MyOpenPersist
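In context, that call sits inside the startup form's Load event procedure; a minimal sketch of what that looks like (the comment is mine, added for illustration):
Private Sub Form_Load()
    ' runs once at startup: creates the lock file and keeps the
    ' backend connection open for the rest of the session
    Call MyOpenPersist
End Sub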
Now, this will open a table into a "global" rstPersist recordset.
From that point onwards, since the locking file has already been created (which is what takes a long time, as you note), the application, forms, and everything else will run without that delay.
It does not matter which table you choose to open; the only requirement is that it is a linked table, since you can (and might) have some local tables in the frontend for things like settings. So, in this case, I open a table called defaults (and it is great to have that table open, since it is a one-record table with defaults like city and default area code for phone numbers). In effect I kill two birds with one stone, since I need a table of defaults for many things in the application anyway.
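Not covered in the answer above, but if you also want to release the connection cleanly at shutdown, a companion routine could be as simple as this sketch (it just reuses the same module-level rstPersist):
Sub MyClosePersist()
    ' release the persistent connection when the application shuts down
    If Not rstPersist Is Nothing Then
        rstPersist.Close
        Set rstPersist = Nothing
    End If
End Sub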

Creating a project/web application of student test in which we collect inputs from "Form" and will store them in DB and will further access them & on

I am creating a project/web application in which we collect inputs from a form and store them in a DB. We will access the data in the future for further calculation and show the result.
So I want to know in which language I should start this project: PHP or Java?
Details of project/web application:
1. First there will be a pre-form which will just create a small entry in the DB
2. Then, on the basis of that small entry, the complete full test will be started
3. After completion of the test from step 2, the result will be stored in the same data entry for further calculation
4. After calculating, the result will be shown to the user
So how do we plan this kind of project? Do we need sessions to move between those form windows, or will normal jQuery do that?
Most importantly, if someone can walk me through the structure of this kind of project, or if someone has a similar pre-existing project, link, website, or web application they can refer me to, that would be really helpful.
I know I am sounding lame, but yes, I am a beginner in this.
Thanks.
First, the approach would be to use PHP, where you can store data and manipulate it later:
Making a login page would be wise because it lets you track which user is taking the test; you can use the session here.
Store the user's data in one table once they register, then provide the test and store the results in another table.
For showing results, you can use both tables to show the result for each user.

Why can't Access keep up with my VBA code when building tables in Access from existing XML files using VBA?

The situation more in depth:
I am dealing with around one million existing XML files, and I have created a program in VBA within Access 2016 that converts these files to a different type of XML with a new file name, and then saves them under the new file name in a different folder.
As the program builds these new files, it also adds entries to about 15 Access tables within the database in which the VBA code for the program exists.
The only issue I am having is that as my code increments the new file name and attempts to add it to the main Access table, Access "hangs up" and repeats several random increment numbers a random number of times, which ruins the table output and does not allow repeated new file names to save (or they save over the original).
Each time I run the program the result is different, but when I put a stop in my code after the completion of each file conversion and creation, and manually cycle through each file by hitting play and refreshing the table with the increment problem, the increment in my table works perfectly and the results are consistent.
I guess what I am asking is: is it normal, when using VBA code to build tables from existing files, for Access to "hang up" when adding a new record to a table using an incrementer?
If not, could I add a refresh of the table after I add a new record entry, or before I save the new file, to alleviate the issue, and how do I code that refresh of a table? (I already use .Update upon the completion of .AddNew.)
Or could I add a pause in similar locations to allow Access to catch up before VBA continues its calculations and processing?
Don't ask me how I figured it out, but through your answers, talking to colleagues, searching for related questions, and redefining the meaning of life, I found the answer!
The problem was not with my code but with how fast my system executed the code compared to how fast Access executed the .Update function. My program was getting ahead of the completion of Access's .Update table function, to the point where my incrementer would "hang up" and not see the new table entry because it was not there yet.
I added a pause of a few milliseconds after the .Update and, poof, a perfectly working incrementer and table-updating combo. I will be narrowing the delay time down to the minimum needed shortly, but my question is answered: Access's processing is a lot slower than VBA code calculations. Thanks everyone!
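For illustration only, a minimal sketch of that kind of pause, assuming a DAO recordset and the Windows Sleep API; the routine name, field name, and delay value are assumptions, not taken from the original code:
' module-level declaration of the Windows Sleep API
#If VBA7 Then
    Private Declare PtrSafe Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As Long)
#Else
    Private Declare Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As Long)
#End If

Sub AddFileRecord(rs As DAO.Recordset, strNewFileName As String)
    ' add the new file name to the table, then give Access a moment
    ' to finish the .Update before the next increment is calculated
    rs.AddNew
    rs!FileName = strNewFileName   ' hypothetical field name
    rs.Update
    Sleep 10                       ' a few milliseconds; tune down as needed
End Sub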

How to manage "releases" with MS Access

I have an MS Access 2016 application that a few people use in one department. I know this whole thing has web dev written all over it, but this Access database has been their process for a while and there is no time right now to switch over.
Recently, a different department has asked to use this application, but with their own copy. Currently, if I need to make changes, I make the changes in a copy of the app; they send me their current version when I'm ready to import their data, I import it, and I send them back a new one. However, I currently copy the data table by table and paste it into the new database. This is inefficient and tedious, and now with 2 sets of data to do this for, it's crazy. There are over 20 tables, so I don't want to have to manually copy 40+ tables across the 2 apps for even the smallest change, like altering a message to the user.
I know I can copy the code so I can avoid importing the data, but sometimes for big changes I'll change 15-20 VBA modules.
So, a couple questions:
1. Is there a way to generate insert statements for the entire database that I could run in a script, so that when I create the new copy I just run one file and it populates all the data?
2. Are there any dev tools that will help with this process? Right now I'm thinking that this is just a downfall of creating an MS Access app, but there must be some way people have made the "new release" process easier. My current system seems flawed, and I'm looking for a more stable process.
EDIT:
Currently I have all my data stored locally, in the same Access file as the frontend. Since I will have 2 different departments using the same functionality, how do I manage the data and the frontend? These 2 departments should each have their own Access file to enter data using the forms, so sharing 1 frontend between the 2 departments won't work.
Also, should I create 2 separate backends? Currently I would have nothing to distinguish what is being inserted/changed/deleted by one department from the other. If I were to add a field specifying who entered the record, that would require a complete overhaul of all my queries, which I don't have time for as there are deadlines I need to meet.
The first thing to do is split the database. There is a wizard for this.
Then you can maintain the frontend without touching the real data.
Next, consider using a script to distribute revised versions of the frontend. I once wrote an article on one proven method to handle this:
Deploy and update a Microsoft Access application in a Citrix environment
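The answer above doesn't include code, but one common way to make a split frontend serve more than one department is to relink its tables to a department-specific backend; a rough sketch, where the backend path and the example call are assumptions:
Sub RelinkTables(strBackEndPath As String)
    ' point every linked Access table in this frontend at the given
    ' backend file, so each department's copy can use its own backend
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb
    For Each tdf In db.TableDefs
        If Left(tdf.Connect, 10) = ";DATABASE=" Then   ' linked Access tables only
            tdf.Connect = ";DATABASE=" & strBackEndPath
            tdf.RefreshLink
        End If
    Next tdf
End Sub

' example call with a hypothetical path:
' Call RelinkTables("\\server\DeptA\Backend_DeptA.accdb")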

Bulk uploading data to Parse.com

I have about 10GB worth of data that I would like to import to Parse. The data is currently in JSON format, which is great for importing with the Parse importer.
However, I have no unique identifier for these objects. They do have unique properties, e.g. a URL, but the IDs pointing to specific objects need to be constant.
What would be the best way to edit this large amount of data, in bulk, on their server without running into request issues (as I'm currently on the free pricing plan) and without taking too much time to alter the data?
Option 1
Import the data once and export it as JSON with the newly assigned objectIds. Then edit it locally, matching on the URL, and replace the class with the newly edited data. Any new additions will receive a new objectId from Parse.
How much downtime between import and export will there be, given that I would need to delete the class and recreate it? Are there any other concerns with this approach?
Option 2
Query for the URL or an array of URLs, edit the data, and then re-save. This means the data will persist indefinitely, but as the edit will involve hundreds of thousands of objects, will this most likely overrun the request limit?
Option 3
Is there a better option I am missing?
The best option is to upload to Parse and then edit through their normal channels. Using various hacks it is possible to stay below the 30 pings/second offered as part of the free tier. You can iterate over the data using background jobs (written in JavaScript); you may need to slow down your processing so you don't hit the limits. The super hacky way is to download from the table to a client (iOS/Android) app and then push back up to Parse. If you do this in batches (not a synchronous for loop, by the way), then the latency alone will keep you under the 30 pings/second limit.
I'm not sure why you're worried about downtime. If the data isn't already uploaded to Parse, can't you upload it, pull it down and edit it, and re-upload it, taking as long as you'd like? Do this in a separate table from any you are using in production, and you should be just fine.