I want to be able to build OCR scraping applications which are able to:
automatically detect when predetermined software is running on our computer (examples of software we need to scrape: web browsers, or any software window, for example Word, a media player, PowerPoint, a gaming program or whatever...). It should be able to detect when this window is moved on the user's screen and follow it. Sometimes another window can temporarily hide or overlap the scanned window: our scraper must be able to deal with these situations and continue scanning even in this case.
when it has detected that one of the predetermined programs is running, it should automatically open a specific Excel file to export the data.
do an OCR scan of this window in real time, and when a predetermined event happens in the scanned window, launch an extraction of the data (less than 1 second after the predetermined event happened).
the data might be: text, numbers, OCR recognition of images, or simply the colour of predetermined pixels in the window.
extract the information and paste it into specific cells of an open Excel sheet. (Please let me know the other destination output formats your software allows.)
call a macro in the Excel output file after each paste.
save a copy of each paste in Excel files stored in a given directory (this action should not require opening Excel when a new copy must be saved and stored; it should be done in the background).
several scrapers should be able to work at the same time on the same computer. For example, a situation with 2 scrapers extracting data from a gaming program + 1 scraper extracting the subtitles from a movie in VLC at the same time should be possible.
So at this stage, I was wondering if there is existing software that would allow me to build such scraping applications (given that I don't code)?
I googled it and found UiPath, but I have no idea what it's worth.
The other option would be to hire someone, but I'd like to be able to do this myself in the future.
Thank you
You would need a professional to do this - or at least spend some time learning the program. UiPath would most likely be able to do this, but it will take some time without any experience.
I don't think UiPath is able to have multiple robots or jobs running at the same time, so being able to scrape multiple programs at once would be tricky to do.
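To give an idea of what's involved under the hood if you (or a developer) ever code it yourself, here is a very rough sketch in TypeScript/Node using the tesseract.js and exceljs libraries. It only covers the core step - OCR an already-captured screenshot of the target window and write the recognized text into one cell of an existing workbook. Window detection/tracking, pixel-colour checks and calling the Excel macro would all be extra work, and the file names and cell address below are just placeholders.

```ts
// Rough sketch only: OCR a saved screenshot of the scraped window and paste the
// text into a specific cell of an Excel file. "window.png", "output.xlsx" and
// cell "B2" are made-up placeholders.
import Tesseract from "tesseract.js";
import ExcelJS from "exceljs";

async function ocrToExcel(imagePath: string, workbookPath: string): Promise<void> {
  // Recognize English text in the captured image
  const { data: { text } } = await Tesseract.recognize(imagePath, "eng");

  // Open the existing workbook, write the text into one cell, save it back
  const workbook = new ExcelJS.Workbook();
  await workbook.xlsx.readFile(workbookPath);
  const sheet = workbook.worksheets[0];
  sheet.getCell("B2").value = text.trim();
  await workbook.xlsx.writeFile(workbookPath);
}

ocrToExcel("window.png", "output.xlsx").catch(console.error);
```

Even this toy version needs a screenshot of the right window to already exist, which is exactly why tools like UiPath (or a developer) are worth the time investment here.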
I would like to make my own CNC Editor.
I am looking for some guidance. I don’t know if it is even possible with HTML5. But it would be great if I can. If possible, please list what I should research and learn.
I don’t need it to be online accessible, I will only have it on my computer. I will be accessing it via local network from multiple different computers. I don’t want it accessing the internet, because it’s not always available.
Desired Features:
⁃ Read and Write files with different extensions (all files used are easily opened in notepad)
⁃ Store and retrieve data from a simple database file.
⁃ Make calculations
⁃ Have a text Editor window
⁃ Have a display area for simple vector graphics depending on data loaded and provided by the user.
It is possible but requires a lot of work. I would say that these are technologies you would need to master in order to pull this off:
Node.js (use express.js) - for storing and retrieving files from a database and for reading/writing local files with the extensions you want (server-side); there's a small sketch of this part after the list.
Vue.js or Angular.js or React - for building the frontend interface to manipulate your vector graphics. It can also do calculations, and it's good with SVGs and that kind of stuff.
Electron.js (not mandatory) - it wraps everything in a native-app-like experience. This framework gives you the ability to write desktop apps for any OS and architecture.
So as I said, it would be a lot of work, but it's possible in the end.
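To make the Node/Express part less abstract, here is a minimal sketch of the read/write backend I mean. The routes, folder name and port are invented for illustration; it just serves and saves plain-text program files from a local directory:

```ts
// Minimal sketch of the Node/Express idea described above.
// Reads and writes plain-text CNC program files from a local folder.
import express from "express";
import { promises as fs } from "fs";
import path from "path";

const app = express();
app.use(express.text());                      // the program files are plain text

const DATA_DIR = path.resolve("./programs");  // hypothetical local folder

// Fetch a program file by name, e.g. GET /program/part1.nc
app.get("/program/:name", async (req, res) => {
  try {
    const text = await fs.readFile(path.join(DATA_DIR, req.params.name), "utf8");
    res.type("text/plain").send(text);
  } catch {
    res.status(404).send("not found");
  }
});

// Save an edited program back to disk, e.g. PUT /program/part1.nc
app.put("/program/:name", async (req, res) => {
  await fs.writeFile(path.join(DATA_DIR, req.params.name), req.body, "utf8");
  res.sendStatus(204);
});

app.listen(3000, () => console.log("CNC editor backend on http://localhost:3000"));
```

The frontend (Vue/React/Angular) would then fetch from and PUT against these routes from the other computers on your local network.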
A funny coincidence is that my brother is planning to build a CNC machine, so I might be doing this as well in the next couple of months. Feel free to contact me if you need any further help!
UPDATE: You can't do it with just HTML5. It would be like trying to make a wooden space shuttle.
I have an embedded system which runs firmware and has a USB mass storage device 79 kB in size. So when you plug the device into any computer (Mac/Windows), it pops up as a 79 kB flash drive. The firmware creates files which hold transaction records. The objective is to display these transactions (tables and simple graphs) to the user. I've narrowed it down to a web browser. So the user (with a Mac/Windows PC) can plug in the USB mass storage device and open an HTML file on the drive to view all the transactions in the form of tables and simple bar graphs. The tricky part comes here: the device (firmware) needs to update its clock, and this time input has to be sourced from the Mac/Windows PC. How can this be achieved?
This is the minimum requirement. Further, through the web browser the user wants to write some configuration parameters, e.g. through a text box and a submit button in the HTML page.
NOTE: The USB mass storage class and the web browser approach were selected so that there are no prerequisites for the user.
Please suggest an alternative if this can be done using another approach, e.g. a different USB class or some other application available locally on a Mac/Windows desktop/laptop. For example, the application should run on both Mac and Windows, i.e. the code should be the same but can be built into separate packages, one for Mac and the other (.exe) for Windows. Please suggest a platform for this that has the same source but can be built for both Mac and Windows. Thanks!
As far as I know, there is no way a web browser could write to a file. If such a thing was possible, it would be a huge security issue.
You have to write a piece of native software to do all the tasks you name. That can be done in pretty much any programming language, and if you're developing embedded systems I reckon you must have some experience in programming.
I'm looking at doing something similar and have an idea, though you may be better equipped to run with it than I am. Have the device contain a directory called "SET_DATE" with files "YEAR15" through "YEAR99", "MON01" through "MON12", "DATE01" through "DATE31", "H00" through "H23", "M00" through "M59", "S00" through "S59", and "SET"; each such file should start at a different sector, though none of the sectors in question need to contain any data (they need not physically be stored anywhere). To set the date to July 4, 2020 at 12:34:56pm, read the following files in sequence:
SET_DATE/YEAR20
SET_DATE/MON07
SET_DATE/DATE04
SET_DATE/H12
SET_DATE/M34
SET_DATE/S56
SET_DATE/SET
The last access should cause the unit to set its clock. If a user might want to set the clock more than once, that could be accommodated by either having a bunch of essentially-identical directories under SET_DATE (so setting the date the first time would use SET_DATE/00/YEAR20, the second time SET_DATE/01/YEAR20, etc.) and/or having the drive unmount/remount itself if necessary to clear out any caching.
I would think it unwise to have directory fetches trigger actions, since Windows or an anti-virus tool might decide to pre-cache all the directories in a drive when it is mounted. I would not expect Windows or a browser to eagerly load files, however, so I would think one could have read accesses trigger actions.
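To make the reading sequence concrete, here is a small host-side sketch (TypeScript/Node) that builds the file names for the current time and reads them in order. The directory and file naming follow the scheme above; the drive letter is a made-up example, and as noted, OS-level caching may mean the reads don't always reach the device.

```ts
// Host-side sketch of the SET_DATE scheme described above.
import { readFileSync } from "fs";
import { join } from "path";

function setDeviceClock(driveRoot: string, now: Date = new Date()): void {
  const pad = (n: number) => n.toString().padStart(2, "0");
  const names = [
    `YEAR${pad(now.getFullYear() % 100)}`, // e.g. YEAR20
    `MON${pad(now.getMonth() + 1)}`,       // e.g. MON07
    `DATE${pad(now.getDate())}`,           // e.g. DATE04
    `H${pad(now.getHours())}`,             // e.g. H12
    `M${pad(now.getMinutes())}`,           // e.g. M34
    `S${pad(now.getSeconds())}`,           // e.g. S56
    "SET",                                  // final read triggers the clock update
  ];
  for (const name of names) {
    // File contents don't matter; the firmware only watches which sectors are read.
    // Caveat: the OS may serve these reads from cache - see the remount note above.
    readFileSync(join(driveRoot, "SET_DATE", name));
  }
}

setDeviceClock("E:\\"); // hypothetical drive letter where the device is mounted
```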
I have a couple of questions about offline storage in HTML5. It's not an area I am that familiar with so I was hoping someone could shed some light.
I want to develop a web based system (for mobile) that a user could potentially use offline. Obviously the first time they'd use it (and any time they need to sync data thereafter), internet access would be required.
Some text data would need to be downloaded in JSON format. Basically this will be a list of certain items that will appear in auto-complete forms in the app (i.e. even if the user is offline and they want to enter a type of animal, for example, they'd type in "Gir" and "Giraffe", being one of the items downloaded in that JSON list, would appear in the auto-complete box).
I would like the user to be able to take photos at certain points. This would need to be saved internally, such that when internet access is available it can be synced/uploaded to some web server.
Could someone tell me if what I am thinking of is achievable?
Thanks
Use a cache manifest to keep offline portions of your app cached. You can also store key/value data in Local Storage, including text and blobs (serialized as strings, which you should be able to convert back to photos).
This demo (and its documentation) may be a useful resource for offline photo storage.
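As a rough illustration of the Local Storage part (the element id and key names are invented), a photo picked from a file input can be kept as a data URL string until the connection comes back:

```ts
// Browser-side sketch: stash photos in localStorage while offline.
const input = document.querySelector<HTMLInputElement>("#photo-input")!;

input.addEventListener("change", () => {
  const file = input.files?.[0];
  if (!file) return;
  const reader = new FileReader();
  reader.onload = () => {
    // localStorage only stores strings, so keep the photo as a data URL
    localStorage.setItem(`photo:${Date.now()}`, reader.result as string);
  };
  reader.readAsDataURL(file);
});

// Later, when the connection is back, loop over these keys and upload them.
function pendingPhotos(): string[] {
  return Object.keys(localStorage).filter((k) => k.startsWith("photo:"));
}
```

Keep in mind localStorage quotas are small (typically a few MB), so this only works for a handful of photos before you'd want a different storage mechanism.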
Our web analytics package includes detailed information about user's activity within a page, and we show (click/scroll/interaction) visualizations in an overlay atop the web page. Currently this is an IFrame containing a live rendering of the page.
Since pages change over time, older data no longer corresponds to the current layout of the page. We would like to run a spider to occasionally take snapshots of the pages, allowing us to maintain a record of interactions with various versions of the page.
We have a working implementation of this (Linux), but the snapshot process is a hideous Python/JavaScript/HTML hack which opens a Firefox window, screenshotting and scrolling and merging and saving to a file. This requires us to install the X stack on our normally headless servers, and takes over a minute per page.
We would prefer a headless implementation with performance closer to that of the rendering time in a regular web browser, but haven't found anything.
There's some movement towards building something using Mozilla source as a starting point, but that seems like overkill to me, as well as a maintenance nightmare if we try to keep it up to date.
Suggestions?
An article on Digital Inspiration points towards CutyCapt, which is cross-platform and uses the WebKit rendering engine, as well as IECapt, which uses the present IE rendering engine and requires Windows, natch. Nothing off the top of my head uses Gecko, Firefox's rendering engine.
I doubt you're going to be able to get away from X, however. Since CutyCapt requires Qt, it requires either X or a Windows installation. And, similarly, IECapt will require Windows (or Wine if you want to try to run it under Linux, and then you're back to needing X). I doubt you'll be able to find a rendering engine which doesn't require Qt, Gtk, GDI, or Cocoa, so any of them will require a full install of display libraries.
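For what it's worth, driving CutyCapt from a script is just a process spawn. A minimal sketch in TypeScript/Node follows; the --url/--out flags are CutyCapt's basic options, but check them against your build, and the paths are placeholders:

```ts
// Sketch: invoke CutyCapt to snapshot one page to a PNG file.
import { execFile } from "child_process";

function snapshot(url: string, outFile: string): Promise<void> {
  return new Promise((resolve, reject) => {
    execFile(
      "CutyCapt",
      [`--url=${url}`, `--out=${outFile}`],
      { timeout: 60_000 }, // give slow pages up to a minute
      (err) => (err ? reject(err) : resolve())
    );
  });
}

snapshot("https://example.com/landing", "/var/snapshots/landing.png")
  .then(() => console.log("saved"))
  .catch(console.error);
```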
Why not store the HTML that is sent out to the client? You could then use that to redisplay in a web browser as a page to show what it looked like.
Using your web analytics data about user actions, you could then use that to default the combo boxes, fields, etc. to the values the client would have had, and even change the CSS on buttons to mark them as being pushed.
As a benefit, you don't need the X stack and don't need to do any crawling or storing of images.
EDIT (Re Andrew Moore):
This is where you store the current CSS/images under a version number. Place an easily parsable version number in a comment in the HTML. If you change your CSS/images and keep the existing names, increment the version number in the HTML output sent out.
The system that stores the HTML will know that it needs to grab a new copy and store under a new number. When redisplaying, it simply uses the version number to determine which CSS/image set to use.
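A tiny sketch of that version lookup, assuming the comment looks something like <!-- asset-version: 42 --> (the exact comment format is up to you):

```ts
// Extract the asset version number embedded as an HTML comment in a stored page.
function assetVersion(html: string): number | null {
  const match = html.match(/<!--\s*asset-version:\s*(\d+)\s*-->/);
  return match ? Number(match[1]) : null;
}

const stored = "<html><!-- asset-version: 42 --><body>...</body></html>";
console.log(assetVersion(stored)); // 42 -> load CSS/images from the v42 snapshot
```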
We currently have a system here that uses a very similar approach so we can track users' actions and provide better support when they call our help desk, as they can bring up the user's session and follow what they did, even somewhat live.
You can even code it to auto-censor sensitive fields when the HTML is stored.
Depending on the specifics of your needs, perhaps you could get away with using one of the many free webpage thumbnail services? SnapCasa, for example, lets you generate thousands per month, no charge, no advertising (never used it, just googled 'free thumbnail service' to find this).
Just a thought.
I have what seems like a typical usage scenario for users downloading, editing and uploading a document from a web page.
User clicks a link to download a document
User edits downloaded file
User saves the file
User goes back to the web page and uploads the new file with the changes
The problem is that downloaded files are typically saved in a temporary directory, so it can be difficult to find the file after it is saved. The application is for very non-technical users, and I can already imagine the problems with saved files being lost or the wrong versions being uploaded.
Is there a better way? Things I've thought about:
Using Google Docs or something similar. Problems: forcing users to use a new application with fewer features, importing legacy content, setting up accounts for everyone to edit a file.
Using WebDAV to serve the files. Not sure how this would work exactly, but it seems like it should be possible.
Some kind of Flash or Java app that manages downloads and uploads. Not sure if these even exist.
User education :)
If it matters, the files will be mostly Word and Powerpoint documents.
Actually, despite the fact that AJAX gives you more flexibility in developing applications, the problem of uploading multiple files is still not solved.
To the thoughts you've mentioned in your question:
Google Docs:
Online apps like Google Docs are certainly appealing for certain use cases. However, if you'd like to upload Word and PowerPoint slides, you don't want the content to be changed once you've uploaded the document. The problem is that Google Docs uses its own data format and therefore changes some of the formatting. If you go for an online app, I'd go for a Document Management Solution. I'm sure there are plenty (even free ones) out there; however, I haven't used any of them yet.
WebDAV: It is possible and seems to me like the best solution. You can mount WebDAV like any directory. Documents are locked until a user releases the document. Unfortunately, you don't have a web front end to manage the files or administer access restrictions.
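Since WebDAV is plain HTTP underneath, pushing an edited file back to the share is essentially an HTTP PUT. A rough sketch in TypeScript/Node (the URL and credentials are invented; real locking would use the WebDAV LOCK/UNLOCK methods):

```ts
// Rough sketch: upload an edited document to a WebDAV share with an HTTP PUT.
// Requires Node 18+ for the global fetch.
import { readFile } from "fs/promises";

async function uploadToWebDav(localPath: string, remoteUrl: string): Promise<void> {
  const body = await readFile(localPath);
  const res = await fetch(remoteUrl, {
    method: "PUT",
    headers: {
      Authorization: "Basic " + Buffer.from("user:password").toString("base64"),
      "Content-Type": "application/octet-stream",
    },
    body,
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
}

uploadToWebDav("report.docx", "https://files.example.com/dav/report.docx")
  .catch(console.error);
```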
Flash or Java app: They do exist, for sure. I'd prefer Flash over Java since Flash apps still run more smoothly within a browser. I would definitely not use a rich application, even if it is a Java Web Start app that can be downloaded and opens in a separate window. More and more, users seem to accept browser-based web applications - which brings me to point 4:
User education: You can educate them, sure. But in the end you want them to want to use the system. Most often, users get used to a tool easily. However, if they don't like the tool, they're not going to use it.
Clear instructions to save to their desktop are a start. Then clear instructions to go to the desktop to re-upload it. I've not run across an online MS Word viewer/editor (or whatever format the file is), but I'm sure they exist, now that Google Docs and a few other online versions of MS Office exist.
I would make sure that there are easy to follow instructions, plus a tutorial somewhere else (perhaps with a video too) to guide users through the process.