I'm learning Redux and somehow ended up with corrupted data. How do I purge all Redux data in the Chrome browser?
There is nothing of value in my half-developed app, so I don't need any precise manipulation, just a clean slate. But I don't want to clear other Chrome state like my history, passwords, and other stuff unrelated to this app.
I have installed Redux DevTools, but I can't find any edit functionality there. I've been searching for a while, and all the answers explain how to clean the data from within the app. I don't want to write my own devtools into the app for this. This seems like a basic feature that should be included somewhere, but I can't find it.
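The closest thing I've found is clearing the page's client-side storage by hand from the DevTools console. This is only a sketch of that idea, and it assumes the corrupted state is actually persisted (e.g., via redux-persist); plain in-memory Redux state is already reset by a reload:

    // Run in the DevTools console on the app's page (affects this origin only).
    localStorage.clear();    // where redux-persist keeps state by default
    sessionStorage.clear();  // session-scoped storage, if any
    // Also drop any IndexedDB databases the app may have created
    indexedDB.databases().then(dbs =>
      dbs.forEach(db => indexedDB.deleteDatabase(db.name))
    );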
I've been driving myself mad trying to get curl, wget, the Python requests module, and others to simply log me in to a website and pull page text from there. I can certainly request HTML from the site, but only as an anonymous user. I've spent a few hours with tricks like Chrome's "Copy as cURL" feature, but the website in question is smart enough to defend against login replays.
All I want is a way, from the command-line, to do something like:
chrome.exe --output_to_file page.html https://www.endpoint.com/auth_access_only.html
Essentially, I'm looking for Chrome to do for me what cURL does, but I want the command-line invocation to be executed as me. I can see how this might open a potential security issue, but I don't mind at all if I have to do something magical to authorize my script. I'm not looking to do anything evil; I just want to be able to write scripts that are as "me" as I am.
I guess that, if it's truly unavoidable, I could suck it up and dust off Internet Explorer. I'd really rather not do that. I'd feel so dirty.
This is possible, but it's not as simple as you're thinking.
You can use the Chrome Debugging Protocol to remote-control Chrome.
You will need to write some code to make this work - I have done similar tasks using the chrome-remote-interface library for Node.js.
Make sure you understand what a browser profile is and where your profile folder lives.
If Chrome is already running using your browser profile: make sure it was launched with --remote-debugging-port=9002 or similar.
If Chrome is not already running using your browser profile: launch it with --user-data-dir="C:\path\to\your\profile" --remote-debugging-port=9002 or similar.
The "running or not" part is a bit tricky - you cannot launch more than one Chrome instance with the same browser profile, but you need to use this user profile because your login data is stored there. It may actually be easiest to create a separate browser profile that is just used for this automated task, and log in to the site there too.
Then, at a high level, your Node.js code will need to connect to Chrome, load the page, wait for the response, and save it to a file. Have a look at the example code for the chrome-remote-interface library - you can definitely piece together what you need from there.
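As a rough, untested sketch (assuming the --remote-debugging-port=9002 flag from above and the URL from your question), the chrome-remote-interface version could look something like this:

    const CDP = require('chrome-remote-interface');
    const fs = require('fs');

    (async () => {
      let client;
      try {
        // Connect to the Chrome instance started with --remote-debugging-port=9002
        client = await CDP({ port: 9002 });
        const { Page, Runtime } = client;
        await Page.enable();

        // Load the protected page; Chrome sends your profile's cookies along
        await Page.navigate({ url: 'https://www.endpoint.com/auth_access_only.html' });
        await Page.loadEventFired();

        // Grab the rendered HTML and save it to a file
        const { result } = await Runtime.evaluate({
          expression: 'document.documentElement.outerHTML',
        });
        fs.writeFileSync('page.html', result.value);
      } finally {
        if (client) await client.close();
      }
    })();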
Another option, built on the same underlying technology, is Puppeteer, another tool for automating Chrome. It is designed to start from a fresh profile every time, so if you go this route, you'll need to script more interactions:
Visit the site's login page
Type the login credentials into the form and click the login button
Visit the site's authenticated page and save it to a file.
The benefit of this approach is that the result should be more reliable, avoiding issues like expired login sessions.
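A minimal sketch of those three steps (the login URL and the form selectors are hypothetical; the real site's will differ):

    const fs = require('fs');
    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();

      // 1. Visit the site's login page (hypothetical URL)
      await page.goto('https://www.endpoint.com/login');

      // 2. Type the credentials into the form and click the login button
      await page.type('#username', 'me');        // hypothetical selector
      await page.type('#password', 'secret');    // hypothetical selector
      await Promise.all([
        page.waitForNavigation(),
        page.click('#login-button'),             // hypothetical selector
      ]);

      // 3. Visit the authenticated page and save it to a file
      await page.goto('https://www.endpoint.com/auth_access_only.html');
      fs.writeFileSync('page.html', await page.content());

      await browser.close();
    })();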
My idea is to have an Arduino board that communicates with the browser.
I want the Arduino board to react (e.g., blink an LED) when the user is connected to a certain website.
User input on the board (e.g., a button press) will affect the browser (e.g., close a tab, switch tabs).
I started learning and building simple examples using the Chrome extensions tutorial.
However, since I am not a skilled programmer myself, I'd like to know whether the things mentioned above are achievable.
How I imagine it working right now:
The Chrome extension writes into a JSON file. The Arduino reads data from the JSON file -> blink the LED.
The Arduino writes values into the JSON file. The Chrome extension automatically sees the changes in the file and reacts accordingly -> close a tab (so, without the user having to reinstall the extension each time).
Are these scenarios possible? Which would be the easiest way to achieve them?
I would recommend taking a look at Firebase. It's a really easy real-time database with a lot of good tutorials (even some by Google).
Then you should use a library like this one on your Arduino.
(There are some examples to look at.)
And in your Chrome extension you can fetch the values from the database really easily again.
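To make that concrete, here is a rough sketch of the extension side using the Realtime Database's REST API (the database URL and the led/command key names are placeholders, not a real project):

    // Background-script sketch; DB_URL is a hypothetical placeholder
    const DB_URL = 'https://your-project.firebaseio.com';

    // Browser -> Arduino: store a flag the board can poll to blink its LED
    async function setLedState(on) {
      await fetch(`${DB_URL}/led.json`, {
        method: 'PUT',
        body: JSON.stringify(on),
      });
    }

    // Arduino -> browser: poll for a command written by the board
    async function pollCommand() {
      const res = await fetch(`${DB_URL}/command.json`);
      const command = await res.json();
      if (command === 'closeTab') {
        chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
          if (tabs[0]) chrome.tabs.remove(tabs[0].id);
        });
        // Clear the command so it only fires once
        await fetch(`${DB_URL}/command.json`, { method: 'DELETE' });
      }
    }
    setInterval(pollCommand, 1000);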
When deploying a Polymer app to production, what's the recommended way to avoid requests for Polymer's source map files? The files platform.js.map and polymer.js.map weigh in at ~800K. Even if those downloads are deferred, surely there is some user impact (e.g., on mobile devices) simply from spending bandwidth fetching those files, isn't there?
Currently, my deployment process simply skips over the .map files, but when looking at the production site, I still see the browser trying to find them. Those requests fail with a 404, since the files aren't deployed. In theory the 404s shouldn't slow anything down, but it's still distracting to see 404s show up. It makes it look like there's a problem when in fact there isn't.
I could write a Grunt task to strip off the //# sourceMappingURL line from the associated .js files, but I was wondering if anyone has experimented with other means by which to drop the source maps. Or have people found that there is literally no impact on user experience when including those files?
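For reference, the task I have in mind is roughly the following custom Grunt task (the dist/ glob is a stand-in for my actual build output):

    // Strip the sourceMappingURL comment from built JS files so
    // browsers never ask for the .map files in the first place
    module.exports = function (grunt) {
      grunt.registerTask('strip-source-maps', function () {
        grunt.file.expand('dist/**/*.js').forEach(function (path) {
          var src = grunt.file.read(path);
          grunt.file.write(path, src.replace(/^\/\/# sourceMappingURL=.*$/m, ''));
        });
      });
    };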
I would consider it a browser bug if any browser loaded source maps when the user isn't using debug tools on the site in question. Are you noticing a significant number of requests to your source maps in your logs?
If you're using Google Chrome DevTools and want source maps to be ignored, you can go to Settings and uncheck the "Enable JavaScript source maps" checkbox.
I appreciate that this question may appear broad, but that is because I am looking anywhere and everywhere for a possible solution to do something very simple.
The goal is, from a web page opened in Chrome, to scan the DOM, extract specific elements, and save them silently in some way that I can then access.
There is no intention for any of this to be published as an app or extension; it is simply me wanting to access my own rendered browser data and to extract and store that data on my own computer. For this reason, I am currently finding Chrome's exhaustive sandboxing security frustrating and irrelevant, to say the least.
I have a working Chrome extension which extracts all of the data I want and holds a list of 5 strings that I want to save, and that's as far as I have gotten.
I have looked into these areas:
Existing NPAPI plugins (could not get NPAPI file I/O to work).
Creating my own NPAPI plugin: seems like a huge overhead and learning exercise simply to get external access to 5 strings.
Every aspect of the Chrome extension (and even Chrome App) APIs (particularly their localStorage, which is not accessible from outside the extension).
Any other thoughts?
I realise there is a solution in creating my own NPAPI plugin, but I would like to believe that there is another approach that allows me to link a constructed DOM with my local system. I am open to any other option. (I have considered a purely bash-based Linux approach, but I need to generate the DOM as though it were in my browser.)
I just want to be able to access specifically extracted parts of a DOM on my local system, not write an entirely new C++ plugin to facilitate this very basic feature.
I've got Firebug (my team does not have Firefox) and the IE Developer Toolbar (IE7), but I cannot seem to figure out how to easily validate whether the files referenced in a page are loading (I see JavaScript errors, but that doesn't succinctly point me to the exact file in a hierarchy of jQuery -> jQuery UI -> datepicker files).
Additionally, I'd like to be able to do this remotely, because on our corporate domain some files load fine for me but not for anyone else, since they sometimes get encrypted to my domain user. So it would be nice if this process were either simple enough for my teammates to do very quickly, or, even better, could somehow be automated from a remote machine or a web service request.
I thought I had seen a simple place in Firebug to validate what loaded and what did not, but I can't find it now.
What are my options?
Have you tried JavaScript Lint?
Or the JavaScript plugin for Eclipse?
Do you know YSlow?
It provides a set of excellent tools for web development, and I think it solves your question.