Postman crashes about 5 seconds after starting. My last operation was a bulk load into Elasticsearch, which worked as far as I can tell. I have tried restarting Postman several times, but all it does is hang for a few seconds and then crash.
I have Chrome version 56.
Postman version: 4.10.4
In case someone else is facing this problem: Postman stores all its data locally in an application-specific IndexedDB database. In my case, the data got a bit too large and Postman started crashing. The following steps might help; they did in my case:
1) Open chrome://indexeddb-internals
2) Look for: chrome-extension://fhbjgbiflinjbdggehcddcbncdddomop
3) Under the "Paths:" section, note the location of the chrome-extension_fhbjgbiflinjbdggehcddcbncdddomop_0.indexeddb.leveldb folder.
4) Navigate to this location on your local system.
5) Copy the entire contents of this folder and save them to a safe location on your local system to be used later (your collections are in the DB files in this folder).
6) Remove the Postman extension from Chrome and then re-add it.
Postman should open, but with default settings and none of your collections.
If you're on Linux, just execute the following command from the terminal and start Postman again:
pkill -fi Postman
Go to:
C:\Users\%Username%\AppData\Local\Google\Chrome\User Data\Default\Storage\ext\fhbjgbiflinjbdggehcddcbncdddomop\def\IndexedDB\chrome-extension_fhbjgbiflinjbdggehcddcbncdddomop_0.indexeddb.leveldb\
Rename the *.log files and relaunch Postman.
I deleted all files from the folder below and launched Postman. The crashing issue is solved. (To be on the safe side, I took a backup of this folder before deleting.)
C:\Users\%User%\AppData\Local\Google\Chrome\User Data\Default\Storage\ext\fhbjgbiflinjbdggehcddcbncdddomop\def\IndexedDB\chrome-extension_fhbjgbiflinjbdggehcddcbncdddomop_0.indexeddb.leveldb
I've been trying to follow the Setting Up Stackdriver Debugger for Java applications on Google Compute Engine guide, but am running into issues with Stackdriver Debug.
I'm building my .war file from a separate build server, then deploying it to my GCE server. I added the agent to the start command via /etc/defaults, and my app appears in the https://console.cloud.google.com/debug control panel. The version I set in the run command matches the revision that shows up in the source-context(s).json files.
However, when I click to open the app, I see the message:
No source version information was provided by the deployed application
I connected the app's git repo as a mirrored cloud repository, and I can browse the source files in the sidebar of the Stackdriver Debug page. But if I browse to a file and add a breakpoint, I get the error "File was not found in the executable."
I have run the gcloud preview app gen-repo-info-file command, which created two basic JSON files storing my git repo and revision. Is it supposed to do anything else?
I have tried running Jetty in both normal and extracted modes. If I have Jetty extract the .war file first, I can see the source-context.json files in the WEB-INF/classes directory.
What am I missing?
https://github.com/GoogleCloudPlatform/cloud-debug-java#extra-classpath mentions that you can update the agentpath argument to include your WEB-INF/classes directory:
-agentpath:/opt/cdbg/cdbg_java_agent.so=--cdbg_extra_class_path=/opt/tomcat/webapps/myapp/WEB-INF/classes
For multiple class paths:
-agentpath:/opt/cdbg/cdbg_java_agent.so=--cdbg_extra_class_path=/opt/tomcat/webapps/myapp/WEB-INF/classes:/another/path/with/classes
There are a couple of things going on here.
First, it sounds like you are doing the correct thing with gen-repo-info-file. The debugger agent should pick up the JSON files from the WEB-INF/classes directory.
The debugger uses fuzzy matching to find source files, so as long as the name of the .java file matches a file in your executable, you should not get that error.
The most likely scenario given the information in your question is that you are attaching the debugger to a launcher process, rather than your actual application. Without further details, I can't absolutely confirm that, though.
If you send us more details at cdbg-feedback@google.com, we can look more closely at your case to see if we can understand exactly what's happening, and potentially improve our documentation, since it sounds like you followed the docs pretty closely.
I made a dyno app on Heroku using Node.js. The dyno's task is to collect data and create a JSON file daily, but I don't know how to download those files locally.
I tried
http://myappname.heroku.com/filename.json
but it failed.
Heroku is new to me, so please don't treat me like an advanced user.
You cannot do this.
If your code is writing a JSON file to the Heroku server daily, that file will be gone the next time the dyno restarts, so you can't count on downloading it from there.
Heroku dynos are ephemeral. This means that any data you 'save' to the filesystem will be lost whenever the dyno restarts, which happens at least once a day. If you need to keep files, you should save them to a file service like Amazon S3, then download them from there.
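As a rough illustration, here is a minimal sketch of pushing the daily JSON to S3 from the dyno instead of writing it to disk. It assumes the aws-sdk v2 package, a placeholder bucket name, and AWS credentials supplied through Heroku config vars:

// Minimal sketch: upload the day's data to S3 instead of the dyno filesystem.
// 'my-daily-exports' is a placeholder bucket; credentials are expected in the
// environment (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY set as config vars).
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

function uploadDailyJson(data) {
  const key = 'exports/' + new Date().toISOString().slice(0, 10) + '.json';
  return s3.putObject({
    Bucket: 'my-daily-exports',
    Key: key,
    Body: JSON.stringify(data),
    ContentType: 'application/json'
  }).promise();
}

uploadDailyJson({ collectedAt: new Date(), items: [] })
  .then(function () { console.log('uploaded'); })
  .catch(function (err) { console.error('upload failed', err); });

You can then fetch the object from S3 (or the AWS console) on your local machine whenever you need it.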
Save your JSON file to the /public folder.
Ensure that your app.js has the following:
app.use(express.static(__dirname + '/public'))
Now, you should be able to access:
http://myappname.heroku.com/filename.json
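For completeness, a minimal app.js along those lines might look like the sketch below. It is only a sketch: the public folder layout and file name are whatever your daily task writes, and anything stored there still disappears when the dyno restarts.

// Minimal sketch of an Express app that serves files written to ./public.
// Heroku requires binding to the port given in the PORT environment variable.
const express = require('express');
const path = require('path');
const app = express();

// anything your daily task writes into ./public (e.g. filename.json)
// becomes reachable at http://<your-app>/<filename>
app.use(express.static(path.join(__dirname, 'public')));

app.listen(process.env.PORT || 3000);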
Where is chrome.storage.local stored for Chrome Apps on OS X Yosemite?
I finally found it: ~/Library/Application Support/Google/Chrome/Default/Local App Settings/{{chrome-app-id}}. The whole {{chrome-app-id}} folder is a leveldb database. I was able to open it and inspect the contents of the stored file using the leveldb-ruby gem. Just do the following
require 'leveldb'
# '~' is not expanded automatically, so expand the path before opening the DB
db = LevelDB::DB.new File.expand_path('~/Library/Application Support/Google/Chrome/Default/Local App Settings/{{chrome-app-id}}')
You can now query the database using the db object. By the way, if you get a weird error saying that the DB is being used by someone else, make sure you kill Chrome and erase the LOCK file.
localStorage is located in ~/Library/Application Support/Google/Chrome/Default/Local Storage. You'll need the ID to find the correct files but they'll be prefixed with chrome- and have a file extension of .localstorage.
Based on some picking through the Postman app, it looks like it makes a call to chrome.storage.local, and the data on my Mac is located here: ~/Library/Application Support/Google/Chrome/Default/Local App Settings/<ID>/000003.log.
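For context, chrome.storage.local is the app/extension storage API whose backing files end up in that Local App Settings folder. A small sketch of how an app reads and writes it (the 'collections' key here is just an illustration, not necessarily what Postman uses):

// Sketch of the chrome.storage.local API from inside a Chrome app/extension.
// Writes land in the Local App Settings leveldb folder mentioned above.
chrome.storage.local.set({ collections: [{ name: 'example' }] }, function () {
  chrome.storage.local.get('collections', function (items) {
    console.log(items.collections); // -> [{ name: 'example' }]
  });
});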
I've built my application on localhost and it runs without any error. I chose OpenShift to host my application code, but I'm having trouble making it work exactly as it does on localhost.
I want to add the AllowEncodedSlashes directive and set it to On in my Apache configuration file. I tried editing the file at ~/php/configuration/etc/conf/httpd.conf and then restarting the server using ctl_all restart, but the result is an HTTP 400 (Bad Request) error. Before I added this directive to httpd.conf, the result was an HTTP 404 error. I'm just not sure whether the change took effect, or whether Apache is misbehaving.
Does anyone know how to make this work for me?
See if you can add it to a .htaccess file instead of httpd.conf. Also, the best way to troubleshoot these problems is by reviewing your application logs for errors. All you have to do is run "rhc tail {appName}" from your client machine (where the rhc client tools are installed). That gives you the current log entries.
To get to the entire log, you'll want to ssh onto the gear(s) on which the language framework/cartridge is installed using this FAQ and run: more ~/{cartridgeID}/logs/*.log
where {cartridgeID} is your framework cartridge like nodejs-0.6, or your embedded cartridge logs like mysql-5.1.
I created a feature request for this. See this Trello card and feel free to vote it up.
I'm running Hudson continuous integration for DbUnit.
When I run the job, the console output displays SUCCESS, but then why does the Parsed Console Output keep returning this error:
ERROR:Failed to parse console log :
log-parser plugin ERROR: Cannot parse log: Can't read parsing rules file:
I already installed the log-parser plugin and restarted Hudson. I installed the plugin from a remote PC.
Any help and suggestions are appreciated. Thanks!
1) Place the parsing rules file in the JENKINS_HOME location (a minimal example of such a file is sketched after this list).
2) Configure the log parser console output in the global configuration settings and name it.
3) Add this option in the Post-build Actions of the job and select that name.
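In case it helps, the parsing rules file is just a list of severity/regex pairs, one per line. A minimal sketch (the patterns below are only examples; adjust them to your build output):

# lines containing ERROR are flagged as errors
error /ERROR/
# lines containing Warning or warning are flagged as warnings
warning /[Ww]arning/
# lines containing INFO get a quick-access link in the report
info /INFO/
# each line containing BUILD starts a new section in the report
start /BUILD/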
OK, silly me...
I forgot to set up the global configuration in Hudson that links to the parsing rules file.
Problem solved.
I'm posting this in case anyone else hits this specific case of the problem. The issue started when upgrading from 1.509.2 to 1.554.3. I had put the parsing rules file in the win\system folder, which was the workaround for a known issue when running Jenkins as a service; I guess they fixed that issue by this version. I moved the parsing rules back into the Jenkins home folder and it worked fine again.