I am using Google mod_pagespeed with Magento 1.9, and I have found what appears to be a very large collection of cache data from mod_pagespeed for this site under:
/var/cache/mod_pagespeed/v3/
This directory holds some 14 million inodes, and because of that I am getting this error message:
E_WARNING: Unknown: Failed to write session data (files). Please
verify that the current setting of session.save_path is correct
(/var/lib/php/session).
Is it safe to delete these inodes?
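For reference, here is a quick way to confirm where the inodes are going before deleting anything (a hedged sketch; the tools assume a typical GNU/Linux system with GNU coreutils 8.22+):

    df -i                                        # inode usage per filesystem
    du --inodes -s /var/cache/mod_pagespeed/v3/  # inode count under the cache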
I'm using Symfony 4. What does this error stand for?
Warning: SessionHandler::read(): open(/var/lib/php/sessions/sess_634q91mh896b6aa4jpjvlihmar, O_RDWR) failed: Permission denied (13)
When a user logs into a Symfony application, session information is stored on the web server. By default Symfony uses the native PHP session mechanism, storing session info in a file in /var/lib/php/sessions/ on Linux systems. Your error message is output by PHP and means it got a permissions error creating or re-opening a session file.
The error appears only intermittently because PHP removes old session files randomly, on roughly 1 in every 100 or 1,000 page loads. (On some Linux variants, old session files are removed by a cron job instead.)
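Those odds come from PHP's garbage-collection settings. In php.ini they look like this (typical values shown; your distribution may override them):

    ; Each request has a gc_probability/gc_divisor chance (1/100 here)
    ; of triggering session garbage collection.
    session.gc_probability = 1
    session.gc_divisor = 100
    ; Seconds a session may sit idle before it is eligible for removal.
    session.gc_maxlifetime = 1440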
https://symfony.com/doc/current/session.html says:
"some session expiration related options may not work as expected if other applications that write to the same directory have short max lifetime settings."
Try to avoid having multiple processes writing to the same sessions directory. I think I got the error message because both an Apache web server and php bin/console server:start were running at the same time. One process may have removed the other process's session file.
See PHP manual and Symfony manual for how to configure writing to separate directories. For example, I changed {Symfony directory}/config/packages/framework.yaml:
    # Enables session support. Note that the session will ONLY be started if you read or write from it.
    # Remove or comment this section to explicitly disable session support.
    session:
        # handler_id: ~
        cookie_secure: auto
        cookie_samesite: lax
        handler_id: 'session.handler.native_file'
        save_path: '%kernel.project_dir%/var/sessions/%kernel.environment%'
        gc_probability: 100 # Run garbage collection always, for
        gc_divisor: 100     # investigating this problem only.
Another possibility is that a problem in your Symfony code is causing the error message. The Symfony documentation says not to call PHP session functions like session_start() directly, since the Symfony classes call them for you. A bug in my code caused an exception which, I speculate, triggered the error message.
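As an illustration (a minimal sketch assuming a standard Symfony 4 controller action; the action and session key names are made up), let Symfony manage the session instead of calling session_start() yourself:

    use Symfony\Component\HttpFoundation\Request;
    use Symfony\Component\HttpFoundation\Response;

    public function index(Request $request): Response
    {
        // Symfony starts the session lazily on the first read/write;
        // no explicit session_start() call is needed.
        $session = $request->getSession();
        $session->set('visits', $session->get('visits', 0) + 1);

        return new Response('Visits: ' . $session->get('visits'));
    }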
Related Stack Overflow questions: cleanup-php-session-files and how-does-php-know-when-to-delete-a-session
For those familiar with C code, see the PHP interpreter source code line that prints the error here
Hope this helps!
This is not specific to Symfony 4, but you have to fix the permissions on your /var/lib/php/sessions/ directory.
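A minimal sketch of such a fix, assuming a Debian/Ubuntu system where this directory is owned by root with the sticky bit set (adjust owner and mode to your distribution's defaults):

    sudo chown -R root:root /var/lib/php/sessions/
    sudo chmod 1733 /var/lib/php/sessions/   # world-writable with sticky bit, Debian's default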
I'm having a problem in production when accessing the JSON file (let's call it mydata.json) in which I store the data of my Ruby web app, deployed with Heroku. I download this file by entering the following in the browser:
my-heroku-page.herokuapp.com/mydata.json
but I get the HTTP 500 error page.
Still in the browser, when I try to create it I get the standard Rails error page:
We're sorry, but something went wrong.
If you are the application owner check the logs for more information.
First I must say I'm using the same source code I used for another web app that is actually working; I just modified the database.yml according to my new host, database, username and password. The previous web app was working perfectly, i.e. I could create the table and access the data. Secondly, the error doesn't occur on localhost.
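For reference, the production section of my database.yml looks roughly like this (a hedged sketch with placeholder values; on Heroku the connection details are normally injected via the DATABASE_URL config var instead):

    production:
      adapter: postgresql
      encoding: unicode
      host: my-db-host.example.com
      database: mydatabase
      username: myuser
      password: <%= ENV['MYAPP_DATABASE_PASSWORD'] %>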
I tried to create a Dataclip in Heroku:
Select * from mydata order by created_at desc;
but I get the error:
"Your query couldn't be created."
ERROR: relation "mydata" does not exist
LINE 3: FROM mydata
Plus, when I check heroku pg:info I get 0 tables:
=== DATABASE_URL
Plan: Hobby-dev
Status: Available
Connections: 0/20
PG Version: 9.5.5
Created: 2016-11-15 08:30 UTC
Data Size: 7.4 MB
Tables: 0
Rows: 0/10000 (In compliance)
Fork/Follow: Unsupported
Rollback: Unsupported
Add-on: postgresql-convex-54172
It seems like mydata.json is not created in production, while on localhost it works fine and I can create/download a blank one. I'm sure I'm missing something easy here, maybe in the database.yml.
I will edit the question if additional info is required. Any help is appreciated.
Thanks,
Simone
When I try to replicate a remote CouchDB (on Ubuntu 14.04, 64-bit) with my local PouchDB, I encounter this strange error.
My CouchDB is proxied via nginx and runs over HTTPS. Traffic from the client to nginx is SSL, while nginx to CouchDB is plain HTTP. CORS requests are enabled in CouchDB, and the nginx configuration closely follows the CouchDB recommendations. Syncing from the database works fine, but I get the errors below when debugging in Chrome version 54.0.2840.100 (64-bit).
Following is the full stack trace of the error.
raven.min.js:2 Error: There was a problem getting docs.
at finishBatch (http://localhost:8100/lib/pouchdb/dist/pouchdb.js:6410:13)
at processQueue (http://localhost:8100/lib/ionic/js/ionic.bundle.js:27879:28)
at http://localhost:8100/lib/ionic/js/ionic.bundle.js:27895:27
at Scope.$eval (http://localhost:8100/lib/ionic/js/ionic.bundle.js:29158:28)
at Scope.$digest (http://localhost:8100/lib/ionic/js/ionic.bundle.js:28969:31)
at http://localhost:8100/lib/ionic/js/ionic.bundle.js:29197:26
at completeOutstandingRequest (http://localhost:8100/lib/ionic/js/ionic.bundle.js:18706:10)
at http://localhost:8100/lib/ionic/js/ionic.bundle.js:18978:7
at d (http://localhost:8100/lib/raven-js/dist/raven.min.js:2:4308)
raven.min.js:2 Paused in lessondb replicate Error: There was a problem getting docs.
(followed by the same stack trace as above)
The network logs in Chrome show that some requests are cancelled.
I am using CouchDB version 1.6.1 and PouchDB version 5.3.2.
I use the following code to replicate the databases:
myDB.replicate.from(remote_db_url, {
  live: true,      // keep replicating after the initial sync completes
  retry: true,     // retry with backoff if the connection is lost
  heartbeat: false
});
Also, it would be great if someone could shed some light on the heartbeat parameter.
Note: I'm not able to solve the error you describe. Maybe a full stack trace rather than a screenshot might help...
But I will try to shed some light on the heartbeat parameter: Reading the docs already helps a little. See the advanced options for the replicate method:
options.heartbeat: Configure the heartbeat supported by CouchDB which keeps the change connection alive.
So let's look into CouchDB's docs to see what this parameter does:
Networks are a tricky beast, and sometimes you don’t know whether there are no changes coming or your network connection went stale. If you add another query parameter, heartbeat=N, where N is a number, CouchDB will send you a newline character each N milliseconds. As long as you are receiving newline characters, you know there are no new change notifications, but CouchDB is still ready to send you the next one when it occurs.
So basically it seems to be a keep-alive mechanism that sends a message (e.g. a newline) every N milliseconds (where N is the heartbeat value you specify) to make sure the connection between the two databases is still working.
Setting the value to false will disable this mechanism.
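To illustrate, this is what the underlying CouchDB request looks like with a heartbeat (the host and database names here are made up):

    curl "https://couch.example.com/mydb/_changes?feed=continuous&heartbeat=10000"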
Regarding which values can be used for this parameter:
The PouchDB docs further state that the changes method has a similar parameter, described like this:
options.heartbeat: For http adapter only, time in milliseconds for server to give a heartbeat to keep long connections open. Defaults to 10000 (10 seconds), use false to disable the default.
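So, as a hedged sketch (the 25-second value is arbitrary; pick something below any idle timeout your nginx proxy enforces), an explicit heartbeat could look like this:

    myDB.replicate.from(remote_db_url, {
      live: true,
      retry: true,
      heartbeat: 25000 // server sends a newline every 25s to keep the _changes connection open
    });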
My Windows Store app keeps getting rejected in certification testing, and I managed to reproduce the resulting crash when running appverif's LuaPriv check. I get this output, though:
AVRF: failed to create verifier log file \??\C:\Users\xx\AppVerifierLogs\yy.exe.0.dat (status C0000022)
Process Monitor tells me yy.exe got ACCESS DENIED on a CreateFile operation in this folder. I have granted full access to all users (the user reported in the log is the same as the owner of the folder). I am running Visual Studio and Application Verifier as Administrator, but this does not seem to help. What is the correct way of giving user xx full access to this folder on Windows 8? I have attempted to use different log folders for Application Verifier, but with no success. Is anyone else able to use this tool with Store apps?
This post describes similar issues. Attempting to run appverif -sppath C:\MyLogsLocation as in the suggested workaround gives AVRF: Error: Incorrect image name: <
So does running appverif -enable handles locks -for myapp.exe -sppath c:\MyLogsLocation
It might be a bug in Application Verifier.
Have a look at these links:
http://social.technet.microsoft.com/Forums/en-US/5ed560c0-76af-401d-8150-8cd1e69d0b8a/why-app-verifier-can-not-create-log-file?forum=windowssdk
http://blogs.msdn.com/b/dougste/archive/2010/01/11/generating-application-verifier-logs-for-web-applications.aspx
0xC0000022 is STATUS_ACCESS_DENIED. The process doesn't actually have write permissions, even if it looks like it should. This MSDN blog explains that there is a bug in App Verifier: even if you specify -sppath, the value won't be honoured unless you first delete the %WINDIR%\system32\config\AppVerifierLogs\ folder.
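Putting the workaround together, a sketch of the sequence from an elevated command prompt (the enable flags mirror the command already quoted in the question):

    rd /s /q %WINDIR%\system32\config\AppVerifierLogs
    appverif -enable handles locks -for myapp.exe -sppath C:\MyLogsLocation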
The WSO2 API Store is throwing errors when I try to create or edit an application:
java.sql.SQLException: Can't call commit when autocommit=true
I've added the setting of
init-command='set autocommit=0'
to the my.cnf file
I've also added the flag:
?relaxAutoCommit=true
to the connection string but to no avail. I continue to get this error.
I am using the same MySQL database for both the WSO2_CARBON_DB and the WSO2AM_DB, plus I have a single publisher node and two separate store nodes, all pointing to the same MySQL datasource.
I notice the application edit is saved (or the new application is created), but the exception is still thrown in the console and an error message appears in the user interface (as per the error at the top of this question).
Is there some other setting, within the WSO2 conf files that I have to tweak in order to get this to work properly?
Add both the autoReconnect and relaxAutoCommit flags to the JDBC URL of your defined "WSO2AM_DB" datasource in the master-datasources.xml file. This should resolve your issue.
<configuration>
    <url>jdbc:mysql://localhost:3306/AM_DB?autoReconnect=true&amp;relaxAutoCommit=true</url>
    <username>xxxx</username>
    <password>xxxxx</password>
</configuration>
EDIT: I updated the URL to reflect the correct syntax for escaping the ampersand.
Just for the sake of completeness, the JDBC URL should be
jdbc:mysql://localhost:3306/WSO2CARBON_DB?autoReconnect=true&relaxAutoCommit=true
(remember to escape the & as &amp; when you put it inside the XML file).
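For context, a sketch of where that URL lives inside master-datasources.xml (the values are placeholders; the datasource name and credentials must match your own setup):

    <datasource>
        <name>WSO2AM_DB</name>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:mysql://localhost:3306/WSO2AM_DB?autoReconnect=true&amp;relaxAutoCommit=true</url>
                <username>xxxx</username>
                <password>xxxxx</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            </configuration>
        </definition>
    </datasource>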