About a week ago, the Fetch method of Google.Apis.v2.DriveService.Files.Update started throwing a 412 Precondition Failed exception. In a previous operation I retrieve the file using its DocID; later, when I update this file without any changes, I get this error message.
Errors [
Message[Precondition Failed] Location[If-Match - header] Reason[conditionNotMet] Domain[global] ]
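For context, here is roughly the sequence I perform (a minimal sketch; service and docId stand in for my real objects, and I am on the older v2 .NET client where requests are executed with Fetch()):
// Sketch of the flow described above: fetch the file by its DocID,
// then send the same metadata back to Update without changing anything.
File file = service.Files.Get(docId).Fetch();
File updated = service.Files.Update(file, docId).Fetch();   // throws 412 Precondition Failed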
Could you help me?
I have moved a Magento 2 website from one server to another. After configuration, I get the error below on category pages:
1 exception(s):
Exception #0 (Exception): Recoverable Error: Argument 1 passed to Mageplaza\Core\Helper\AbstractData::__construct() must be an instance of Magento\Framework\App\Helper\Context, instance of Magento\Framework\ObjectManager\ObjectManager given, called in /SOME_PATH/vendor/magento/framework/ObjectManager/Factory/AbstractFactory.php on line 93 and defined in /SOME_PATH/app/code/Mageplaza/Core/Helper/AbstractData.php on line 56
I have tried the following to resolve this:
Reindexing
Re-saving the category pages from the backend
Creating a new category (its page works fine)
It seems there is a problem in the database where the old category URLs need to be reindexed, rewritten, or otherwise reprocessed.
Can anyone help me resolve this, or point me to a guide on how to troubleshoot it further?
Any help is appreciated!
Thanks
Deleting the var/di directory resolved the problem. I didn't need to run any CLI command, nor did I need to clear any caches!
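For anyone who wants the exact step, it's just this (a minimal sketch; the install path is a placeholder, and on newer releases the compiled DI configuration lives under generated/ rather than var/di):
# Remove the compiled dependency-injection configuration;
# Magento regenerates it on the next request.
cd /path/to/magento-root
rm -rf var/di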
I'm trying to import a dataset using neo4j-import. Unfortunately the import fails with the following error message, which doesn't tell me much. Does anyone have an idea?
Thank you
The command was:
./neo4j-import --into /home_expes/dd77474h/neo4j-community-3.0.7/data/databases/graph.db/ --nodes /home_expes/dd77474h/Indexing-server/reduced_dbpedia_nodes.csv --relationships /home_expes/dd77474h/Indexing-server/reduced_dbpedia_relations.csv --stacktrace true --id-type
reduced_dbpedia_nodes.csv:
id:ID,uri,:LABEL
7,"http://dbpedia.org/resource/Albedo",Resource
reduced_dbpedia_relations.csv:
:START_ID,:END_ID,:TYPE
1,2,"http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
Error message:
Relationship --> Relationship Sparse
[>:231.50 MB/s--------------------------|LINK(3)==|*v:130.76 MB/s-----------------------------] 171M
Done in 24s 824ms
Minority relationships
[*INSERT--------------------------------------------------------------------------------------] 540K
Done in 14m 31s 126ms
Count groups
[*>:??----------------------------------------------------------------------------|COUNT------]12.2M
Done in 2s 786ms
Gather
java.lang.RuntimeException: Panic called, so exiting
at org.neo4j.unsafe.impl.batchimport.staging.AbstractStep.assertHealthy(AbstractStep.java:155)
at org.neo4j.unsafe.impl.batchimport.staging.ProducerStep.process(ProducerStep.java:84)
at org.neo4j.unsafe.impl.batchimport.staging.ProducerStep$1.run(ProducerStep.java:54)
Caused by: java.lang.IllegalStateException: There's no room for me for startIndex:28899 with a group count of -25966. This means that there's an asymmetry between calls to incrementGroupCount and actual contents sent into put
at org.neo4j.unsafe.impl.batchimport.RelationshipGroupCache.scanForFreeFrom(RelationshipGroupCache.java:203)
at org.neo4j.unsafe.impl.batchimport.RelationshipGroupCache.put(RelationshipGroupCache.java:159)
at org.neo4j.unsafe.impl.batchimport.CacheGroupsStep.process(CacheGroupsStep.java:48)
at org.neo4j.unsafe.impl.batchimport.CacheGroupsStep.process(CacheGroupsStep.java:31)
at org.neo4j.unsafe.impl.batchimport.staging.ProcessorStep.lambda$receive$2(ProcessorStep.java:97)
at org.neo4j.unsafe.impl.batchimport.executor.DynamicTaskExecutor$Processor.run(DynamicTaskExecutor.java:243)
Import error: Panic called, so exiting
Caused by: Panic called, so exiting
Thanks for your help, I found the bug. FYI, see https://github.com/neo4j/neo4j/pull/8778 for the fix.
I just started getting this error on a script that hasn't changed. Updating to boto3==1.4.3 had no effect. It doesn't look like throttling, does it? I'm not sure where to go with this one; any suggestions would be much appreciated.
File "/Library/Python/2.7/site-packages/botocore/client.py", line 251, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Library/Python/2.7/site-packages/botocore/client.py", line 537, in _make_api_call
raise ClientError(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (Unavailable)
when calling the DescribeSecurityGroups operation (reached max retries: 4):
Tags could not be retrieved.
Edited to add: As noted in the comments below, this is coming from a seemingly innocuous call to describe_security_groups().
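For reference, the failing call is essentially this (a minimal sketch; the region is a placeholder for my actual configuration):
import boto3

# Minimal reproduction of the call that raises the ClientError above.
ec2 = boto3.client('ec2', region_name='us-east-1')
groups = ec2.describe_security_groups()
print(len(groups['SecurityGroups']))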
Another update: OK, this was transient and is no longer happening. Le sigh. Maybe some timeout due to network issues or something. I'll leave it up on SO for now, in case someone sees this and has a flash of insight.
Consider the code below, which tries to create a new SerpKeyword within a transaction and prints to the console to show how far it gets.
if (!serpKeyword) {
println "I DIDN'T FIND THE KEYWORD!"
SerpKeyword.withNewTransaction {
println "SO NOW I'M BEGINNING A TRANSACTION"
serpKeyword = new SerpKeyword(
keyword: searchKeyword,
geoKeyword: geoKeyword,
concatenation: concatenation,
locale: locale
)
println "NOW I'LL SAVE THE KEYWORD"
serpKeyword.save(failOnError: true, flush: true)
println "AND NOW THE KEYWORD IS SAVED"
}
}
The console output I see right away is:
I DIDN'T FIND THE KEYWORD!
SO NOW I'M BEGINNING A TRANSACTION
NOW I'LL SAVE THE KEYWORD
I never see the last line of my output, indicating that the record never saves. I've tried this both with and without the options that I'm passing into save. Regardless, it just hangs for a while, and eventually I get this stacktrace:
Got error -1 from storage engine. Stacktrace follows:
java.sql.SQLException: Got error -1 from storage engine
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1055)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:956)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3491)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3423)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1936)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2060)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2542)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1734)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2019)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1937)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1922)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at com.reachlocal.grails.sales.AdvertiserConnectionService$_findOrCreateSerpKeyword_closure9$$EO2A9QHA.doCall(AdvertiserConnectionService.groovy:624)
at org.grails.datastore.gorm.GormStaticApi.withNewTransaction(GormStaticApi.groovy:696)
at com.reachlocal.grails.sales.AdvertiserConnectionService$$EO2A9QHA.findOrCreateSerpKeyword(AdvertiserConnectionService.groovy:615)
at com.reachlocal.grails.sales.AdvertiserConnectionService$$EO2A9QHA.createSerpEntryForKeyword(AdvertiserConnectionService.groovy:659)
at com.reachlocal.grails.sales.AdvertiserConnectionService$$EO2A9QHA.addKeyword(AdvertiserConnectionService.groovy:51)
at com.reachlocal.grails.serp.SerpController$_closure9.doCall(SerpController.groovy:77)
at grails.plugin.cache.web.filter.PageFragmentCachingFilter.doFilter(PageFragmentCachingFilter.java:195)
at grails.plugin.cache.web.filter.AbstractFilter.doFilter(AbstractFilter.java:63)
at org.jasig.cas.client.session.SingleSignOutFilter.doFilter(SingleSignOutFilter.java:65)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:680)
What gives?
You may get the "Got error -1 from storage engine" error for several reasons:
Your database is out of disk space
You have the innodb_force_recovery switch in your my.cnf file
Mismatched MySQL tablespace IDs
The best way to troubleshoot this issue is to take a look at the MySQL error log.
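A few quick checks along those lines (a sketch; the config and log paths are typical defaults and may differ on your system):
df -h                                        # is the filesystem holding the data directory full?
grep -i innodb_force_recovery /etc/my.cnf    # is forced recovery enabled?
tail -n 100 /var/log/mysqld.log              # recent entries in the MySQL error log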
Do you have the id generation strategy set to assigned, or anything other than sequence/auto, in the SerpKeyword domain class?
It would help with debugging if you could add the domain class to the post as well.
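For example, a domain class with an explicit, non-default generator would look roughly like this (a sketch; the properties are guesses taken from your code):
class SerpKeyword {
    String keyword
    String concatenation
    String locale

    static mapping = {
        // anything other than the default generator here, e.g. 'assigned',
        // changes how and when the id is produced
        id generator: 'assigned'
    }
}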
Further to my adventures with Erlang and ErlyDB, I am attempting to get ErlyDB working with BeepBeep.
My ErlyDB setup works correctly when run outside of the BeepBeep environment (see Debugging ErlyDB and MySQL). I have basically taken the working code and attempted to get it running inside BeepBeep.
I have the following code in my controller:
handle_request("index",[]) ->
erlydb:start(mysql,Database),
erlydb:code_gen(["thing.erl"],mysql),
NewThing = thing:new_with([{name, "name"},{value, "value"}]),
thing:save(NewThing),
{render,"home/index.html",[{data,"Hello World!"}]};
When I call the URL, the response outputs "Server Error".
There is no other error or exception information reported.
I have tried wrapping the call in try/catch to see if there is an underlying error - there is definitely an exception at the call to thing:new_with(), but no further information is available.
The stacktrace reports:
{thing,new,[["name","value"]]}
{home_controller,create,1}
{home_controller,handle_request,3}
{beepbeep,process_request,4}
{test_web,loop,1}
{mochiweb_http,headers,4}
{proc_lib,init_p_do_apply,3}
Use pattern matching to assert that things work up to the call to thing:new/1:
ok = erlydb:start(mysql,Database),
ok = erlydb:code_gen(["thing.erl"],mysql),
You include only the stack trace; look at the exception message as well. I suspect that you are getting an 'undef' exception, but check whether that is so. The first line in the stack trace indicates a problem with calling thing:new/1 with ["name", "value"] as the argument.
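To see the class and reason of the exception, and not just the trace, you can wrap the body in try/catch and log whatever is thrown. A rough sketch against the pre-OTP-21 API, which matches the era of this code; Database is assumed to be bound elsewhere, as in your original:
handle_request("index", []) ->
    try
        ok = erlydb:start(mysql, Database),
        ok = erlydb:code_gen(["thing.erl"], mysql),
        NewThing = thing:new_with([{name, "name"}, {value, "value"}]),
        thing:save(NewThing),
        {render, "home/index.html", [{data, "Hello World!"}]}
    catch
        Class:Reason ->
            %% Class is error/throw/exit; Reason will be e.g. undef if
            %% thing:new_with/1 does not exist in the generated module
            error_logger:error_msg("~p:~p~n~p~n",
                                   [Class, Reason, erlang:get_stacktrace()]),
            {render, "home/index.html", [{data, "error"}]}
    end.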
It is slightly odd that the handle_request clause you show does not call home_controller:create/1, yet {home_controller,create,1} appears in the stack trace. What do the other clauses of your handle_request/2 function look like?