I am using atomic updates to update metadata in a Solr document collection. To do so, I use an external .json file where I record all the document IDs in the collection along with their metadata, and use the "set" command to commit the requested updates. But I noticed that whenever the external file is larger than approximately 8200 bytes / 220 lines, I get this error message:
"org.apache.solr.common.SolrException: Cannot parse provided JSON: Unexpected EOF: char=(EOF),position=8191 BEFORE=''"
This doesn't seem to be related to the actual content of the file (or a possible missing parenthesis or the like), as I reproduced it with different databases. Moreover, if I cut the external file into smaller files of less than 8000 bytes each, the updates work perfectly. Does anyone have an idea where this could come from?
The curl command to update the collection is as follows:
curl 'http://localhost:8983/solr/these/update/json?commit=true' -d @test5.json
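For reference, the workaround I found (cutting the file into sub-8000-byte pieces) can be automated. Here is a minimal sketch in Python, assuming the update file is a JSON array of partial documents; the 8000-byte limit and the field names are just illustrative:

```python
import json

# Split a list of partial documents (atomic updates) into batches whose
# serialized size stays under a byte limit (8000 here, safely below the
# observed 8192-byte threshold).
def split_updates(docs, limit=8000):
    batches, current = [], []
    for doc in docs:
        candidate = current + [doc]
        if len(json.dumps(candidate).encode("utf-8")) > limit and current:
            batches.append(current)   # current batch is full, start a new one
            current = [doc]
        else:
            current = candidate
    if current:
        batches.append(current)
    return batches
```

Each resulting batch can then be sent with the same curl command as above.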
The main Solr configuration file is included below. I can provide the JSON update file and any further details if needed.
Thanks in advance for your help,
Barthélémy
<?xml version="1.0" encoding="UTF-8" ?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!--
This is a DEMO configuration highlighting elements
specifically needed to get this example running
such as libraries and request handler specifics.
It uses defaults or does not define most of production-level settings
such as various caches or auto-commit policies.
See Solr Reference Guide and other examples for
more details on a well configured solrconfig.xml
https://cwiki.apache.org/confluence/display/solr/The+Well-Configured+Solr+Instance
-->
<config>
<!-- Controls what version of Lucene various components of Solr
adhere to. Generally, you want to use the latest version to
get all bug fixes and improvements. It is highly recommended
that you fully re-index after changing this setting as it can
affect both how text is indexed and queried.
-->
<luceneMatchVersion>6.6.0</luceneMatchVersion>
<!-- Load Data Import Handler and Apache Tika (extraction) libraries -->
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-dataimporthandler-.*\.jar"/>
<lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar"/>
<lib dir="${solr.install.dir:../../../..}/contrib/langid/lib" regex=".*\.jar"/>
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-langid-.*\.jar"/>
<requestHandler name="/select" class="solr.SearchHandler">
<lst name="defaults">
<str name="echoParams">explicit</str>
<str name="df">text</str>
</lst>
</requestHandler>
<requestHandler name="/dataimport" class="solr.DataImportHandler">
<lst name="defaults">
<str name="config">tika-data-config.xml</str>
</lst>
</requestHandler>
<updateRequestProcessorChain name="langid" default="true" onError = "skip">
<processor class="org.apache.solr.update.processor.LangDetectLanguageIdentifierUpdateProcessorFactory"
onError = "continue">
<str name="langid.fl">text</str>
<str name="langid.langField">language_s</str>
<str name="langid.threshold">0.8</str>
<str name="langid.fallback">en</str>
</processor>
<processor class="solr.LogUpdateProcessorFactory" onError = "skip"/>
<processor class="solr.RunUpdateProcessorFactory" onError = "skip"/>
</updateRequestProcessorChain>
<!-- The default high-performance update handler -->
<updateHandler class="solr.DirectUpdateHandler2">
<!-- Enables a transaction log, used for real-time get, durability, and
and solr cloud replica recovery. The log can grow as big as
uncommitted changes to the index, so use of a hard autoCommit
is recommended (see below).
"dir" - the target directory for transaction logs, defaults to the
solr data directory. -->
<updateLog>
<str name="dir">${solr.ulog.dir:}</str>
</updateLog>
</updateHandler>
</config>
I don't know if this will solve it for anybody else who runs into this, but I ran into the same issue.
My initial command looked like this:
curl http://localhost:8983/solr/your_solr_core/update?commit=true --data-binary @test5.json -H "Content-type:application/json"
Updating it to this solved the problem:
curl http://localhost:8983/solr/your_solr_core/update?commit=true -H "Content-Type: application/json" -T "test5.json" -X POST
Apparently it has something to do with curl loading the whole file into memory with the first command, which causes issues, whereas the second command streams the file and uses minimal memory.
Try editing server/etc/jetty.xml and tweaking requestHeaderSize:
<Set name="requestHeaderSize"><Property name="solr.jetty.request.header.size" default="8192" /></Set>
to something larger than your largest update file.
This appeared for two larger requests, neither of which failed or errored, in a test case run with a single user.
However, it does not appear in the five-user run of the same test case.
I haven't been able to find any Apache documentation about "Infinity" values appearing during test runs.
Has anyone faced this? If so, did you find a way to get the reporting tool to list the true numeric value?
Example of "Infinity" appearing in statistics.json
If you have "Infinity" in statistics.json, it means that the relevant Sampler has failed somewhere, somehow (it wasn't executed for some reason).
The reason can be found in:
the .jtl results file (take a look at the "responseMessage" column)
the jmeter.log file
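To quickly spot which transactions carry the bogus values, you can also scan statistics.json for infinite numbers. A minimal Python sketch, assuming the usual dashboard export layout (one object per transaction, metric names may differ in your version):

```python
import json
import math

# Return the names of transactions in a JMeter statistics.json export
# that contain at least one infinite numeric value (i.e. failed samplers).
def find_infinity_transactions(statistics_text):
    stats = json.loads(statistics_text)  # Python's json accepts the literal Infinity
    bad = []
    for name, metrics in stats.items():
        if any(isinstance(v, float) and math.isinf(v) for v in metrics.values()):
            bad.append(name)
    return bad
```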
If you want to see where the values come from and how the statistics are built and processed, increase JMeter's logging verbosity for the HTML Reporting Dashboard packages by adding the following line to the log4j2.xml file:
<Logger name="org.apache.jmeter.report.dashboard" level="debug" />
The easiest way to reproduce the issue is to create a "bad" request, for example an HTTP Request sampler whose server name contains characters such as "://". It won't be executed, because you cannot have "://" characters in a DNS hostname, and this transaction will have "Infinity" values in statistics.json.
UPDATE: This is the code that kills Firebase functionality. If I try to download from Firebase a couple of seconds after this code runs (waiting with await Task.Delay, no other code running), it starts throwing the Code: -13000, HttpResult: 0 exception. The same request works before this code. The map works.
// Create the map fragment and attach it to the layout;
// Firebase requests stop working shortly after this commit.
GoogleMapFragment gmf = new GoogleMapFragment(context, this);
FragmentTransaction ft = activity.FragmentManager.BeginTransaction();
ft.Add(mapLayout.Id, gmf, "my_fragment");
ft.Commit();
I wanted to have a Google map in a layout in the same activity where I work with Firebase. The map works, but somehow it interferes with Firebase, which works only before the Google map is created. Any ideas what could cause this?
Update 2: If I download a small file before initializing Google Maps, I can use Firebase later, so I 'solved' the issue in a slightly dirty way, but at least I can continue working. After this 'fix' I get the following error in the output, but the file is downloaded anyway. I must continue digging; for now I hope the worst is over...
error getting token java.util.concurrent.ExecutionException: com.google.firebase.FirebaseApiNotAvailableException: firebase-auth is not linked, please fall back to unauthenticated mode.
Old version of the question:
I checked all the possible answers to my question here on SO, but nothing pointed me the right way. There's quite obvious output telling me that something is wrong, but I have no idea how to solve the issue. An answer to this question says that one possible reason for HttpResult = 0 is that the Google Play version on the phone isn't recent enough. I used the recommended method to check, and I have Google Play services 11.5.18 installed on the phone. I have Xamarin.Firebase.Storage 42.1021.1 (10.2.1) installed and am using Visual Studio 2015. Quite often I had to clean and rebuild, and it sometimes worked, but not this time. In the Android properties I compile using Android version 7.1 Nougat. I created a Firebase account just recently, not knowing much about it, added it in the Google console to an existing project (as I already use Google Maps), and filled in the SHA-1 key the same way I did for Maps. I added google-services.json and set its build action to GoogleServicesJson. No other actions I know about.
Here is my code. I tried various ways to download and upload, but this one seems to be a good example:
FirebaseApp fba = FirebaseApp.InitializeApp(context);
firebaseStorage = FirebaseStorage.Instance;
firebaseStorageReference = firebaseStorage.GetReferenceFromUrl("gs://alien-chess.appspot.com");
firebaseStorageReference = firebaseStorageReference.Child("settings.dat");
byte[] bytes = new byte[1000];
firebaseStorageReference.PutBytes(bytes).AddOnFailureListener(new FirebaseFailureListener(this));
Here is my manifest file
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="AlienChessAndroid.AlienChessAndroid" android:versionCode="1" android:versionName="1.0" android:installLocation="preferExternal">
<uses-sdk android:minSdkVersion="19" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES" />
<uses-permission android:name="android.permission.WAKE_LOCK" />
<application android:label="Alien Chess" android:icon="@drawable/Alien" android:largeHeap="true">
<meta-data android:name="com.google.android.gms.version" android:value="@integer/google_play_services_version" />
<meta-data android:name="com.google.android.maps.v2.API_KEY" android:value="AIzaxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" />
</application>
</manifest>
And here are what I think are the important parts of the output window:
Failed to retrieve remote module version: V2 version check failed
Local module descriptor class for com.google.android.gms.firebasestorage not found.
Considering local module com.google.android.gms.firebasestorage:0 and remote module com.google.android.gms.firebasestorage:0
NetworkRequestFactoryProxy failed with a RemoteException:
com.google.android.gms.dynamite.DynamiteModule$zza: No acceptable module found. Local version is 0 and remote version is 0.
....
Unable to create a network request from metadata
android.os.RemoteException
....
StorageException has occurred.
An unknown error occurred, please check the HTTP result code and inner exception for server response.
Code: -13000 HttpResult: 0
There aren't many C# resources for Visual Studio, and I can't easily follow recommendations written for Android Studio, as they are quite different for unskilled programmers.
Any ideas what else I should check?
We have a Scala Play app and we are using the LogglyBatchAppender, but logs from all our environments (dev, staging, prod) are being mixed up in Loggly. This page says we can group by source, hostname, or tag, but hostname info is not being attached to outgoing Loggly messages, and the wiki page says nothing about how to attach tags with the LogglyBatchAppender (it does mention how to tag using the slower LogglyAppender). What is the best way to see different host/env logs in Loggly if we are using the LogglyBatchAppender?
Hi, you can set the endpointUrl in the LogglyBatchAppender. When you set the URL, make sure to include the tag at the end of it. You can tag messages for dev/staging/prod; this way you can use Loggly's source groups.
Example with a prod tag. Remember to replace with your own customer token:
<configuration>
<appender name="logglyAppender" class="ch.qos.logback.ext.loggly.LogglyBatchAppender">
<endpointUrl>http://logs-01.loggly.com/inputs/YOUR-CUSTOMER-TOKEN/tag/prod/</endpointUrl>
...
</appender>
</configuration>
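If you deploy the same config to several environments, the tag portion of the URL can be built from the environment name. A hypothetical sketch (the token and environment names are placeholders):

```python
# Build a Loggly bulk endpoint URL with an environment tag appended,
# so Loggly source groups can separate dev/staging/prod traffic.
def loggly_endpoint(token, env):
    return "http://logs-01.loggly.com/inputs/{}/tag/{}/".format(token, env)
```

In a logback.xml you would typically achieve the same thing with property substitution, e.g. a `${loggly.tag}` variable in the endpointUrl, set per environment.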
I am using NLog with MySQL as the target database.
My configuration is as follows:
<target name="databaselog" type="Database" keepConnection="true"
        useTransactions="false"
        dbProvider="MySql.Data.MySqlClient"
        connectionString="Server=localhost;Database=****;User ID=****;Password=****;Connect Timeout=5;"
        commandText="insert into logs(time_stamp,logger,message,log_level) Values(@TIME_STAMP,@LOGGER,@MESSAGE,@LOGLEVEL)">
  <parameter name="@TIME_STAMP" layout="${longdate}"/>
  <parameter name="@LOGGER" layout="${logger}"/>
  <parameter name="@MESSAGE" layout="${message}"/>
  <parameter name="@LOGLEVEL" layout="${level:uppercase=true}"/>
</target>
I'm still not able to insert info (or any other level) messages into the MySQL DB.
Can anyone please help me out?
By the way, I also tried the command text as
insert into logs(time_stamp,logger,message,log_level) Values(?,?,?,?)
but I was still not able to insert the data into the MySQL DB.
From NLog docs:
NLog is designed to swallow run-time exceptions that may result from logging. The following settings can change this behavior and/or redirect these messages.
<nlog throwExceptions="true" /> - adding the throwExceptions attribute in the config file causes NLog to stop masking exceptions and pass them to the calling application instead. This attribute is useful at deployment time to quickly locate any problems. It’s recommended to set throwExceptions to "false" as soon as the application is properly configured to run, so that any accidental logging problems won’t crash the application.
<nlog internalLogFile="file.txt" /> - adding internalLogFile causes NLog to write its internal debugging messages to the specified file. This includes any exceptions that may be thrown during logging.
<nlog internalLogLevel="Trace|Debug|Info|Warn|Error|Fatal" /> – determines the internal log level. The higher the level, the less verbose the internal log output.
<nlog internalLogToConsole="false|true" /> – determines whether internal logging messages are sent to the console.
<nlog internalLogToConsoleError="false|true" /> – determines whether internal logging messages are sent to the console error output (stderr).
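Putting those settings together, a minimal sketch of the top-level nlog element for debugging the MySQL target might look like this (the log file path is just a placeholder):

```xml
<nlog throwExceptions="true"
      internalLogFile="c:\temp\nlog-internal.log"
      internalLogLevel="Debug">
  <!-- targets and rules go here -->
</nlog>
```

The internal log file should then show the exact exception the Database target throws on insert.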
Right-click the NLog configuration file and set its "Copy to Output Directory" property to "Copy always".
How to include pom version number into Jenkins e-mail notification?
This is to notify the test team about a successful build and the build version. For now, we can only send a generic e-mail without any useful content in it.
I have tried the following, but none of them succeeded:
grep and export in a post-build step, but I can't pass the result into the e-mail notification plugin
the (.*) annotation, but it doesn't work for the plugin
Anyone have any idea?
You may use the Extended Email Notification plugin, which can parse your build log using regular expressions.
When you install the plugin, you first configure its default behavior on the main Jenkins configuration page. Then you customize it per job: go to Post-Build Actions and check the 'Editable Email Notification' box. Use the 'Content Token Reference' help button to see the tokens you may use. Among them is the BUILD_LOG_REGEX token, with an explanation of its usage.
So what you can do is output your POM version to the build log in some easily parseable form and then extract it into your e-mail using BUILD_LOG_REGEX.
Here's an actual test build (for Windows) that echoes a boo_%BUILD_ID%_foo line to the output; the plugin parses out that line and sends an e-mail that looks like this:
Here we go, Joe:
boo_2012-01-30_23-04-29_foo
config.xml for the job:
<?xml version='1.0' encoding='UTF-8'?>
<project>
<actions/>
<description></description>
<keepDependencies>false</keepDependencies>
<properties/>
<scm class="hudson.scm.NullSCM"/>
<canRoam>true</canRoam>
<disabled>false</disabled>
<blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
<blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
<triggers class="vector"/>
<concurrentBuild>false</concurrentBuild>
<builders>
<hudson.tasks.BatchFile>
<command>echo boo_%BUILD_ID%_foo
</command>
</hudson.tasks.BatchFile>
</builders>
<publishers>
<hudson.plugins.emailext.ExtendedEmailPublisher>
<recipientList>youremail@company.com</recipientList>
<configuredTriggers>
<hudson.plugins.emailext.plugins.trigger.FailureTrigger>
<email>
<recipientList></recipientList>
<subject>$PROJECT_DEFAULT_SUBJECT</subject>
<body>$PROJECT_DEFAULT_CONTENT</body>
<sendToDevelopers>false</sendToDevelopers>
<includeCulprits>false</includeCulprits>
<sendToRecipientList>true</sendToRecipientList>
</email>
</hudson.plugins.emailext.plugins.trigger.FailureTrigger>
<hudson.plugins.emailext.plugins.trigger.SuccessTrigger>
<email>
<recipientList></recipientList>
<subject>$PROJECT_DEFAULT_SUBJECT</subject>
<body>$PROJECT_DEFAULT_CONTENT</body>
<sendToDevelopers>false</sendToDevelopers>
<includeCulprits>false</includeCulprits>
<sendToRecipientList>true</sendToRecipientList>
</email>
</hudson.plugins.emailext.plugins.trigger.SuccessTrigger>
</configuredTriggers>
<contentType>text/plain</contentType>
<defaultSubject>$DEFAULT_SUBJECT</defaultSubject>
<defaultContent>Here we go, Joe:
${BUILD_LOG_REGEX, regex="^boo.*?foo.*?$",showTruncatedLines=false}
</defaultContent>
</hudson.plugins.emailext.ExtendedEmailPublisher>
</publishers>
<buildWrappers/>
</project>
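The BUILD_LOG_REGEX token is doing ordinary line-by-line regex matching over the console log, so you can verify your pattern outside Jenkins before wiring it into the job. An equivalent check in Python (the pattern is the one from the config above):

```python
import re

# Extract log lines matching the same pattern the email-ext token uses:
# ${BUILD_LOG_REGEX, regex="^boo.*?foo.*?$", showTruncatedLines=false}
def grep_build_log(log_text, pattern=r"^boo.*?foo.*?$"):
    return re.findall(pattern, log_text, flags=re.MULTILINE)
```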
Just use the following property:
${POM_VERSION}