I need to rotate or back up the current log file on startup, but I can't work out how to do it.
I have a typical log config:
appenders:
  - type: console
    threshold: INFO
  - type: file
    threshold: DEBUG
    logFormat: "%-6level [%d{HH:mm:ss.SSS}] [%t] %logger{5} - %X{code} %msg %n"
    currentLogFilename: /var/log/dw-service.log
    archive: true
    archivedLogFilenamePattern: /var/log/tmp/dw-service-%d{yyyy-MM-dd}-%i.log.gz
    archivedFileCount: 3
    timeZone: UTC
    maxFileSize: 10MB
Ideally I would like to trigger an archive of the logfile immediately on startup but, failing that, I would settle for getting a handle to the appender and copying the file.
I can't work out how to do either. Could somebody please offer any help?
I've got this far:
private void backupLogFile() {
    ch.qos.logback.classic.Logger log =
            (ch.qos.logback.classic.Logger) LoggerFactory.getLogger(LOG.ROOT_LOGGER_NAME);
    Iterator<Appender<ILoggingEvent>> itr = log.iteratorForAppenders();
    List<Appender<ILoggingEvent>> appenders = new LinkedList<Appender<ILoggingEvent>>();
    while (itr.hasNext()) {
        appenders.add(itr.next());
    }
    for (int i = 0; i < appenders.size(); ++i) {
        if (appenders.get(i).getName().equals("async-file-appender")) {
            LOG.info("FOUND FILE APPENDER");
            FileAppender myFile = (FileAppender) appenders.get(i); // throws the ClassCastException shown below
            String filename = myFile.getFile();
        }
    }
}
At runtime, Java tells me:
Exception in thread "main" java.lang.ClassCastException: ch.qos.logback.classic.AsyncAppender incompatible with ch.qos.logback.core.FileAppender
I can't work out how to get the configured currentLogFilename. Does anybody know how to do this?
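For reference, the cast fails because the file appender is wrapped in an async appender (hence the name async-file-appender in the config). Logback's AsyncAppender is itself an AppenderAttachable, so an untested sketch along these lines should reach the wrapped FileAppender (assuming the usual logback imports such as ch.qos.logback.classic.AsyncAppender and ch.qos.logback.core.FileAppender):

private String findCurrentLogFilename() {
    ch.qos.logback.classic.Logger root =
            (ch.qos.logback.classic.Logger) LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME);
    Iterator<Appender<ILoggingEvent>> itr = root.iteratorForAppenders();
    while (itr.hasNext()) {
        Appender<ILoggingEvent> appender = itr.next();
        // The async appender is an AppenderAttachable, so unwrap one more level.
        if (appender instanceof ch.qos.logback.classic.AsyncAppender) {
            Iterator<Appender<ILoggingEvent>> wrapped =
                    ((ch.qos.logback.classic.AsyncAppender) appender).iteratorForAppenders();
            while (wrapped.hasNext()) {
                Appender<ILoggingEvent> inner = wrapped.next();
                if (inner instanceof FileAppender) {
                    return ((FileAppender<ILoggingEvent>) inner).getFile();
                }
            }
        }
    }
    return null; // no file appender found
}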
Thanks
Good day!
I have built a small IoT device that monitors the conditions inside a specific enclosure using an ESP32 and a couple of sensors. I want to monitor that data by publishing it to the ThingSpeak cloud, then writing it to InfluxDB with Telegraf and finally using the InfluxDB data source in Grafana to visualize it.
So far I have made everything work flawlessly, with one small exception: one of the plugins in my Telegraf config fails with the error:
parsing metrics failed: Unable to convert field 'temperature' to type int: strconv.ParseInt: parsing "15.4": invalid syntax
The plugins are [[inputs.http]] and [[inputs.http.json_v2]]; I use them to authenticate against my ThingSpeak API and to parse the JSON output of my fields. In my /etc/telegraf/telegraf.conf, under [[inputs.http.json_v2.field]], I have added type = "int", because otherwise Telegraf writes my metrics as strings to InfluxDB and the only way to visualize them is either a table or a single stat: the rest of the Flux queries fail with the error unsupported input type for mean aggregate: string. However, when I change it to type = "float" in the config file, I get a different error:
unprocessable entity: failure writing points to database: partial write: field type conflict: input field "temperature" on measurement "sensorData" is type float, already exists as type string dropped=1
I have a suspicion that I have misconfigured the parser plugin; however, after hours of debugging I couldn't come up with a solution.
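An alternative worth noting: the conversion can also be done after parsing, with Telegraf's converter processor rather than the parser's type option. A sketch, assuming the field is renamed to temperature as in the config below:

# Sketch only: convert the parsed string field to a float after ingestion,
# instead of setting type in the json_v2 parser.
[[processors.converter]]
  [processors.converter.fields]
    float = ["temperature"]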
Some information that might be of use:
Telegraf version: Telegraf 1.24.2
Influxdb version: InfluxDB v2.4.0
Please see below for my telegraf.conf as well as the error messages.
Any help would be highly appreciated! (:
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 1000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = ""
  hostname = ""
  omit_hostname = false

[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "XXXXXXXX"
  organization = "XXXXXXXXX"
  bucket = "sensor"

[[inputs.http]]
  urls = [
    "https://api.thingspeak.com/channels/XXXXX/feeds.json?api_key=XXXXXXXXXX&results=2"
  ]
  name_override = "sensorData"
  tagexclude = ["url", "host"]
  data_format = "json_v2"
  ## HTTP method
  method = "GET"

  [[inputs.http.json_v2]]
    [[inputs.http.json_v2.field]]
      path = "feeds.1.field1"
      rename = "temperature"
      type = "int"    # Error message 1
      #type = "float" # Error message 2
Error when type = "float":
me@myserver:/etc/telegraf$ telegraf -config telegraf.conf --debug
2022-10-16T00:31:43Z I! Starting Telegraf 1.24.2
2022-10-16T00:31:43Z I! Available plugins: 222 inputs, 9 aggregators, 26 processors, 20 parsers, 57 outputs
2022-10-16T00:31:43Z I! Loaded inputs: http
2022-10-16T00:31:43Z I! Loaded aggregators:
2022-10-16T00:31:43Z I! Loaded processors:
2022-10-16T00:31:43Z I! Loaded outputs: influxdb_v2
2022-10-16T00:31:43Z I! Tags enabled: host=myserver
2022-10-16T00:31:43Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"myserver", Flush Interval:10s
2022-10-16T00:31:43Z D! [agent] Initializing plugins
2022-10-16T00:31:43Z D! [agent] Connecting outputs
2022-10-16T00:31:43Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2022-10-16T00:31:43Z D! [agent] Successfully connected to outputs.influxdb_v2
2022-10-16T00:31:43Z D! [agent] Starting service inputs
2022-10-16T00:31:53Z E! [outputs.influxdb_v2] Failed to write metric to sensor (will be dropped: 422 Unprocessable Entity): unprocessable entity: failure writing points to database: partial write: field type conflict: input field "temperature" on measurement "sensorData" is type float, already exists as type string dropped=1
2022-10-16T00:31:53Z D! [outputs.influxdb_v2] Wrote batch of 1 metrics in 8.9558ms
2022-10-16T00:31:53Z D! [outputs.influxdb_v2] Buffer fullness: 0 / 10000 metrics
Error when type = "int":
me@myserver:/etc/telegraf$ telegraf -config telegraf.conf --debug
2022-10-16T00:37:05Z I! Starting Telegraf 1.24.2
2022-10-16T00:37:05Z I! Available plugins: 222 inputs, 9 aggregators, 26 processors, 20 parsers, 57 outputs
2022-10-16T00:37:05Z I! Loaded inputs: http
2022-10-16T00:37:05Z I! Loaded aggregators:
2022-10-16T00:37:05Z I! Loaded processors:
2022-10-16T00:37:05Z I! Loaded outputs: influxdb_v2
2022-10-16T00:37:05Z I! Tags enabled: host=myserver
2022-10-16T00:37:05Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"myserver", Flush Interval:10s
2022-10-16T00:37:05Z D! [agent] Initializing plugins
2022-10-16T00:37:05Z D! [agent] Connecting outputs
2022-10-16T00:37:05Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2022-10-16T00:37:05Z D! [agent] Successfully connected to outputs.influxdb_v2
2022-10-16T00:37:05Z D! [agent] Starting service inputs
2022-10-16T00:37:10Z E! [inputs.http] Error in plugin: [url=https://api.thingspeak.com/channels/XXXXXX/feeds.json?api_key=XXXXXXX&results=2]: parsing metrics failed: Unable to convert field 'temperature' to type int: strconv.ParseInt: parsing "15.3": invalid syntax
Fixed it by leaving type = "float" under [[inputs.http.json_v2.field]] in telegraf.conf and creating a NEW bucket with a new API key in InfluxDB.
The issue was that the bucket sensor previously defined in my telegraf.conf already had the field temperature created in my Influx database from earlier tries, with its type set to string, and an existing field's type cannot be overwritten with the new float type.
As soon as I deleted all pre-existing buckets, everything started working as expected.
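For anyone hitting the same conflict: the stale bucket can also be dropped and recreated from the command line. A sketch, assuming the influx v2 CLI is set up with an active config holding your token; the bucket name sensor is from the telegraf.conf above and <your-org> is a placeholder:

# Drop the bucket holding the string-typed "temperature" field,
# then recreate it so Telegraf can write the field as a float.
influx bucket delete --name sensor --org <your-org>
influx bucket create --name sensor --org <your-org>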
[screenshot: InfluxDB dashboard]
My AppPackage fails to load, and I'm unable to find the exact answer in the documentation or from the error message/code.
I tested the bundle by unzipping it into "C:\Program Files\Autodesk\ApplicationPlugins" on my local machine, and it runs/loads as expected.
The AppPackage reports that it was created successfully, and I'm sure it is the most up-to-date version.
The add-in is a .NET DLL.
Error Report Message
[02/15/2019 18:44:48] Starting work item ffbcfc1ca50546fc9a6372424b2cdae1
[02/15/2019 18:44:48] Start download phase.
[02/15/2019 18:44:48] Start downloading file <CENSORED>.
[02/15/2019 18:44:48] Start preparing AppPackage <CENSORED>.
[02/15/2019 18:44:48] Download bits and install app to local cache.
[02/15/2019 18:44:48] End downloading file <CENSORED>.
[02/15/2019 18:44:48] End download phase.
[02/15/2019 18:44:48] Error: Failed to prepare app package(s).
[02/15/2019 18:44:48] Error: An unexpected error happened during phase Downloading of job.
[02/15/2019 18:44:48] Job finished with result FailedEnvironmentSetup
PackageContents.XML
<?xml version="1.0" encoding="utf-8" ?>
<ApplicationPackage SchemaVersion="1.0" AutodeskProduct="AutoCAD"
                    AppVersion="0.1.0"
                    ProductType="Application"
                    Name="CENSORED"
                    Description="CENSORED"
                    Author="CENSORED"
                    FriendlyVersion="0.1.0"
                    ProductCode="{CENSORED}"
                    UpgradeCode="{CENSORED}"
                    Helpfile="./help.html"
                    Icon="./my-icon.jpeg">
  <CompanyDetails Name="CENSORED" Phone="CENSORED" Email="CENSORED"/>
  <Components>
    <RuntimeRequirements SeriesMin="R22.0" Platform="AutoCAD*" OS="Win64"/>
    <ComponentEntry AppName="CENSORED" Version="0.1.0" ModuleName="./CENSORED.dll" AppType=".Net"
                    AppDescription="CENSORED" LoadOnAutoCADStartup="True">
    </ComponentEntry>
  </Components>
</ApplicationPackage>
Activity Definition:
Note: I had to manually expand some inline functions here, since I have this code broken into multiple parts. If I have introduced a typo, rest assured the actual code runs and is syntactically valid.
let activity = <CreateActivityRequest>{
    Id: id,
    Version: 1,
    IsPublic: false,
    AppPackages: ['PACKAGE_NAME'],
    Instruction: {Script: 'D6 '},
    RequiredEngineVersion: '22.0',
    Parameters: {
        InputParameters: [{Name: 'HostDwg', LocalFileName: '$(HostDwg)'}],
        OutputParameters: [{Name: 'output', LocalFileName: `output.json`}]
    },
    HostApplication: undefined,
    AllowedChildProcesses: []
};
Entry from AppPackages Listing:
{
    References: [],
    Resource: '...',
    RequiredEngineVersion: '22.0',
    IsPublic: false,
    IsObjectEnabler: false,
    Version: 1,
    Timestamp: '2019-02-15T19:32:33.527Z',
    Description: '',
    Id: 'CENSORED'
},
Make sure to double-check how you zipped the AppPackage you uploaded. If you look inside your zip file, there must be a folder named PACKAGE_NAME.bundle, and the PackageContents.XML file must be inside that PACKAGE_NAME.bundle folder.
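In other words, the zip should contain the .bundle folder itself, not just its contents. Using the file names from the PackageContents.XML above, the expected layout is roughly:

PACKAGE_NAME.zip
└── PACKAGE_NAME.bundle/
    ├── PackageContents.xml
    ├── help.html
    ├── my-icon.jpeg
    └── CENSORED.dll

If PackageContents.xml ends up at the zip root instead, the work item fails during environment setup, which matches the FailedEnvironmentSetup result in the report above.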
I am at the end of my tether with Log4J2; hopefully somebody can help. I have the following code to initialize Log4J2, pretty soon after startup:
try (InputStream configStream = new ByteArrayInputStream(writer.toString().getBytes("UTF-8"))) {
    ConfigurationSource configurationSource = new ConfigurationSource(configStream);
    Configurator.initialize(null, configurationSource);
}
Where writer is a StringWriter and toString() produces the following config (which I have validated is correct through other means):
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <Console name="C" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p %m%n"/>
    </Console>
    <RollingFile name="R" fileName="C:\temp\logfile.log" filePattern="C:\temp\logfile.log.%d{yyyy-MM-dd}">
      <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%t] %m%n"/>
      <Policies>
        <TimeBasedTriggeringPolicy interval="1" modulate="true"/>
      </Policies>
    </RollingFile>
  </Appenders>
  <Loggers>
    <Logger name="somename" level="debug">
      <appender-ref ref="R"/>
    </Logger>
    <Root level="debug" additivity="false">
      <appender-ref ref="C" level="info"/>
    </Root>
  </Loggers>
</Configuration>
As you may have guessed, this does not work and I get no error message other than the expected:
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
The reason I say this is expected is because I am configuring Log4J2 manually and have not suppressed this message yet.
Unfortunately, I cannot read the config from a file, for legacy reasons.
UPDATE 1:
After taking Remko's advice, I added the following block before invoking the initialize method:
System.setProperty("log4j2.disable.jmx", "true");
StatusLogger status = StatusLogger.getLogger();
status.clear(); // remove old listeners that may prevent status output
status.setLevel(Level.TRACE);
status.reset(); // I could not see any trace info until I called this
status.trace("Status -- TRACE"); // I added this to prove that trace level logging was working
This gave me the following output:
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
TRACE StatusLogger Status -- TRACE
DEBUG StatusLogger Stopping LoggerContext[name=sun.misc.Launcher$AppClassLoader#647e05, org.apache.logging.log4j.core.LoggerContext#a3defe]
DEBUG StatusLogger Stopping LoggerContext[name=sun.misc.Launcher$AppClassLoader#647e05, org.apache.logging.log4j.core.LoggerContext#a3defe]...
DEBUG StatusLogger Unregistering MBean org.apache.logging.log4j2:type=sun.misc.Launcher$AppClassLoader#647e05
DEBUG StatusLogger Unregistering MBean org.apache.logging.log4j2:type=sun.misc.Launcher$AppClassLoader#647e05,component=StatusLogger
DEBUG StatusLogger Unregistering MBean org.apache.logging.log4j2:type=sun.misc.Launcher$AppClassLoader#647e05,component=ContextSelector
DEBUG StatusLogger Unregistering MBean org.apache.logging.log4j2:type=sun.misc.Launcher$AppClassLoader#647e05,component=Appenders,name=Console
TRACE StatusLogger Stopping org.apache.logging.log4j.core.config.DefaultConfiguration#1a1440e...
TRACE StatusLogger AbstractConfiguration stopped 0 AsyncLoggerConfigs.
TRACE StatusLogger AbstractConfiguration stopped 0 AsyncAppenders.
DEBUG StatusLogger Shutting down OutputStreamManager SYSTEM_OUT
TRACE StatusLogger AbstractConfiguration stopped 1 Appenders.
TRACE StatusLogger AbstractConfiguration stopped 0 Loggers.
DEBUG StatusLogger Stopped org.apache.logging.log4j.core.config.DefaultConfiguration#9a6398 OK
DEBUG StatusLogger Stopped LoggerContext[name=sun.misc.Launcher$AppClassLoader#647e05, org.apache.logging.log4j.core.LoggerContext#9a6398]...
UPDATE 2:
I decided to figure out a way to work with a file rather than a ByteArrayInputStream and got it working. FWIW, I think there is a bug in the Log4J2 code when attempting to initialize using an InputStream. My theory:
In Log4jContextFactory, the following method:
public LoggerContext getContext(final String fqcn, final ClassLoader loader, final Object externalContext,
                                final boolean currentContext, final ConfigurationSource source)
has the following if statement, which always evaluates to false, which means the default config is always returned:
if (ctx.getState() == LifeCycle.State.INITIALIZED) {
    if (source != null) {
        ContextAnchor.THREAD_CONTEXT.set(ctx);
        final Configuration config = ConfigurationFactory.getInstance().getConfiguration(source);
        LOGGER.debug("Starting LoggerContext[name={}] from configuration {}", ctx.getName(), source);
        ctx.start(config);
        ContextAnchor.THREAD_CONTEXT.remove();
    } else {
        ctx.start();
    }
}
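If working from a file had not been an option, one possible workaround sketch, built from the same calls as the block above (untested; note that the ConfigurationFactory.getConfiguration signature has changed across 2.x releases), would be to skip Configurator.initialize and start the context explicitly:

// Untested sketch; assumes org.apache.logging.log4j.LogManager and the
// log4j-core Configuration/ConfigurationFactory/ConfigurationSource imports.
try (InputStream configStream = new ByteArrayInputStream(writer.toString().getBytes("UTF-8"))) {
    ConfigurationSource source = new ConfigurationSource(configStream);
    org.apache.logging.log4j.core.LoggerContext ctx =
            (org.apache.logging.log4j.core.LoggerContext) LogManager.getContext(false);
    // Build the Configuration explicitly and start the context with it,
    // which is what the skipped branch above would have done.
    Configuration config = ConfigurationFactory.getInstance().getConfiguration(source);
    ctx.start(config);
}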
At first glance I don't see why your configuration does not work. (You could try using forward slashes in the paths to make absolutely sure, but chances are that the slashes are not the problem.)
Can you try the following to generate more log4j2 debug output to see where the configuration goes wrong? Please post the result in your question.
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.status.StatusLogger;
// In an XML configuration you can just use <Configuration status="TRACE"...
// Here we use less elegant code to switch on status logging
// since we're not sure where things break down.
System.setProperty("log4j2.disable.jmx", "true");
StatusLogger status = StatusLogger.getLogger();
status.clear(); // remove old listeners that may prevent status output
status.setLevel(Level.TRACE);
// now configure log4j2...
// This should generate trace-level debug output to the console.
try (InputStream configStream = new ByteArrayInputStream(writer.toString().getBytes())) {
    ConfigurationSource configurationSource = new ConfigurationSource(configStream);
    Configurator.initialize(null, configurationSource);
}
I have the following Java code to run a SPARQL query against a backend DB (PostgreSQL).
import rdfProcessing.RDFRepository;
import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.List;
import org.openrdf.query.QueryLanguage;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.manager.LocalRepositoryManager;
import org.openrdf.repository.manager.RepositoryManager;
import org.openrdf.sail.config.SailImplConfig;
import org.openrdf.sail.memory.config.MemoryStoreConfig;
import org.openrdf.repository.config.RepositoryImplConfig;
import org.openrdf.repository.sail.config.SailRepositoryConfig;
import org.openrdf.repository.config.RepositoryConfig;
public class Qeryrdf {

    Connection connection;
    private static final String REPO_ID = "C:\\RDF_triples\\univData10m\\repositories\\SYSTEM\\memorystore.data";
    private static final String q1 = ""
            + "PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>"
            + "PREFIX ub:<http://univ.org#>"
            + "PREFIX owl:<http://www.w3.org/2002/07/owl#>"
            + "PREFIX rdf:<http://www.w3.org/1999/02/22-rdf-syntax-ns#>"
            + " select distinct ?o ?p where"
            + "{ ?s rdf:type ?o."
            + "}";

    public static void main(String[] args) throws Exception {
        LocalRepositoryManager manager = new LocalRepositoryManager(new File("C:\\RDF triples\\univData1"));
        manager.initialize();
        try {
            Qeryrdf queryrdf = new Qeryrdf();
            queryrdf.executeQueries(manager);
        } finally {
            manager.shutDown();
        }
    }

    private void executeQueries(RepositoryManager manager) throws Exception {
        SailImplConfig backendConfig = new MemoryStoreConfig();
        RepositoryImplConfig repositoryTypeSpec = new SailRepositoryConfig(backendConfig);
        String repositoryId = REPO_ID;
        RepositoryConfig repConfig = new RepositoryConfig(repositoryId, repositoryTypeSpec);
        manager.addRepositoryConfig(repConfig);
        Repository repo = manager.getRepository(repositoryId);
        repo.initialize();
        RepositoryConnection con = repo.getConnection();
        RDFRepository repository = new RDFRepository();
        String repoDir = "C:\\RDF triples\\univData1";
        repository.initializeRepository(repoDir);
        System.out.println("Executing the query");
        executeQuery(q1, con);
        con.close();
        repo.shutDown();
    }

    private void executeQuery(String query, RepositoryConnection con) {
        getConnection();
        try {
            TupleQueryResult result = con.prepareTupleQuery(QueryLanguage.SPARQL, query).evaluate();
            int resultCount = 0;
            long time = System.currentTimeMillis();
            while (result.hasNext()) {
                result.next();
                resultCount++;
            }
            time = System.currentTimeMillis() - time;
            System.out.printf("Result count: %d in %fs.\n", resultCount, time / 1000.0);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void getConnection() {
        try {
            Class.forName("org.postgresql.Driver");
            connection = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/myDB01", "postgres", "aabbcc");
        } catch (Exception e) {
            e.printStackTrace();
            System.err.println(e.getClass().getName() + ": " + e.getMessage());
            System.exit(0);
        }
        System.out.println("The database opened successfully");
    }
}
And I got the following result:
16:46:44.546 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Initializing MemoryStore...
16:46:44.578 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Reading data from C:\RDF triples\univData1\repositories\SYSTEM\memorystore.data...
16:46:44.671 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Data file read successfully
16:46:44.671 [main] DEBUG org.openrdf.sail.memory.MemoryStore - MemoryStore initialized
16:46:44.765 [main] DEBUG org.openrdf.sail.memory.MemoryStore - syncing data to file...
16:46:44.796 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Data synced to file
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - React to commit on SystemRepository for contexts [_:node18j9mufr0x1]
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - Processing modified context _:node18j9mufr0x1.
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - Is _:node18j9mufr0x1 a repository config context?
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - Reacting to modified repository config for C:\RDF triples\univData1\repositories\SYSTEM\memorystore.data
16:46:44.796 [main] DEBUG o.o.r.manager.LocalRepositoryManager - Modified repository C:\RDF triples\univData1\repositories\SYSTEM\memorystore.data has not been initialized, skipping...
16:46:44.812 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.contextaware.config.ContextAwareFactory
16:46:44.812 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.dataset.config.DatasetRepositoryFactory
16:46:44.843 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.http.config.HTTPRepositoryFactory
16:46:44.843 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.sail.config.SailRepositoryFactory
16:46:44.843 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.sail.config.ProxyRepositoryFactory
16:46:44.843 [main] DEBUG o.o.r.config.RepositoryRegistry - Registered service class org.openrdf.repository.sparql.config.SPARQLRepositoryFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.federation.config.FederationFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.inferencer.fc.config.ForwardChainingRDFSInferencerFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.inferencer.fc.config.DirectTypeHierarchyInferencerFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.inferencer.fc.config.CustomGraphQueryInferencerFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.memory.config.MemoryStoreFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.nativerdf.config.NativeStoreFactory
16:46:44.859 [main] DEBUG org.openrdf.sail.config.SailRegistry - Registered service class org.openrdf.sail.rdbms.config.RdbmsStoreFactory
16:46:44.875 [main] DEBUG org.openrdf.sail.memory.MemoryStore - Initializing MemoryStore...
16:46:44.875 [main] DEBUG org.openrdf.sail.memory.MemoryStore - MemoryStore initialized
16:46:44.876 [main] DEBUG o.openrdf.sail.nativerdf.NativeStore - Initializing NativeStore...
16:46:44.876 [main] DEBUG o.openrdf.sail.nativerdf.NativeStore - Data dir is C:\RDF triples\univData1
16:46:44.970 [main] DEBUG o.openrdf.sail.nativerdf.NativeStore - NativeStore initialized
Executing the query
The database opened successfully
16:46:45.735 [main] DEBUG o.o.query.parser.QueryParserRegistry - Registered service class org.openrdf.query.parser.serql.SeRQLParserFactory
16:46:45.751 [main] DEBUG o.o.query.parser.QueryParserRegistry - Registered service class org.openrdf.query.parser.sparql.SPARQLParserFactory
Result count: 0 in 0.000000s.
My problems are:
1. I have changed the SPARQL query many times but still retrieve 0 rows.
2. Does OpenRDF Sesame connect to a backend DB like PostgreSQL, MySQL, etc.?
3. If so, does OpenRDF Sesame translate the SPARQL query to SQL and then fetch the results from the backend DB?
Thanks in advance.
First, answers to your specific questions, in order:
1. If the query gives no results, that means that either the repository over which you're executing it is empty, or the query you're trying to execute matches no data in that repository. Since the way in which you set up and initialize your repository looks completely wrong (see remarks below), it is probably empty.
2. In general, yes, Sesame can connect to a PostgreSQL or MySQL database for storage and querying. However, your code does not do this, because you are not using a Sesame RDBMSStore as your SAIL storage backend; you are using a MemoryStore (which, as the name implies, is an in-memory database).
3. If you were using a Sesame PostgreSQL/MySQL store, then yes, it would translate SPARQL queries to SQL queries. But you're not using it. Also, the Sesame PostgreSQL/MySQL support is now deprecated: the recommendation is not to use it, but to use a NativeStore or MemoryStore instead, or any one of the many available third-party Sesame store implementations.
More generally, looking at your code, it is unclear what you're trying to accomplish, and I cannot believe your code actually compiles, let alone runs.
You're using a class RDFRepository in there somewhere, which doesn't exist in Sesame 2, and a method initializeRepository to which you pass a directory, which doesn't exist either. It looks vaguely like how things worked in Sesame 1, but that version of Sesame has been out of commission for at least six years now.
Then you have a method getConnection which sets up a connection to a PostgreSQL database, but that method doesn't accomplish anything: it just creates a Connection object, and nothing is ever done with that Connection.
I recommend that you go back to basics and have a good look through the Sesame documentation, especially the tutorial, and the chapter on Programming with Sesame, which explains how to create and manage repositories and how to work with them.
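For comparison, a minimal Sesame 2 sketch along the lines of the tutorial: open a NativeStore directly, load some data, and run the query. Untested; the directory and the query string q1 are taken from the question, univData.rdf is a hypothetical data file, and SailRepository/NativeStore come from org.openrdf.repository.sail and org.openrdf.sail.nativerdf.

private static void runQuery() throws Exception {
    // Open a NativeStore directly; no RepositoryManager or PostgreSQL involved.
    Repository repo = new SailRepository(new NativeStore(new File("C:\\RDF triples\\univData1")));
    repo.initialize();
    RepositoryConnection con = repo.getConnection();
    try {
        // The store must actually contain data before querying, e.g.:
        // con.add(new File("univData.rdf"), "http://univ.org#", RDFFormat.RDFXML);
        TupleQueryResult result = con.prepareTupleQuery(QueryLanguage.SPARQL, q1).evaluate();
        while (result.hasNext()) {
            System.out.println(result.next());
        }
        result.close();
    } finally {
        con.close();
        repo.shutDown();
    }
}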
When my Grails application crashes, it shows the error and the stacktrace on the error page, because the error.gsp page has the following snippet: <g:renderException exception="${exception}" />. However, nothing gets logged in the log file.
How can I change this? For the production application I plan to remove the renderException, because I don't want users to see the entire stacktrace.
My log4j settings are as follows:
log4j = {
    appenders {
        rollingFile name: 'catalinaOut', maxFileSize: 1024, fileName: "${System.properties.getProperty('catalina.home')}/logs/mylog.log"
    }
    root {
        error 'catalinaOut'
        debug 'catalinaOut'
        additivity = true
    }
    error 'org.codehaus.groovy.grails.web.servlet',        // controllers
          'org.codehaus.groovy.grails.web.pages',          // GSP
          'org.codehaus.groovy.grails.web.sitemesh',       // layouts
          'org.codehaus.groovy.grails.web.mapping.filter', // URL mapping
          'org.codehaus.groovy.grails.web.mapping',        // URL mapping
          'org.codehaus.groovy.grails.commons',            // core / classloading
          'org.codehaus.groovy.grails.plugins',            // plugins
          'org.codehaus.groovy.grails.orm.hibernate',      // hibernate integration
          'org.springframework',
          'org.hibernate',
          'net.sf.ehcache.hibernate',
          'grails.app'
    debug 'grails.app'
}
I'm running the app in development as grails run-app
I use these settings for console and file-based logging. You can remove stdout if you don't want/need console output. Just add your classes to the corresponding list.
log4j = {
    def loggerPattern = '%d %-5p >> %m%n'
    def errorClasses = [] // add more classes if needed
    def infoClasses = ['grails.app.controllers.myController'] // add more classes if needed
    def debugClasses = [] // add more classes if needed
    appenders {
        console name: 'stdout', layout: pattern(conversionPattern: loggerPattern)
        rollingFile name: "file", maxFileSize: 1024, file: "./tmp/logs/logger.log", layout: pattern(conversionPattern: loggerPattern)
    }
    error stdout: errorClasses, file: errorClasses
    info stdout: infoClasses, file: infoClasses
    debug stdout: debugClasses, file: debugClasses
}
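With the config above, a class listed in infoClasses logs at info level to both appenders. A hypothetical usage sketch (the controller and message are invented; the logger name Grails assigns the class has to match the 'grails.app.controllers.myController' entry in infoClasses):

// grails-app/controllers/MyController.groovy -- hypothetical example
class MyController {
    def index() {
        // 'log' is injected by Grails; if this class's logger name matches an
        // entry in infoClasses, the message goes to both stdout and
        // ./tmp/logs/logger.log
        log.info "index action reached"
        render "ok"
    }
}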