What is causing this error when I call apoc.load.csv?

CALL apoc.load.csv('FILE:///Neo4j_AttributeProvenance.csv',{sep:","})
The error is:
Failed to invoke procedure apoc.load.csv: Caused by:
java.lang.RuntimeException: Can't read CSV from URL
file:/Neo4j_AttributeProvenance.csv
I am able to read from this CSV file using this statement:
LOAD CSV WITH HEADERS FROM "FILE:///Neo4j_AttributeProvenance.csv" AS CSVLine CREATE (q:car { })
I have already added this line to the neo4j.conf:
apoc.import.file.enabled=true
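One thing worth checking, sketched below on the assumption that the CSV sits in the Neo4j import directory: a commonly reported cause is that APOC resolves file:/// URLs itself unless it is told to use the Neo4j configuration, and Neo4j must be restarted after editing neo4j.conf. The use_neo4j_config line is an assumption about your APOC version, not something confirmed by the error above.

# neo4j.conf -- restart Neo4j after changing these
apoc.import.file.enabled=true
# Assumption: with this set, file:/// URLs resolve against the import directory
apoc.import.file.use_neo4j_config=true
dbms.directories.import=import

With the file placed in that import directory, the original call should work unchanged:

CALL apoc.load.csv('file:///Neo4j_AttributeProvenance.csv', {sep: ','})
YIELD map RETURN map LIMIT 5;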

Related

AppFlow update_flow error: Destination object for the destination connector can not be updated

Has anyone faced this error while calling update_flow for AppFlow?
errorMessage": "An error occurred (ValidationException) when calling the UpdateFlow operation: Update Flow request failed due to:[Destination object for the destination connector can not be updated]",
"errorType": "ValidationException",
"stackTrace": [
" File "/var/task/lambda_function.py", line 7, in lambda_handler\n response = client.update_flow (\n",
" File "/var/runtime/botocore/client.py", line 357, in _api_call\n return self._make_api_call(operation_name, kwargs)\n",
" File "/var/runtime/botocore/client.py", line 676, in _make_api_call\n raise error_class(parsed_response, operation_name)\n"
What could be the cause?
This is resolved: AppFlow doesn't allow changing the destination folders via update_flow; you have to set the destination exactly the same as in the existing flow.
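A minimal sketch of that workaround in the Lambda, reusing the existing destination config verbatim; describe_flow/update_flow and the response keys below are the standard boto3 AppFlow calls, but 'my-flow' is a placeholder:

import boto3

client = boto3.client('appflow')

# Fetch the current flow definition so the destination can be echoed back unchanged.
existing = client.describe_flow(flowName='my-flow')

# update_flow rejects any change to the destination object, so pass it through
# verbatim and only modify the parts that are allowed to change (e.g. tasks).
response = client.update_flow(
    flowName='my-flow',
    triggerConfig=existing['triggerConfig'],
    sourceFlowConfig=existing['sourceFlowConfig'],
    destinationFlowConfigList=existing['destinationFlowConfigList'],  # unchanged
    tasks=existing['tasks'],
)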

Neo4j apoc.export.csv permission denied error

I am trying to use apoc.export.csv
My query is as follows:
call apoc.export.csv.query("MATCH (a) Return id(a) limit 5", "test.csv", {} )
but I get this error:
Failed to invoke procedure 'apoc.export.csv.query': Caused by: java.io.FileNotFoundException: test.csv (Permission denied)
I have the following set in neo4j.conf
apoc.export.file.enabled=true
dbms.security.procedures.whitelist=apoc.export.*
Any hints would be appreciated.
Exporting to /tmp/filename worked!
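For reference, the call that matches that resolution, writing to a directory the Neo4j process can definitely write to (adjust the path for your platform; depending on your APOC version you may also need to restart Neo4j after changing the config):

CALL apoc.export.csv.query("MATCH (a) RETURN id(a) LIMIT 5", "/tmp/test.csv", {})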

Data Migration From SQL Server to MySQL

I am getting this AttributeError: 'NoneType' object has no attribute 'split' when I try to migrate a SQL Server database to a MySQL database in MySQL Workbench.
These are the log details:
Starting...
Connect to source DBMS...
- Connecting to source...
Connecting to Mssql#sa...
Opening ODBC connection to Driver=sa;DATABASE=;UID=sa;PWD=XXXX...
Connected to Mssql# 11.0.2100.60
Traceback (most recent call last):
File "/Applications/MySQLWorkbench.app/Contents/PlugIns/db_mssql_grt.py", line 147, in connect
_connections[connection.__id__]["version"] = getServerVersion(connection)
File "/Applications/MySQLWorkbench.app/Contents/PlugIns/db_mssql_grt.py", line 174, in getServerVersion
ver_parts = [ int(part) for part in ver_string.split('.') ] + 4*[ 0 ]
AttributeError: 'NoneType' object has no attribute 'split'
Traceback (most recent call last):
File "/Applications/MySQLWorkbench.app/Contents/PlugIns/db_mssql_grt.py", line 174, in getServerVersion
ver_parts = [ int(part) for part in ver_string.split('.') ] + 4*[ 0 ]
AttributeError: 'NoneType' object has no attribute 'split'
Traceback (most recent call last):
File "/Applications/MySQLWorkbench.app/Contents/Resources/libraries/workbench/wizard_progress_page_widget.py", line 65, in run
self.func()
File "/Applications/MySQLWorkbench.app/Contents/PlugIns/migration_source_selection.py", line 406, in task_connect
raise e
SystemError: AttributeError("'NoneType' object has no attribute 'split'"): error calling Python module function DbMssqlRE.getServerVersion
*** ERROR: Error during Connect to source DBMS: AttributeError("'NoneType' object has no attribute 'split'"): error calling Python module function DbMssqlRE.getServerVersion
Traceback (most recent call last):
File "/Applications/MySQLWorkbench.app/Contents/Resources/libraries/workbench/wizard_progress_page_widget.py", line 543, in update_status
task.run()
File "/Applications/MySQLWorkbench.app/Contents/Resources/libraries/workbench/wizard_progress_page_widget.py", line 80, in run
raise e
SystemError: AttributeError("'NoneType' object has no attribute 'split'"): error calling Python module function DbMssqlRE.getServerVersion
*** ERROR: Exception in task 'Connect to source DBMS': SystemError('AttributeError("\'NoneType\' object has no attribute \'split\'"): error calling Python module function DbMssqlRE.getServerVersion',)
Failed
I tried the solution given below, taken from https://bugs.mysql.com/bug.php?id=66030, but it didn't help; I still get the same error. Please help me.
Solution (quoted from the bug report):
"We'll need some help from you to diagnose this one. With a text editor, open the /Applications/MySQLWorkbench.app/Contents/PlugIns/db_mssql_grt.py file, and around line 174 you'll find a line that looks like:
ver_string = execute_query(connection, "SELECT SERVERPROPERTY('ProductVersion')").fetchone()[0]
Change that to:
ver_string = execute_query(connection, "SELECT CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR)").fetchone()[0]
Then save and retry. Thanks!"
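If the CAST patch alone doesn't help, here is a slightly more defensive sketch of the same getServerVersion lines; the '0.0.0.0' fallback is an assumption of mine, not part of the official patch, and merely keeps the split from ever seeing None:

# db_mssql_grt.py, around line 174 (a sketch, not the shipped code)
ver_string = execute_query(connection, "SELECT CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR)").fetchone()[0]
if ver_string is None:
    ver_string = '0.0.0.0'  # assumed fallback so split('.') below cannot hit None
ver_parts = [int(part) for part in ver_string.split('.')] + 4 * [0]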

How to configure an rsyslog template for exception errors for remote logging?

I'm using rsyslog to ship logs to a remote Logstash server, and Logstash on that server expects input in JSON format. How can I configure an rsyslog template to JSON-ify an exception? For example, I want to send the following exception as a single message.
2017-02-08 21:59:51,727 ERROR :localhost-startStop-1 [jdbc.sqlonly] 1. PreparedStatement.executeBatch() batching 1 statements:
1: insert into CR_CLUSTER_REGISTRY (Cluster_Name, Url, Update_Dttm, Node_Id) values ('customer', 'rmi://ip-10-53-123.123.eu-west-1.compute.internal:1199/2', '02/08/2017 21:59:51.639', '2')
java.sql.BatchUpdateException: [Teradata JDBC Driver] [TeraJDBC 15.00.00.35] [Error 1338] [SQLState HY000] A failure occurred while executing a PreparedStatement batch request. Details of the failure can be found in the exception chain that is accessible with getNextException.
at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeBatchUpdateException(ErrorFactory.java:148)
at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeBatchUpdateException(ErrorFactory.java:137)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeBatchDMLArray(TDPreparedStatement.java:272)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeBatch(TDPreparedStatement.java:2584)
at com.teradata.tal.qes.StatementProxy.executeBatch(StatementProxy.java:186)
at net.sf.log4jdbc.StatementSpy.executeBatch(StatementSpy.java:539)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:266)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:167)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1028)
at com.teradata.tal.common.persistence.dao.SessionWrapper.flush(SessionWrapper.java:920)
at com.teradata.trm.common.persistence.dao.DaoImpl.save(DaoImpl.java:263)
at com.teradata.trm.common.service.AbstractService.save(AbstractService.java:509)
at com.teradata.trm.common.cluster.Cluster.init(Cluster.java:413)
at com.teradata.trm.common.cluster.NodeConfiguration.initialize(NodeConfiguration.java:182)
at com.teradata.trm.common.context.Initializer.onApplicationEvent(Initializer.java:73)
at com.teradata.trm.common.context.Initializer.onApplicationEvent(Initializer.java:30)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:97)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:324)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:929)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:467)
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:385)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:284)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:111)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4973)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5467)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:632)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1247)
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1898)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: [Teradata Database] [TeraJDBC 15.00.00.35] [Error -2801] [SQLState 23000] Duplicate unique prime key error in CIM_META.CR_CLUSTER_REGISTRY.
at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDatabaseSQLException(ErrorFactory.java:301)
at com.teradata.jdbc.jdbc_4.statemachine.ReceiveInitSubState.action(ReceiveInitSubState.java:114)
at com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.subStateMachine(StatementReceiveState.java:311)
at com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.action(StatementReceiveState.java:200)
at com.teradata.jdbc.jdbc_4.statemachine.StatementController.runBody(StatementController.java:137)
at com.teradata.jdbc.jdbc_4.statemachine.PreparedBatchStatementController.run(PreparedBatchStatementController.java:58)
at com.teradata.jdbc.jdbc_4.TDStatement.executeStatement(TDStatement.java:387)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeBatchDMLArray(TDPreparedStatement.java:252)
... 37 more
I have the following rsyslog configuration. The startmsg.regex flags the start of a new message when it sees the "YYYY-mm-dd" date format; until it sees that format again, any text is treated as part of the current message.
input(type="imfile"
File="/usr/share/tomcat/dist/logs/trm-error.log*"
Facility="local3"
Tag="trm-error:"
Severity="error"
startmsg.regex="^[[:digit:]]{4}-[[:digit:]]{2}-[[:digit:]]{2}"
escapeLF="on"
)
if $programname == 'trm-error:' then {
action(
type="omfwd"
Target="10.53.234.234"
Port="5514"
Protocol="udp"
template="textLogTemplate"
)
stop
}
...and the following template:
# Template for non-JSON logs; just sends the message wholesale with some
# extra furniture.
template(name="textLogTemplate" type="list") {
constant(value="{ ")
constant(value="\"type\":\"")
property(name="programname")
constant(value="\", ")
constant(value="\"host\":\"")
property(name="hostname")
constant(value="\", ")
constant(value="\"timestamp\":\"")
property(name="timestamp" dateFormat="rfc3339")
constant(value="\", ")
constant(value="\"#version\":\"1\", ")
constant(value="\"customer\":\"customer\", ")
constant(value="\"role\":\"app2\", ")
constant(value="\"sourcefile\":\"")
property(name="$!metadata!filename")
constant(value="\", ")
constant(value="\"message\":\"")
property(name="rawmsg" format="json")
constant(value="\"}\n")
}
However, Logstash complains about a "_jsonparsefailure" when it tries to parse the message as JSON. Any clues?
The rsyslog configuration above turned out to be correct; the Java exception log is indeed wrapped into valid JSON. Logstash, however, still complains about a _jsonparsefailure, so this problem is probably on the Logstash (Ruby) side, not the rsyslog side.
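For anyone debugging the Logstash side, a minimal sketch of what the receiving input would need, assuming a plain UDP listener: the port matches the omfwd action above, and the json codec is the detail to verify, since without it the payload is treated as plain text:

input {
  udp {
    port => 5514
    codec => "json"
  }
}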

Exception while trying to get the JMeter directory

I would like to start a JMeter load test from the console, but it's a data-driven load test, so I need to read some information from CSV files. I found a suggestion to add a User Parameters row that resolves the path to the directory the script was launched from:
${__BeanShell(newFile(org.apache.jmeter.gui.GuiPackage.getInstance().getTestPlanFile().toString()).getParent())}
but I got this error in the logs:
2013/06/11 15:23:54 ERROR - jmeter.util.BeanShellInterpreter: Error
invoking bsh method: eval Sourced file: inline evaluation of:
``newFile(org.apache.jmeter.gui.GuiPackage.getInstance().getTestPlanFile().toStrin
. . . '' : Command not found: newFile( java.lang.String )
2013/06/11 15:23:54 WARN - jmeter.functions.BeanShell: Error running
BSH script org.apache.jorphan.util.JMeterException: Error invoking bsh
method: eval Sourced file: inline evaluation of:
``newFile(org.apache.jmeter.gui.GuiPackage.getInstance().getTestPlanFile().toStrin
. . . '' : Command not found: newFile( java.lang.String ) at
org.apache.jmeter.util.BeanShellInterpreter.bshInvoke(BeanShellInterpreter.java:192)
What's wrong with this method?
The issue is that you have:
newFile
instead of:
new File
BeanShell reads newFile(...) as a call to an undefined method rather than a constructor invocation, hence "Command not found: newFile( java.lang.String )".
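So the corrected expression, with just that one space added, would be:

${__BeanShell(new File(org.apache.jmeter.gui.GuiPackage.getInstance().getTestPlanFile().toString()).getParent())}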