JMeter throwing 'File never reserved' error - CSV

While running JMeter for data-driven testing, it throws the error below and a big chunk of the data is not pulled in. The CSV file has about 1,500 email ids, and I would like to ramp the test up to a million, but it is failing on the very first attempt.
Any thoughts, please? Appreciate your help.
2019-03-19 23:55:08,169 ERROR o.a.j.c.CSVDataSet: java.io.IOException: File never reserved: C:\Users\sp\Desktop\STM\STM-JMeterTests\emailAdd.csv
2019-03-19 23:55:08,572 INFO o.a.j.t.JMeterThread: Thread is done: emailAdd 1-5
2019-03-19 23:55:08,572 INFO o.a.j.t.JMeterThread: Thread finished: emailAdd 1-5
2019-03-19 23:55:08,594 ERROR o.a.j.c.CSVDataSet: java.io.IOException: File never reserved: C:\Users\sp\Desktop\STM\STM-JMeterTests\emailAdd.csv
2019-03-19 23:55:08,595 ERROR o.a.j.c.CSVDataSet: java.io.IOException: File never reserved: C:\Users\sp\Desktop\STM\STM-JMeterTests\emailAdd.csv
2019-03-19 23:55:08,607 INFO o.a.j.t.JMeterThread: Thread is done: emailAdd 1-4
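As a quick sanity check outside of JMeter (a minimal sketch, assuming the path shown in the log is the file you intend to use), you can confirm the CSV exists, is readable, and holds the expected number of email ids before pointing the CSV Data Set Config at it:
import csv
from pathlib import Path

# Path taken from the JMeter error message above
path = Path(r"C:\Users\sp\Desktop\STM\STM-JMeterTests\emailAdd.csv")
print("exists:", path.is_file())
# Count the rows JMeter would be able to read
with path.open(newline="", encoding="utf-8") as fh:
    rows = list(csv.reader(fh))
print("rows:", len(rows))
If the file reads fine here, it may be worth double-checking the Filename and Sharing mode settings of the CSV Data Set Config element itself.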

Related

GitHub Action actions/upload-artifact@v3 is freezing

Starting today, the upload-artifact part of our Action is failing with this:
Container for artifact "shuttle_23.1.19.293.zip" successfully created. Starting upload of file(s)
Total file count: 1 ---- Processed file #0 (0.0%)
Total file count: 1 ---- Processed file #0 (0.0%)
Total file count: 1 ---- Processed file #0 (0.0%)
Total file count: 1 ---- Processed file #0 (0.0%)
An error has been caught http-client index 0, retrying the upload
Error: write ECONNRESET
at WriteWrap.onWriteComplete [as oncomplete] (node:internal/stream_base_commons:98:16) {
errno: -4077,
code: 'ECONNRESET',
syscall: 'write'
}
Exponential backoff for retry #1. Waiting for 6623 milliseconds before continuing the upload at offset 0
Finished backoff for retry #1, continuing with upload
No idea what this means. And it sometimes DOES work, but mostly it does not.
Anyone else ever see this? I expect upload-artifact@v3 is a pretty common action.

Timeout error when reading a file from Hadoop with PySpark

I want to read a CSV file from Hadoop with PySpark using the following code:
from pyspark.sql import SparkSession  # session setup assumed, not shown in the original snippet
spark = SparkSession.builder.appName("test").getOrCreate()
dfcsv = spark.read.csv("hdfs://my_hadoop_cluster_ip:9000/user/root/input/test.csv")
dfcsv.printSchema()
My Hadoop cluster runs in a Docker container on my local machine, linked with two other slave containers for the workers.
As you can see in this picture from my Hadoop cluster UI, the path is the right one.
But when I submit my script with this command:
spark-submit --master spark://my_cluster_spark_ip:7077 test.py
my script gets stuck on the read, and after a few minutes I get the following error:
22/02/09 15:42:29 WARN TaskSetManager: Lost task 0.1 in stage 4.0 (TID 4) (my_slave_spark_ip executor 1): org.apache.hadoop.net.ConnectTimeoutException: Call From spark-slave1/my_slave_spark_ip to my_hadoop_cluster_ip:9000 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=my_hadoop_cluster_ip/my_hadoop_cluster_ip:9000]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:751)
...
For information, my CSV file is very small, just 3 lines and 64 KB.
Do you have any solution to fix this issue?
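Since the timeout is raised by the executor on spark-slave1, one diagnostic (a sketch, not a fix) is to check from inside that worker container whether it can open a TCP connection to the NameNode address and port from the error message at all:
import socket

# Run from inside the Spark worker container; replace the placeholder host
# with the real cluster IP from the error message. This only tests whether
# the NameNode RPC port (9000) is reachable from there.
namenode = ("my_hadoop_cluster_ip", 9000)
try:
    with socket.create_connection(namenode, timeout=20):
        print("NameNode port is reachable from this host")
except OSError as exc:
    print(f"cannot reach {namenode[0]}:{namenode[1]}: {exc}")
If the connection fails from the worker while it works from the machine you submit from, the problem is likely Docker networking between the Spark workers and the Hadoop container rather than the CSV itself.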

Py4JError while converting a CSV file to parquet using Jupyter Notebook

I want to convert a CSV file to a parquet file using Jupyter Notebook and Python 3. However, I get the following error:
Py4JJavaError Traceback (most recent call last)
Py4JJavaError: An error occurred while calling o40.parquet.
: org.apache.spark.SparkException: Job aborted.
at ...
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows.
at ...
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
Caused by: java.net.SocketException: Connection reset by peer: socket write error
How can I resolve it, please?
Make sure you have the Hadoop binaries available and HADOOP_HOME is set.
If not, download them from here.
Then set HADOOP_HOME:
import os

# HADOOP_HOME must point at a directory containing the Hadoop binaries
# (on Windows, bin\winutils.exe); JAVA_HOME at a JDK installation.
os.environ['HADOOP_HOME'] = r"C:\hadoop-2.7.1"
os.environ["JAVA_HOME"] = r"C:\Program Files\Java\jdk1.8.0_212"
Then save the parquet file again.
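For reference, a minimal CSV-to-parquet conversion (a sketch; the paths and app name below are placeholders, not the asker's actual files) once HADOOP_HOME and JAVA_HOME are set:
from pyspark.sql import SparkSession

# Read the CSV and write it back out as parquet; run this after setting the
# environment variables above so Spark can find the Hadoop binaries.
spark = SparkSession.builder.appName("csv-to-parquet").getOrCreate()
df = spark.read.csv("input.csv", header=True, inferSchema=True)
df.write.mode("overwrite").parquet("output_parquet")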

Chrome browser errors and warnings in debug.log file

[0111/073306:ERROR:exception_snapshot_win.cc(87)] thread ID 3020 not found in process
[0111/073306:WARNING:crash_report_exception_handler.cc(61)] ProcessSnapshotWin::Initialize failed
[0111/073307:ERROR:exception_snapshot_win.cc(87)] thread ID 3812 not found in process
[0111/073307:WARNING:crash_report_exception_handler.cc(61)] ProcessSnapshotWin::Initialize failed
[0111/073307:ERROR:exception_snapshot_win.cc(87)] thread ID 1424 not found in process
[0111/073307:WARNING:crash_report_exception_handler.cc(61)] ProcessSnapshotWin::Initialize failed
[0111/073307:ERROR:exception_snapshot_win.cc(87)] thread ID 968 not found in process
[0111/073307:WARNING:crash_report_exception_handler.cc(61)] ProcessSnapshotWin::Initialize failed
[0111/073307:ERROR:exception_snapshot_win.cc(87)] thread ID 1668 not found in process
[0111/073307:WARNING:crash_report_exception_handler.cc(61)] ProcessSnapshotWin::Initialize failed
[0111/073307:ERROR:exception_snapshot_win.cc(87)] thread ID 600 not found in process
[0111/073307:WARNING:crash_report_exception_handler.cc(61)] ProcessSnapshotWin::Initialize failed
[0111/073307:ERROR:exception_snapshot_win.cc(87)] thread ID 1904 not found in process
[0111/073307:WARNING:crash_report_exception_handler.cc(61)] ProcessSnapshotWin::Initialize failed
[0111/073307:ERROR:exception_snapshot_win.cc(87)] thread ID 3860 not found in process
[0111/073307:WARNING:crash_report_exception_handler.cc(61)] ProcessSnapshotWin::Initialize failed
[0111/073307:ERROR:exception_snapshot_win.cc(87)] thread ID 3844 not found in process
[0111/073307:WARNING:crash_report_exception_handler.cc(61)] ProcessSnapshotWin::Initialize failed
I received the above in a debug.log file while looking at a process that runs in Chrome and had failed. I use the Tampermonkey browser extension to run code that logs in to a web page and starts a process. The web page then remains open and the process continues to run. If the web page is disrupted, the process ends.
I am using the most up-to-date version of the Tampermonkey extension. I am not sure what these log entries are telling me. Does anyone with experience know what they are saying? I do not believe it to be an issue with the code that is used to start the process, but rather something that crashed in Chrome.
Chrome version: Version 55.0.2883.87 m

Using DBOutputFormat to write data to MySQL causes IOException

Recently I have been learning MapReduce and using it to write data to a MySQL database. There are two ways to do so: DBOutputFormat and Sqoop. I tried the first one (refer to here), but encountered a problem; the error follows:
...
16/05/25 09:36:53 INFO mapred.LocalJobRunner: 3 / 3 copied.
16/05/25 09:36:53 INFO mapred.LocalJobRunner: reduce task executor complete.
16/05/25 09:36:53 WARN output.FileOutputCommitter: Output Path is null in cleanupJob()
16/05/25 09:36:53 WARN mapred.LocalJobRunner: job_local1404930626_0001
java.lang.Exception: java.io.IOException
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.io.IOException
at org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.getRecordWriter(DBOutputFormat.java:185)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.<init>(ReduceTask.java:540)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:614)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
16/05/25 09:36:54 INFO mapreduce.Job: Job job_local1404930626_0001 failed with state FAILED due to: NA
16/05/25 09:36:54 INFO mapreduce.Job: Counters: 38
File System Counters
FILE: Number of bytes read=32583
FILE: Number of bytes written=796446
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=402
HDFS: Number of bytes written=0
HDFS: Number of read operations=18
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
...
When I manually use JDBC to connect and insert data, it succeeds. I also notice that the map/reduce task executors complete, yet the job still hits the IOException, so I guess the problem is database-related.
My code is here. I would appreciate it if someone could help me figure out what the problem is.
Thanks in advance!