How to insert new records into a peer and extract those records? - hyperledger-fabric-sdk-js

I am getting an error while fetching data from a peer in Fabric and am unable to resolve the issue.
I tried to insert a new record on one of the peers I created. When I entered the command I received no response and the cursor simply moved to the next prompt, and when I then fetch the details of that new record I get this error:
Error: endorsement failure during query. response: status:500 message:"transaction returned with failure: Error: CAR10 does not exist"
I used the following query to fetch the record details:
peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"args":["querycar","CAR10"]}'
My task is to insert a new record through a peer and fetch its details, and the data should also be replicated to all of the peers I have created.

The error simply means that there is no car with the ID CAR10.
1. If you have just initialized the chaincode and queried it directly, the sample cars added to the ledger only go up to CAR9, so query with CAR9 as the key.
2. If not, add CAR10 to the ledger using the createCar function in the chaincode and then query it; a sketch of the invoke is shown below.
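A minimal sketch of that invoke with the peer CLI, assuming a fabcar-style createCar(id, make, model, colour, owner) signature and that $CHANNEL_NAME holds your channel name (adjust the function name, arguments, and connection flags to your network):
# invoke createCar to add CAR10 to the ledger
peer chaincode invoke -C $CHANNEL_NAME -n mycc -c '{"args":["createCar","CAR10","Honda","Accord","Black","Tom"]}'
# then fetch the new record back
peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"args":["querycar","CAR10"]}'
Because the invoke is ordered and committed to the channel ledger, the new record ends up on every peer joined to the channel, which also covers the requirement that the data be inserted on all peers.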

Related

Error while reading data, error message: CSV table references column position 15, but line starting at position:0 contains only 1 columns

I am new to BigQuery. I am trying to load data into a GCP BigQuery table that I created manually. I have a bash file which contains this bq load command:
bq load --source_format=CSV --field_delimiter=$(printf '\u0001') dataset_name.table_name gs://bucket-name/sample_file.csv
My CSV file contains multiple rows with 16 columns; a sample row is:
100563^3b9888^Buckname^https://www.settttt.ff/setlllll/buckkkkk-73d58581.html^Buckcherry^null^null^2019-12-14^23d74444^Reverb^Reading^Pennsylvania^United States^US^40.3356483^-75.9268747
Table schema - (shared as a screenshot, not reproduced here)
When I execute the bash script from Cloud Shell, I get the following error:
Waiting on bqjob_r10e3855fc60c6e88_0000016f42380943_1 ... (0s) Current status: DONE
BigQuery error in load operation: Error processing job 'project-name-staging:bqjob_r10e3855fc60c6e88_0000ug00004521': Error while reading data, error message: CSV table encountered too many errors, giving up. Rows: 1; errors: 1. Please look into the errors[] collection for more details.
Failure details:
- gs://bucket-name/sample_file.csv: Error while reading data, error message: CSV table references column position 15, but line starting at position:0 contains only 1 columns.
What would be the solution? Thanks in advance.
You are trying to insert values that do not match the schema you provided. Note also that your load command sets --field_delimiter to \u0001 while your sample row is delimited with ^; that mismatch is why BigQuery sees only 1 column per line.
Based on the table schema and your data example, I ran this command (with ^ as the delimiter):
./bq load --source_format=CSV --field_delimiter=$(printf '^') mydataset.testLoad /Users/tamirklein/data2.csv
1st error:
Failure details:
- Error while reading data, error message: Could not parse '39b888' as int for field Field2 (position 1) starting at location 0
At this point, I manually removed the b from 39b888, and now I get this:
2nd error:
Failure details:
- Error while reading data, error message: Could not parse '14/12/2019' as date for field Field8 (position 7) starting at location 0
At this point, I changed 14/12/2019 to 2019-12-14, which is the BigQuery date format, and now everything is OK:
Upload complete.
Waiting on bqjob_r9cb3e4ef5ad596e_0000016f42abd4f6_1 ... (0s) Current status: DONE
You will need to clean your data before uploading, or allow some rows to fail with the --max_bad_records flag (some of the lines will load and some will not, depending on your data quality).
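For example, a load that tolerates up to 10 unparseable rows could look like this (the dataset, table, and file names are placeholders):
# allow up to 10 bad rows instead of failing the whole job
bq load --source_format=CSV --field_delimiter=$(printf '^') --max_bad_records=10 mydataset.testLoad gs://bucket-name/sample_file.csv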
Note: unfortunately there is no way to control the date format during the upload; see this answer as a reference.
We had the same problem while importing data from local files into BigQuery. After inspecting the data, we saw that some values started with \r or whitespace characters.
After applying ua['ColumnName'].str.strip() and ua['District'].str.rstrip(), we could add the data to BigQuery.
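A minimal sketch of that cleanup, assuming the data is read into a pandas DataFrame named ua (the file name, delimiter, and column names here are illustrative):
import pandas as pd
# read the raw file
ua = pd.read_csv('sample_file.csv', sep='^')
# strip stray \r and whitespace before loading into BigQuery
ua['ColumnName'] = ua['ColumnName'].str.strip()  # leading and trailing
ua['District'] = ua['District'].str.rstrip()     # trailing only
ua.to_csv('sample_file_clean.csv', sep='^', index=False)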
Thanks

Progress SQL error in SSIS package: buffer too small for generated record

I have an SSIS package which uses a SQL command to get data from a Progress database. Every time I execute the query, it throws this specific error:
ERROR [HY000] [DataDirect][ODBC Progress OpenEdge Wire Protocol driver][OPENEDGE]Internal error -1 (buffer too small for generated record) in SQL from subsystem RECORD SERVICES function recPutLONG called from sts_srtt_t:::add_row on (ttbl# 4, len/maxlen/reqlen = 33/32/33) for . Save log for Progress technical support.
I am running the following query:
Select max(ROWID) as maxRowID from TableA
GROUP BY ColumnA,ColumnB,ColumnC,ColumnD
I've had the same error.
After changing the startup parameters -SQLTempStorePageSize and -SQLTempStoreBuff to 24 and 3000 respectively, the problem was solved. I think for you the values should be changed to 40 and 20000.
You can find more information here. The parameter names in that article were a bit different from those in my database; they depend on the Progress version being used.
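As a sketch, assuming the database is started from the command line with proserve (the database name is a placeholder; in other setups these parameters go into the .pf file or the broker configuration instead):
# raise the SQL temp-store page size and buffer count at server startup
proserve mydb -SQLTempStorePageSize 40 -SQLTempStoreBuff 20000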

Timeout error when downloading large files with URL

When I try to download the datasets within the for loop, I always get this error:
Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") : InternetOpenUrl failed: 'The operation timed out'
I have tried changing the timeout to different values (100, 200, 500, 1000), but it doesn't seem to take effect: the error occurs after the same amount of time regardless of what value I set.
## Get catalog and select dataset ids, filtering out data sets that don't contain key words
library(dplyr)     # select/filter pipelines
library(jsonlite)  # fromJSON

catalog <- tbl_df(read.csv2("https://datasource.kapsarc.org/explore/download/"))
select(catalog, datasetid, country, data.classification, title, theme, keyword) %>%
  filter(grepl("Demo|Econo|Trade|Industry|Policies|Residential", theme)) %>%
  select(datasetid) -> datasets

data_kapsarc <- list()
base_url <- "https://datasource.kapsarc.org/api/v2/catalog/datasets/population-by-sex-and-age-group/exports/json?rows=-1&start=0&pretty=true&timezone=UTC"
options(timeout = 1000)

## Download data sets and store them in a list of data frames
for (i in 1:length(datasets$datasetid)) {
  try({
    # swap the sample dataset id in the template URL for the current one
    url <- gsub("population-by-sex-and-age-group", datasets[i, 1]$datasetid, base_url)
    temp <- tempfile()
    download.file(url, temp, mode = 'wb')
    data_kapsarc[[i]] <- fromJSON(temp)
    unlink(temp)
  }, silent = TRUE)
}
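One observation, offered as an assumption rather than something from this thread: the InternetOpenUrl message indicates the download is going through the Windows wininet method, which does not honor options(timeout). Forcing the libcurl method, which does, may be worth trying:
# assumption: libcurl is available in this build of R
download.file(url, temp, mode = 'wb', method = 'libcurl')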

FireDAC DApt 400

I use "XE6" and "MySQL" database. I have only one table. The primary key is auto increment. When I insert a record it generates an error: "[FireDac] [DApt] -400. Fetch command fecthed [0] instead of 1 record. Another user ".
My table has primary key and there is no other connection to the database. Anyone have any idea what that might be?
My code:
AbrirConexao; // open the connection
fQryTransacao.Close;
fQryTransacao.SQL.Text := 'SELECT * FROM APOLICE LIMIT 0'; // no SQL script; returns no rows, just loads the table structure
fQryTransacao.Open;
fQryTransacao.Insert;
CarregarQuery(pApolice, fQryTransacao); // copies the object's values into the query's fields
fQryTransacao.Post; // the error is raised here
EncerrarConexao; // close the connection
I used "TFDMoniFlatFileClientLink" to monitor the SQL commands. The generated LOG file has the generated "insert" command. If I copy and execute directly in "MySQL" does not generate error.
Tracing and Monitoring (FireDAC)

How to get the processes running in SQL Server

I am getting a deadlock problem on a table in our database.
I am getting the following error message:
3/13/2015 11:37:35 AM
System.Data.SqlClient.SqlException: Transaction (Process ID 143) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
at clsdb.clsDB.execcmd(String strsql)
at clspllog.Clspllog.RunXPSPushliveResponse(String pushtype, String journalname, String strvol, String strissue, String articleid, String PushLiveResponseTime, Int32 plstatus, String plstatusmsg, String strerrmsg)
There is a method named RunXPSPushliveResponse that uses the table articleschedule. It was trying to update this table when it got the above error message.
I am not able to tell which other process is using this table, and thus cannot take any action.
Is there some way to find the processes using this table, or some other way to diagnose this? I am totally blank on this issue and, being a fresher, don't have many ideas on how to rectify it. Any help is appreciated.
You can use the queries below to get the information mentioned in the log; this related question also covers it:
SQL Server: Check for all running processes and kill?
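The following is a sketch of such queries (the table name comes from your log; adjust the schema prefix if needed):
-- quick overview of all current sessions and what they are running
EXEC sp_who2;
-- sessions holding or waiting on object-level locks on the articleschedule table
SELECT tl.request_session_id, tl.request_mode, tl.request_status, s.login_name, s.host_name
FROM sys.dm_tran_locks AS tl
JOIN sys.dm_exec_sessions AS s ON s.session_id = tl.request_session_id
WHERE tl.resource_type = 'OBJECT'
  AND tl.resource_associated_entity_id = OBJECT_ID('dbo.articleschedule');
-- a blocking session can then be terminated with, e.g., KILL 143;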
You can also use Activity Monitor, which can help you see all the activity on the server:
Activity Monitor