When I try to download the datasets within the for loop, I always get this error:
Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") : InternetOpenUrl failed: 'The operation timed out'
I have tried setting the timeout to different values (100, 200, 500, 1000), but it doesn't seem to take effect: the error occurs after the same amount of time regardless of the value I set.
library(dplyr)    # tbl_df(), select(), filter()
library(jsonlite) # fromJSON()

## Get the catalog, keep the dataset ids, and filter out datasets whose
## theme doesn't contain the key words
catalog <- tbl_df(read.csv2("https://datasource.kapsarc.org/explore/download/"))
select(catalog, datasetid, country, data.classification, title, theme, keyword) %>%
  filter(grepl("Demo|Econo|Trade|Industry|Policies|Residential", theme)) %>%
  select(datasetid) -> datasets

data_kapsarc <- list()
base_url <- "https://datasource.kapsarc.org/api/v2/catalog/datasets/population-by-sex-and-age-group/exports/json?rows=-1&start=0&pretty=true&timezone=UTC"
options(timeout = 1000)

## Download each dataset and store it in a list of data frames
for (i in seq_along(datasets$datasetid)) {
  try({
    # swap the example dataset id in base_url for the current one
    url  <- gsub("population-by-sex-and-age-group", datasets[i, 1]$datasetid, base_url)
    temp <- tempfile()
    download.file(url, temp, mode = "wb")
    data_kapsarc[[i]] <- fromJSON(temp)
    unlink(temp)
  }, silent = TRUE)
}
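One workaround I have seen suggested (untested here, just a sketch): on Windows the default wininet download method appears to use its own timeout that options(timeout = ...) does not control, which would explain why changing the value has no effect; the libcurl method does honour it.
## Sketch: same url and temp as in the loop above, forcing the libcurl method
download.file(url, temp, mode = "wb", method = "libcurl")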
I am trying to access rows of a dataframe in Octave that satisfy some criteria.
Let us say the dataframe name is A, with columns 'Date' and 'Close Price'.
Let us say we intend to access the close prices where Close Price is less than 5.
I am using the command:
A.Close(A.Close<5)
I am getting the following error:
error: subsref.m querying name Close returned positions 5
subsref.m querying name returned positions
I saw an instruction video on YouTube where the same command is used, but no error showed up.
I cannot reproduce the problem. Are you sure your dataframe is correctly initialised?
pkg load dataframe
A = dataframe( [1,2;3,4;5,6;7,8], 'colnames', { 'Date', 'Close Price' } );
A.Close( A.Close < 5 )
% ans =
% 2
% 4
I suspect your error may have to do with the fact that your column name is Close_Price, but you tried to index it via 'Close'. Do you have any other columns that start with the word 'Close' by any chance?
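If that is the case, indexing via the full (unambiguous) name should sidestep the partial match; a quick sketch against the example above (assuming the package's space-to-underscore mangling of 'Close Price'):
A.Close_Price( A.Close_Price < 5 )   % full name avoids the partial-match clash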
I am new to BigQuery. I am trying to load data into a GCP BigQuery table which I have created manually, and I have one bash file which contains the bq load command -
bq load --source_format=CSV --field_delimiter=$(printf '\u0001') dataset_name.table_name gs://bucket-name/sample_file.csv
My CSV file contains multiple rows with 16 columns - a sample row is
100563^3b9888^Buckname^https://www.settttt.ff/setlllll/buckkkkk-73d58581.html^Buckcherry^null^null^2019-12-14^23d74444^Reverb^Reading^Pennsylvania^United States^US^40.3356483^-75.9268747
Table schema - (screenshot not reproduced here)
When I execute the bash script file from Cloud Shell, I get the following error -
Waiting on bqjob_r10e3855fc60c6e88_0000016f42380943_1 ... (0s) Current status: DONE
BigQuery error in load operation: Error processing job 'project-name-staging:bqjob_r10e3855fc60c6e88_0000ug00004521': Error while reading data, error message: CSV table encountered too many errors, giving up. Rows: 1; errors: 1. Please look into the errors[] collection for more details.
Failure details:
- gs://bucket-name/sample_file.csv: Error while reading data, error message: CSV table references column position 15, but line starting at position:0 contains only 1 columns.
What would be the solution? Thanks in advance.
You are trying to insert values that do not match the schema you provided.
Based on the table schema and your data example, I ran this command:
./bq load --source_format=CSV --field_delimiter=$(printf '^') mydataset.testLoad /Users/tamirklein/data2.csv
1st error
Failure details:
- Error while reading data, error message: Could not parse '39b888'
as int for field Field2 (position 1) starting at location 0
At this point, I manually removed the b from 39b888, and now I get this:
2nd error
Failure details:
- Error while reading data, error message: Could not parse
'14/12/2019' as date for field Field8 (position 7) starting at
location 0
At this point, I changed 14/12/2019 to 2019-12-14, which is the BQ date format, and now everything is OK:
Upload complete.
Waiting on bqjob_r9cb3e4ef5ad596e_0000016f42abd4f6_1 ... (0s) Current status: DONE
You will need to clean your data before uploading, or load with the --max_bad_records flag (some of the lines will be OK and some not, depending on your data quality).
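If you want the load to proceed while you clean things up, a sketch of that (same dataset, table and bucket names as in the question; the threshold is arbitrary):
bq load --source_format=CSV --field_delimiter='^' --max_bad_records=10 dataset_name.table_name gs://bucket-name/sample_file.csv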
Note: unfortunately there is no way to control the date format during the upload; see this answer as a reference.
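A common workaround (a sketch, assuming you load the raw value into a STRING column first; Field8 as in the error above, table name as in my test):
SELECT PARSE_DATE('%d/%m/%Y', Field8) AS Field8
FROM mydataset.testLoad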
We had the same problem while importing data from a local machine to BigQuery. After researching the data, we saw that some values started with \r or \s.
After applying ua['ColumnName'].str.strip() and ua['District'].str.rstrip(), we could load the data into BigQuery.
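For completeness, a minimal pandas sketch of that cleanup (the DataFrame name ua follows the snippet above; the file names are placeholders):
import pandas as pd

# Read the raw file, strip stray whitespace and \r from every text column,
# and write a cleaned copy for the BigQuery load
ua = pd.read_csv('sample_file.csv')
for col in ua.select_dtypes(include='object').columns:
    ua[col] = ua[col].str.strip()
ua.to_csv('sample_file_clean.csv', index=False)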
Thanks
I have an SSIS package which uses a SQL command to get data from a Progress database. Every time I execute the query, it throws this specific error:
ERROR [HY000] [DataDirect][ODBC Progress OpenEdge Wire Protocol driver][OPENEDGE]Internal error -1 (buffer too small for generated record) in SQL from subsystem RECORD SERVICES function recPutLONG called from sts_srtt_t:::add_row on (ttbl# 4, len/maxlen/reqlen = 33/32/33) for . Save log for Progress technical support.
I am running the following query:
Select max(ROWID) as maxRowID from TableA
GROUP BY ColumnA,ColumnB,ColumnC,ColumnD
I've had the same error.
After changing the startup parameters -SQLTempStorePageSize and -SQLTempStoreBuff to 24 and 3000 respectively, the problem was solved.
I think for you the values must be changed to 40 and 20000.
You can find more information here. The names of the parameters in that article were a bit different than in my database; they depend on the Progress version which is used.
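For illustration, a hedged sketch of how those might look among the database startup parameters (for example in a .pf parameter file; exact placement depends on how your database is started):
-SQLTempStorePageSize 40
-SQLTempStoreBuff 20000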
First, I have searched and searched and searched and not found anything that helps me with this.
I have an SSIS project that will fetch a lot of data from an iSeries AS400 and it does this in two very different steps.
Step 1 works perfectly so I manage to fetch tons of info from the AS400, so the connection itself is not the issue.
Step 2 fails horribly with the following three errors:
[OLE DB Source [41]] Error: There was an error with OLE DB
Source.Outputs[OLE DB Source Output].Columns[NAME] on OLE DB
Source.Outputs[OLE DB Source Output]. The column status returned was: "Text
was truncated or one or more characters had no match in the target code
page.".
[OLE DB Source [41]] Error: The "OLE DB Source.Outputs[OLE DB Source
Output].Columns[NAME]" failed because truncation occurred, and the
truncation row disposition on "OLE DB Source.Outputs[OLE DB Source
Output].Columns[NAME]" specifies failure on truncation. A truncation error
occurred on the specified object of the specified component.
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The
PrimeOutput method on OLE DB Source returned error code 0xC020902A. The
component returned a failure code when the pipeline engine called
PrimeOutput(). The meaning of the failure code is defined by the component,
but the error is fatal and the pipeline stopped executing. There may be
error messages posted before this with more information about the failure.
I have desperately tried to find the solution to this problem, and this is what I have done (none of which has helped at all):
1 - Advanced Editor on SOURCE -> tab: Input and Output Properties -> OLE DB Source Output -> Output Column changed to
a) 40 (from 28) in length - no change
b) data type text (from string) - complete crash
c) changed codepage from 1251 to UTF-8 - no change
2 - Fetched the information with OPENQUERY in SSMS: it works perfectly (a sketch of that check follows after this list).
3 - Screamed in frustration at the screen (didn't help).
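For reference, the OPENQUERY check from point 2 looked something like this (the linked-server, library and table names are placeholders):
SELECT * FROM OPENQUERY(AS400, 'SELECT NAME FROM MYLIB.MYTABLE')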
I am at road's end. I don't know what to do anymore. Help...?
Yes, this is completely maddening.
There are two sets of columns under OLE DB Source Output: "External Columns" and "Output Columns".
Have you tried changing the lengths of both columns - column "Name" under "External Columns" and under "Output Columns"?
This kind of error often happens from a mismatch between the External Column definition and its corresponding Output Column.
In an OLE DB Source, External Columns are supposed to be auto-typed according to the source data types: the external provider is supposed to talk metadata to SSIS, saying "well, this column is typed String(40)", for example. But either the provider or SSIS are often, let's say, "less than entirely competent" at getting the types and lengths right.
UPDATE: Have you tried checking the length of the data in the source, independently of SSIS? Something like:
SELECT MAX(Len(TheReallyAnnoyingColumn)) FROM TheTable
You may find setting the Error Output for Truncation on the Source editor dialog to "Ignore Failure" gets you around the issue.
Update - Truncation Redirect:
I forced truncation on the surname column with the error output set to redirect, enabled a Data Viewer on the error output, and then copied the row from the Data Viewer to Notepad to show the error. Running the same dtsx with truncation set to fail reproduces the failure above.
Everybody else is focused on the truncation. I'm curious about the "one or more characters had no match in the target code page" part of the error message.
How is the column actually defined on the IBM i? I'm particularly interested in the Coded Character Set Identifier (aka CCSID).
In a green screen you can use the Display File Field Description (DSPFFD) command.
You could also use the iNav GUI.
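For example, a sketch of that green-screen check (library and file names are placeholders):
DSPFFD FILE(MYLIB/MYFILE)
The field-level output includes each column's CCSID.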
So I want to import a datetime from a txt:
2015-01-22 09:19:59
into a table using a data flow. I have my Flat File Source and my destination DB set up fine. In the advanced settings and the input and output properties, I changed the data type for that column of the txt input to:
database timestamp [DT_DBTIMESTAMP]
This is the same data type as the DB uses for the table, so this should work.
However, when I execute the package I get an error saying the data conversion failed... How do I make this work?
[Import txt data [1743]] Error: Data conversion failed. The data conversion for column "statdate" returned status value 2 and status text "The value could not be converted because of a potential loss of data.".
[Import txt data [1743]] Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "output column "statdate" (2098)" failed because error code 0xC0209084 occurred, and the error row disposition on "output column "statdate" (2098)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.
[Import txt data [1743]] Error: An error occurred while processing file "C:\Program Files\Microsoft SQL Server\MON_Datamart\Sourcefiles\tbl_L30T1.txt" on data row 14939.
On the row where it gives the error, the datetime is filled with spaces. That is why "allow nulls" is checked on the table, but my SSIS package still gives the error for some reason... Can I tell the package somewhere to allow nulls as well?
I suggest you import the data into a character field and then parse it after entry.
The following function should help you:
SELECT IsDate('2015-01-22 09:19:59')
, IsDate(Current_Timestamp)
, IsDate(' ')
, IsDate('')
The IsDate() function returns a 1 when it thinks the value is a date and a 0 when it is not.
This would allow you to do something like:
SELECT value_as_string
, CASE WHEN IsDate(value_as_string) = 1 THEN
Cast(value_as_string As datetime)
ELSE
NULL
END As value_as_datetime
FROM ...
I solved it myself. Thank you for your suggestion, gvee, but the way I did it is way easier.
In the Flat File Source, when making a new connection, in the Advanced tab I fixed all the data types according to the table in the database EXCEPT the column with the timestamp (in my case it was called "statdate"): I changed this data type to a string. Otherwise my Flat File Source would give me a conversion error before any script could even execute, and the only way around that was setting the error output to ignore failure, which I don't want. (You still have to change the data type after you set it to a string in the advanced settings: right-click the Flat File Source -> Show Advanced Editor -> go to the output columns and change the data type there from date to string.)
After the timestamp was set to a string, I added a Derived Column with this expression to trim all the spaces and replace the empty result with NULL:
TRIM(<YourColumnName>) == "" ? (DT_STR,4,1252)NULL(DT_STR,4,1252) : <YourColumnName>
Next I added a Data Conversion to set the string back to a timestamp. The Data conversion is finally connected to the OLE DB Destination.
I hope this helps anyone with the same problem in the future.
End result: (picture of the data flow)