REP-57054: 'In Process Job Terminated' error while executing Oracle Reports

When I execute an Oracle Report I get the above-mentioned error.
I am using a query with three formula columns and generating XML output for an RTF template.
All formula columns compiled successfully. How do I resolve this issue?

A workaround is to set cacheSize to 50 in $INST_TOP/ora/10.1.2/reports/conf/rwbuilder.conf while waiting for the fix.
When cacheSize is 0 in the server configuration file, Cache.manage() removes the output files in the cache directory after a request finishes successfully, but a non-zero value disables this cache clean-up functionality.
For more details, check Oracle Doc ID 1237834.1.
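For reference, the cache entry in rwbuilder.conf is an XML element that looks roughly like the sketch below; the class attribute and neighbouring properties vary by version, so only the cacheSize value needs changing:

<cache class="oracle.reports.cache.RWCache">
   <property name="cacheSize" value="50"/>
</cache>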

Related

SSIS ODBC Simba - Error when accessing table list on ODBC Source / Destination

I'm using Simba ODBC to create a connection with Google BigQuery and using SSIS (Visual Studio 2019) to read and write information on BigQuery. The connection works fine, and when I use the ODBC Source with the query option, I'm able to get data from BigQuery and use it inside SSIS. But when I use the list of tables, I get an error as below:
Exception from HRESULT: 0xC0014020
Error in Data Flow Task[ODBC Source [100]]: SQLSTATE: 42000, Message: [Simba][BigQuery] (70) Invalid query: Invalid dataset ID ""TEST"". Dataset IDs must be alphanumeric (plus underscores and dashes) and must be at most 1024 characters long.
I believe this happens because the table names in the list appear between double quotes (") instead of backticks (`).
[Screenshot: table list as shown in the ODBC Source editor]
The same happens when I use the ODBC Destination. Is there a way to change the format in which the table list appears?
Note: in Visual Studio 2015 this table list comes with backticks (`) and I can connect to BigQuery just fine.
I can see that the tool is sending "TEST" as the dataset; however, depending on whether Visual Studio is using StandardSQL or LegacySQL, the dataset should be specified as:
# LegacySQL
FROM [myproject:TEST.TABLE_TEST]
# StandardSQL
FROM `myproject.TEST.TABLE_TEST`
I was wondering if Visual Studio accepts a custom query or can be parameterized to remove the quotes. If this doesn't help, could you please share the query that causes the error? I understand that there is a query option (I'm not familiar with Visual Studio), and it is not clear to me at which exact moment the tool returns the error; a screenshot without sensitive information would be appreciated.
UPDATE:
You can review the following checkpoints that could help to verify that the Simba driver is correctly set up and is not the cause of the reported error:
Installation. Check that you are using the latest version of the driver. The latest version usually contains improvements to the driver.
ODBC Configuration. For example, at Step 13 of the linked guide you will see a drop-down list with the available datasets and can select one as the default. If you don't have issues in this step, then the issue could be in the tool that uses the ODBC connection.
Language Dialect. Here you can change between StandardSQL and LegacySQL as needed; for example, you can force your tool to use LegacySQL and use the [ and ] characters that I explained above.
Connection String. If your tool allows using a connection string, you might want to use it, explicitly indicating the default dataset (among other driver options); see the sketch below.
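Purely as an illustration, such a connection string might look like the line below. The key names (Catalog, DefaultDataset, SQLDialect, OAuthMechanism) are my recollection of the Simba driver's options and may differ by driver version, so verify them against the driver's install guide; SQLDialect=1 selects StandardSQL here.

Driver={Simba ODBC Driver for Google BigQuery};OAuthMechanism=1;Catalog=myproject;DefaultDataset=TEST;SQLDialect=1;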

How to fix conversion errors after importing an SSIS project

I'm importing a perfectly working SSIS project from TFS.
I actually have a problem with all the packages that contain a data flow with a date import.
I get dozens of this error:
Validation error. DFT Get Date ODBC Source CodeDate2 [63]: The OLE DB provider used by the OLE DB adapter cannot convert between types "DT_BYTES" and "DT_DBDATE" for "Date".
and when I click on the ODBC source editor, I have the following message:
The metadata of the following output columns does not match the metadata of the external columns with which the output columns are associated:
Output "ODBC Source Output": "Date"
Do you want to replace the metadata of the output columns with the metadata of the external columns?
The fact is that it works everywhere but on my computer.
Is there an OLE DB provider component I'm lacking, or something like that?
Downgrading will work, but if that's not possible for you, then rewriting your queries may also solve your problem.
In my case I had a Postgres query returning columns of type date. I just converted them all to timestamptz using ::timestamptz. At that point the columns changed from DT_BYTES to DT_DBTIMESTAMP, which was just fine for my purposes.
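A minimal sketch of that kind of rewrite, with a hypothetical table and column:

SELECT order_id,
       created_on::timestamptz AS created_on -- date -> timestamptz, which arrives as DT_DBTIMESTAMP
FROM orders;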
It might be related to the version of Visual Studio or SSDT.
Try installing SSDT 15.8.0 (see SSDT previous releases) and running the package in it.
I once saw similar posts on MSDN after the release of Visual Studio 15.9.2:
Import from Teradata using ODBC gives VS_NEEDSNEWMETADATA error
ODBC Progress datatype problems after updating to VS 2017 15.9
Same here; I forced the type by casting it in the SELECT and it works:
SELECT
[...]
cast(release_date as datetime) as release_date,
[...]
FROM cm_wo

Progress SQL error in SSIS package: buffer too small for generated record

I have an SSIS package which uses a SQL command to get data from a Progress database. Every time I execute the query, it throws this specific error:
ERROR [HY000] [DataDirect][ODBC Progress OpenEdge Wire Protocol driver][OPENEDGE]Internal error -1 (buffer too small for generated record) in SQL from subsystem RECORD SERVICES function recPutLONG called from sts_srtt_t:::add_row on (ttbl# 4, len/maxlen/reqlen = 33/32/33) for . Save log for Progress technical support.
I am running the following query:
Select max(ROWID) as maxRowID from TableA
GROUP BY ColumnA,ColumnB,ColumnC,ColumnD
I've had the same error.
After changing the startup parameters -SQLTempStorePageSize and -SQLTempStoreBuff to 24 and 3000 respectively, the problem was solved.
I think for you the values must be changed to 40 and 20000. A sketch of where these go is below.
You can find more information here. The names of the parameters in that article were a bit different than in my database; it depends on the Progress version which is used.
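If it helps, these are OpenEdge database startup parameters, so they typically go on the server startup command or in its parameter (.pf) file; a sketch with the suggested values (file name and placement are illustrative):

# mydb.pf - illustrative parameter file fragment
-SQLTempStorePageSize 40
-SQLTempStoreBuff 20000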

Import database dump to MySQL using Visual FoxPro

I used Leafe's stru2mysql.prg and vfp2mysql_upload.prg to create a .sql dump file from DBFs. I connect to the MySQL database from VFP using ODBC. I know how to upload the SQL dump file, but I need to automate the whole process, i.e. after creating the dump file, my Visual FoxPro program should upload it without a third party (automatically). I thought of using the SOURCE command, but that needs to be run at the mysql prompt. The assumption here is that my end users don't know how to import (which most of them don't). Please advise on how I can automate importing the SQL file into the MySQL database. Thank you.
I think what you are looking for are the various SQL* functions in FoxPro. See the VFP help or MSDN on the SQLCONNECT() (or SQLSTRINGCONNECT()), SQLEXEC(), and SQLDISCONNECT() functions to get you started. Microsoft provided good examples of each in the documentation.
You may also want to use FILETOSTR() to get the output from Leafe's programs into a string for the SQLEXEC() function.
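Putting those pieces together, a minimal sketch might look like the code below. The driver name, file name, and credentials are illustrative, and the naive split on ";" assumes the dump contains no semicolons inside string literals:

lnConn = SQLSTRINGCONNECT("Driver={MySQL ODBC 8.0 Driver};Server=localhost;Database=mydb;Uid=myuser;Pwd=mypass;")
IF lnConn > 0
   lcDump = FILETOSTR("dump.sql")  && read Leafe's generated dump into a string
   FOR lnI = 1 TO GETWORDCOUNT(lcDump, ";")
      lcStmt = ALLTRIM(GETWORDNUM(lcDump, lnI, ";"))
      IF NOT EMPTY(CHRTRAN(lcStmt, CHR(13) + CHR(10), ""))  && skip blank fragments
         IF SQLEXEC(lnConn, lcStmt) < 0  && a negative return means the statement failed
            =MESSAGEBOX("Statement failed: " + LEFT(lcStmt, 80), "Import Error")
         ENDIF
      ENDIF
   ENDFOR
   =SQLDISCONNECT(lnConn)
ELSE
   =MESSAGEBOX("Unable To Connect To MySQL", "System Message")
ENDIF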
Here are the steps I use to take data from a Visual FoxPro database and upload it to a MySQL database. These are all put into a custom method on a form, which is fired by a command button. For example, the method would be 'uploadnewdata' and I pass parameters for whichever data tables I need.
1) Connect to the server - I use MySQL ODBC.
2) Validate the user (this uses a SQLEXEC() to pull the matching record from the users table):
IF m.WorkingDatabase <> -1
   * Pull the users table from the server into a local cursor
   nRetVal = SQLEXEC(m.WorkingDatabase, "SELECT * FROM users", "csrUsersOnServer")
   SELECT csrUsersOnServer
   SELECT userid FROM csrUsersOnServer ;
      WHERE ALLTRIM(UPPER(userid)) = ALLTRIM(UPPER(lcRanchUser)) ;
      AND ALLTRIM(UPPER(lcPassWord)) = ALLTRIM(UPPER(lchPassWord)) ;
      INTO CURSOR ValidUsers
   IF _TALLY >= 1
      * user validated - continue below
   ELSE
      =MESSAGEBOX("Your Premise ID Does Not Match Any Records On The Server", "System Message")
      RETURN 0
   ENDIF
ELSE
   =MESSAGEBOX("Unable To Connect To Your Database", "System Message")
   RETURN 0
ENDIF
3) Once that is successful, I create my base cursor (this is the one I'm sending from).
4) I then loop through that cursor, creating variables for the values in the fields.
5) Then, using SQLEXEC() and INSERT INTO, I upload each record (see the sketch after these steps).
6) Once the program is finished processing the cursor, it generates a messagebox with the 'finished' message and control returns to the form.
All the user has to do is select the starting table and enter their login information.
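For steps 4 and 5, a sketch of the loop might look like this; the cursor, column, and remote table names are made up, and the ?variable placeholders are sent as ODBC parameters by SQLEXEC():

SELECT csrBase
SCAN
   lcName = csrBase.cname
   lnQty  = csrBase.nqty
   IF SQLEXEC(m.WorkingDatabase, "INSERT INTO remote_table (name, qty) VALUES (?lcName, ?lnQty)") < 0
      =MESSAGEBOX("Insert Failed For The Current Record", "System Message")
   ENDIF
ENDSCAN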

Second RMySQL operation fails - why?

I am running a script that stores different datasets to a MySQL database. This works so far, but only sequentially, e.g.:
# write table1
replaceTable(con,tbl="table1",dframe=dframe1)
# write table2
replaceTable(con,tbl="table2",dframe=dframe2)
If I select both (I use StatET / Eclipse) and run the selection, I get an error:
Error in function (classes, fdef, mtable) :
unable to find an inherited method for function "dbWriteTable",
for signature "MySQLConnection", "data.frame", "data.frame".
I guess this has to do with the fact that my con is still busy or something when the second request is started. When I run the script line by line it works just fine. Hence I wonder: how can I tell R to wait till the first request is finished and only then go ahead? How can I make R scripts interactive (console-only, like the plot examples - no tcl/tk)?
EDIT:
require(RMySQL)
replaceTable <- function(con, tbl, dframe){
  if(dbExistsTable(con, tbl)){
    dbRemoveTable(con, tbl)
    dbWriteTable(con, tbl, dframe)
    cat("Existing database table updated / overwritten.")
  } else {
    dbWriteTable(con, tbl, dframe)
    cat("New database table created")
  }
}
dbWriteTable has two important arguments:
overwrite: a logical specifying whether to overwrite an existing table
or not. Its default is ‘FALSE’.
append: a logical specifying whether to append to an existing table
in the DBMS. Its default is ‘FALSE’.
For past projects I have successfully achieved appending, overwriting, creating, ... of tables with the proper combinations of these; a sketch is below.
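A minimal sketch of that simplification, reusing the con and data frames from the question, so replaceTable() is not needed at all:

# overwrite = TRUE drops and recreates the table in one call
dbWriteTable(con, "table1", dframe1, overwrite = TRUE)
# append = TRUE adds rows to an existing table instead
dbWriteTable(con, "table2", dframe2, append = TRUE)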