I am new to Spring Batch and Spring Batch Admin. I am stuck in a scenario where I want to use multiple datasources, i.e. one for the batch metadata and one for the business schema (application tables).
I am using the configuration below in my batch-mysql.properties file.
For the batch metadata tables:
batch.jdbc.driver=com.mysql.jdbc.Driver
batch.jdbc.url=jdbc:mysql://localhost:3306/batch
batch.jdbc.user=root
batch.jdbc.password=root
batch.jdbc.testWhileIdle=true
batch.jdbc.validationQuery=SELECT 1
batch.drop.script=classpath:/org/springframework/batch/core/schema-drop-mysql.sql
batch.schema.script=classpath:/org/springframework/batch/core/schema-mysql.sql
batch.business.schema.script=classpath*:business-schema-mysql.sql
For the application schema:
db.driver=com.mysql.jdbc.Driver
db.url=jdbc:mysql://localhost:3306/applicationschema
db.user=root
db.password=root
If I remove the line below
batch.business.schema.script=classpath*:business-schema-mysql.sql
then I get an exception saying that the property could not be found.
If I keep it as it is, the application tables are created in the batch metadata schema.
Just don't supply any value for the property batch.business.schema.script. Leave it as shown below:
batch.business.schema.script=
Also, if you don't want to drop and create batch meta tables every time your application is started, you should set batch.data.source.init=false in your batch properties file.
EDIT: This is how my batch-oracle.properties file looks:
batch.database.incrementer.class=org.springframework.jdbc.support.incrementer.OracleSequenceMaxValueIncrementer
batch.isolationlevel=READ_COMMITTED
batch.business.schema.script=
batch.data.source.init=false
batch.job.service.reaper.interval=6000
batch.schema.script=classpath:/org/springframework/batch/core/schema-oracle10g.sql
batch.jdbc.url=jdbc:oracle:thin:@localhost:1521:mydb
batch.table.prefix=BATCH_
batch.lob.handler.class=org.springframework.jdbc.support.lob.DefaultLobHandler
batch.verify.cursor.position=true
batch.jdbc.validationQuery=SELECT 1 FROM dual
batch.jdbc.password=mypassword
batch.jdbc.testWhileIdle=false
batch.jdbc.user=user
batch.jdbc.pool.size=5
batch.drop.script=classpath:/org/springframework/batch/core/schema-drop-oracle10g.sql
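If your job also needs to reach the application schema, you can additionally define a second DataSource that reads the db.* properties and inject it into your readers/writers, while the default dataSource built from the batch.jdbc.* properties stays dedicated to the batch meta tables. A minimal sketch (the bean id and the DBCP pool class are only examples, and it assumes a property placeholder is already loading your properties file):

<bean id="businessDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="${db.driver}" />
    <property name="url" value="${db.url}" />
    <property name="username" value="${db.user}" />
    <property name="password" value="${db.password}" />
</bean>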
I'm new to the whole Pimcore thing. I am trying to play around and create classes. The issue is that I am not able to create more than one class: the class is stored in the database without a name, so when I try to create another class, Pimcore also tries to store it with no name, which ends up producing an SQL error saying that there is a duplicate entry. Any ideas what the reason behind this could be?
I installed Pimcore on an nginx server and I am trying to create classes by choosing Settings->Objects->Classes and then "Add Class". Creating the first class was fine: I entered a name for the class and it was successfully added, however the name field in the corresponding database entry is empty, as in an empty string ''. So, when I try to add another class and Pimcore attempts to store it in the "classes" table, it returns an error saying it would be a duplicate entry, since both classes are nameless, i.e. the name I entered isn't saved. The following error is what I managed to find using developer tools; it could be helpful.
[WARN] Unable to parse the JSON returned by the server
minified_javascript_core_f5757da….js?_dc=3708:5684 Error: You're trying to decode an invalid JSON String:
Fatal error: Call to a member function hasChilds() on null in /var/www/html/pimproject/pimcore/modules/admin/controllers/DocumentController.php on line 59
at new Ext.Error (http://192.10.0.0/pimcore/static6/js/lib/ext/ext-all.js?_dc=3708:22:27054)
at Function.Ext.apply.raise (http://192.10.0.10/pimcore/static6/js/lib/ext/ext-all.js?_dc=3708:22:27447)
at Object.Ext.raise (http://192.10.0.10/pimcore/static6/js/lib/ext/ext-all.js?_dc=3708:22:27594)
at Object.Ext.JSON.me.decode (http://192.10.0.10/pimcore/static6/js/lib/ext/ext-all.js?_dc=3708:22:385102)
at Ext.define.onProxyLoad (http://192.10.0.10/website/var/tmp/minified_javascript_core_f5757da9fa29d5bf13e6aa5058eff9f7.js?_dc=3708:5641:28)
at Ext.cmd.derive.triggerCallbacks (http://192.10.0.10/pimcore/static6/js/lib/ext/ext-all.js?_dc=3708:22:594533)
at Ext.cmd.derive.setCompleted (http://192.10.0.10/pimcore/static6/js/lib/ext/ext-all.js?_dc=3708:22:594231)
at Ext.cmd.derive.setException (http://192.10.0.10/pimcore/static6/js/lib/ext/ext-all.js?_dc=3708:22:594444)
at Ext.cmd.derive.process (http://192.10.0.10/pimcore/static6/js/lib/ext/ext-all.js?_dc=3708:22:593638)
at Ext.cmd.derive.processResponse (http://192.10.0.10/pimcore/static6/js/lib/ext/ext-all.js?_dc=3708:22:648303)
Just reinstall Pimcore.
It can be a Composer or submodules error.
For a first installation I strongly recommend running the Demo project (https://github.com/pimcore/demo) rather than the Skeleton, especially if you are using Docker. Later, when you get a feel for Pimcore, feel free to install the Skeleton or any other project.
Pimcore has been working stably for years. If you had some problems before, nowadays it is stable.
I am using syslog-ng to parse some logs that I am receiving via a csv-parser. However, I want to perform insert operations that are a bit more complex than the conventional insert done with the "destination" option in syslog-ng. Currently, my destination into MySQL in my syslog-ng conf file looks like this:
destination d_sql_test {
    sql(
        type(mysql)
        host('<host>')
        username('<user>')
        password('<pass>')
        database('<db_name>')
        table('test')
        columns('col1')
        values('${val1}')
    );
};
However, this simply inserts the contents of val1 into the column col1. I want to be able to specify my insert "logic", as shown in the example in this question.
I am unsure where to actually do this, and whether it is even supported by syslog-ng.
I think you can do this if you can somehow make the decision within syslog-ng.
You could try using an in-list() filter to check whether the username is already listed in a file. If it is not, you can send the log to the MySQL destination, and also to another destination (possibly a program() destination) that updates the file containing the list of users and reloads syslog-ng to refresh the in-list() filter.
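A rough sketch of that approach (the list file path, the source/parser names and the second destination are placeholders; val1 is the csv-parser column from your configuration):

# only send logs whose user is not yet in the list file
filter f_new_user {
    not in-list("/etc/syslog-ng/known_users.list", value("val1"));
};

log {
    source(s_net);
    parser(p_csv);
    filter(f_new_user);
    destination(d_sql_test);
    # e.g. a program() destination, defined elsewhere, that appends
    # the user to the list file and triggers a syslog-ng reload
    destination(d_update_user_list);
};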
Alternatively, you can write a syslog-ng template function in Python that implements the logic and, for example, sets a macro to 1 in the message if it should be sent to the database. Then you can use a filter on this macro in the log path that contains the MySQL destination.
Or you can write a separate destination that does the work in Python: see Writing syslog-ng destinations in Python.
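If you go the Python destination route, a rough sketch could look like the one below; you would reference the class from a python() destination in your configuration. The callback names follow the syslog-ng Python destination interface, but check them against the documentation of your version; pymysql and the insert-or-update statement are assumptions standing in for whatever logic you need.

# custom_sql_dest.py - sketch of a Python destination with custom insert logic
import pymysql

class CustomSqlDestination(object):
    def init(self, options):
        # called once at startup; options come from the syslog-ng config
        self.conn = pymysql.connect(host="<host>", user="<user>",
                                    password="<pass>", database="<db_name>",
                                    autocommit=True)
        return True

    def send(self, msg):
        # msg behaves like a dict of name-value pairs; val1 is the
        # csv-parser column from the question
        val1 = msg["val1"]
        if isinstance(val1, bytes):
            val1 = val1.decode("utf-8")
        with self.conn.cursor() as cur:
            # any insert logic can go here, e.g. insert-or-update
            cur.execute(
                "INSERT INTO test (col1) VALUES (%s) "
                "ON DUPLICATE KEY UPDATE col1 = VALUES(col1)",
                (val1,))
        return True

    def deinit(self):
        self.conn.close()
        return True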
Also, you might want to post this question on the syslog-ng mailing list, where the developers notice it more easily.
I have to move data between two SQL Server DBs. My task is to export the data as text (.dat) files, move the files and import into the destination. I have to migrate over 200 tables.
This is what I tried:
1) I used an Execute SQL Task to fetch my table names.
2) Used a Foreach Loop to loop through the table names from the collection.
3) Used a Script Task inside the Foreach Loop to build the text file destination path.
4) Called a DFT with the table name in a variable for the OLE DB source and the path name in a variable for the flat file destination.
The first table extracts fine, but the second table bombs with a synchronization error. I have seen this in numerous posts but could not find one that matches my scenario, hence posting here.
Even if I get the package to work with multiple DFTs, the second table from the second DFT does not export columns because the flat file connection manager still remembers the first table's columns. Is there a way to get it to forget the columns?
Any thoughts on how I can export multiple tables to multiple text files using one DFT with dynamic source and destination variables?
Thanks, and I appreciate your help.
Unfortunately, the Bulk Insert Task only lets us use format files to map the columns between source and destination. The Bulk Insert Task uses the BULK INSERT T-SQL command to import the data, and to execute it the user needs the BULKADMIN server privilege.
Most companies will not grant the BULKADMIN server privilege for security reasons.
Hence, using a Script Task to construct BCP statements is a good and simple option for the export.
You do not need to construct a .bat file, as the script itself can execute DOS commands, which run under the .NET security account.
I figured out a way to do this. I thought I would share it in case anybody is stuck in the same situation.
So, in summary, I needed to export and import data via files. I also wanted to use a format file if at all possible, for various reasons.
What I did was:
1) Construct a DFT which gets me the list of table names from the DB that I need to export. I used an OLE DB source and a Recordset Destination as the target, and stored the table names inside an object variable.
A DFT is not really necessary; you can do it any other way. Also, in our application, we store the table names in a table.
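For example, the source query that feeds the Recordset Destination can be as simple as the query below (the schema join and the is_ms_shipped filter are just one way to do it):

SELECT s.name + '.' + t.name AS TableName
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE t.is_ms_shipped = 0;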
2) Add a 'Foreach Loop Container' with a 'Foreach ADO Enumerator' which takes my object variable from the previous step into the collection.
3) Parse the variable one table at a time and construct BCP statements like the ones below inside a Script Task. Create variables as necessary. The BCP statement will be stored in a variable.
I loop through the tables and construct multiple BCP statements like this:
BCP "DBNAME.DBO.TABLENAME1" out "PATH\FILENAME2.dat" -S SERVERNAME -T -t"|" -r$\n -f "PATH\filename.fmt"
BCP "DBNAME.DBO.TABLENAME1" out "PATH\FILENAME2.dat" -S SERVERNAME -T -t"|" -r$\n -f "PATH\filename.fmt"
The statements are put inside a .bat file. This is also done inside the script task.
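A simplified, standalone sketch of what that Script Task does: it builds one BCP 'out' command per table and writes them all to a .bat file. The table list, paths and format-file names are placeholders; in the real package they come from SSIS variables.

Imports System.IO
Imports System.Text

Module BuildBcpExport
    Sub Main()
        ' In the package, this list comes from the Foreach ADO Enumerator variable
        Dim tables() As String = {"DBNAME.DBO.TABLENAME1", "DBNAME.DBO.TABLENAME2"}
        Dim sb As New StringBuilder()

        For Each tableName As String In tables
            ' Use the last part of the three-part name as the file name
            Dim fileName As String = tableName.Substring(tableName.LastIndexOf("."c) + 1)
            sb.AppendLine(String.Format( _
                "BCP ""{0}"" out ""PATH\{1}.dat"" -S SERVERNAME -T -t""|"" -r$\n -f ""PATH\{1}.fmt""", _
                tableName, fileName))
        Next

        ' Write all statements into the .bat file that step 4 will execute
        File.WriteAllText("PATH\export_tables.bat", sb.ToString())
    End Sub
End Module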
4) An Execute Process Task will next execute the .bat file. I had to do this because I do not have the option to use the 'master..xp_cmdshell' command or the 'BULK INSERT' command in my company. If I had the option to execute xp_cmdshell, I could have run the command directly from the package.
5) Again, add a 'Foreach Loop Container' with a 'Foreach ADO Enumerator' which takes the same object variable into the collection.
6) Parse the variable one table at a time and construct BCP statements like the ones below inside a Script Task. Create variables as necessary. The BCP statement will be stored in a variable.
I loop through the tables and construct multiple BCP statements like this:
BCP "DBNAME.DBO.TABLENAME1" in "PATH\FILENAME2.dat" -S SERVERNAME -T -t"|" -r$\n -b10000 -f "PATH\filename.fmt"
BCP "DBNAME.DBO.TABLENAME1" in "PATH\FILENAME2.dat" -S SERVERNAME -T -t"|" -r$\n -b10000 -f "PATH\filename.fmt"
The statements are put inside a .bat file. This is also done inside the script task.
The -b10000 option was added so I can import in batches. Without it, many of my large tables could not be copied due to limited space in tempdb.
7) Run the .bat file to import the files.
I am not sure if this is the best solution; I still thought I would share what satisfied my requirement. If my answer is not clear, I would be happy to explain if you have any questions. We can also optimize this solution further. The same could be done purely via VB scripts, but you would have to write some code to do that.
I also created a package configuration file where I can change the DB name, server name, and the data and format file locations dynamically.
Thanks.
Greenplum Database version:
PostgreSQL 8.2.15 (Greenplum Database 4.2.3.0 build 1)
SQL Server Database version:
Microsoft SQL Server 2008 R2 (SP1)
Our current approach:
1) Export each table to a flat file from SQL Server
2) Load the data into Greenplum with pgAdmin III using PSQL Console's psql.exe utility
Benefits...
Speed: OK, but is there anything faster? We load millions of rows of data in minutes
Automation: OK, we call this utility from an SSIS package using a Shell script in VB
Pitfalls...
Reliability: ETL is dependent on the file server to hold the flat files
Security: Lots of potentially sensitive data on the file server
Error handling: It's a problem. psql.exe never raises an error that we can catch even if it does error out and loads no data or a partial file
What else we have tried...
.NET Providers\ODBC Data Provider: We have configured a System DSN using the DataDirect 6.0 Greenplum Wire Protocol driver. Good performance for a DELETE, but awfully slow for an INSERT.
For reference, this is the aforementioned VB script in SSIS...
Public Sub Main()
    Dim v_shell
    Dim v_psql As String
    ' Inner quotes must be doubled ("") inside a VB string literal
    v_psql = """C:\Program Files\pgAdmin III\1.10\psql.exe"" -d ""MyGPDatabase"" -h ""MyGPHost"" -p ""5432"" -U ""MyServiceAccount"" -f ""\\MyFileLocation\SSIS_load\sql_files\load_MyTable.sql"""
    v_shell = Shell(v_psql, AppWinStyle.NormalFocus, True)
End Sub
This is the contents of the "load_MyTable.sql" file...
\copy MyTable from '\\MyFileLocation\SSIS_load\txt_files\MyTable.txt' with delimiter as ';' csv header quote as '"'
If you're getting your data loaded in minutes, then the current method is probably good enough. However, if you find yourself having to load larger volumes of data (terabyte scale, for instance), the usual preferred method for bulk-loading into Greenplum is via gpfdist and corresponding EXTERNAL TABLE definitions. gpload is a decent wrapper that abstracts much of this process and is driven by YAML control files. The general idea is that gpfdist instance(s) are spun up at the location(s) where your data is staged, preferably as CSV text files, and then the EXTERNAL TABLE definition within Greenplum is made aware of the URIs of the gpfdist instances. From the admin guide, a sample definition of such an external table could look like this:
CREATE READABLE EXTERNAL TABLE students (
name varchar(20), address varchar(30), age int)
LOCATION ('gpfdist://<host>:<portNum>/file/path/')
FORMAT 'CUSTOM' (formatter=fixedwidth_in,
name=20, address=30, age=4,
preserve_blanks='on',null='NULL');
The above example expects to read text files whose fields from left to right are a 20-character (at most) string, a 30-character string, and an integer. To actually load this data into a staging table inside GP:
CREATE TABLE staging_table AS SELECT * FROM students;
For large volumes of data, this should be the most efficient method since all segment hosts are engaged in the parallel load. Do keep in mind that the simplistic approach above will probably result in a randomly distributed table, which may not be desirable. You'd have to customize your table definitions to specify a distribution key.
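Since your flat files are already semicolon-delimited CSV with a header row, a sketch closer to your case might look like the following (host, port, file and table names are placeholders, and gpfdist is assumed to be serving the directory that holds the file):

-- external table pointing at the gpfdist instance on the staging host
CREATE READABLE EXTERNAL TABLE ext_mytable (LIKE mytable)
LOCATION ('gpfdist://etl-host:8081/MyTable.txt')
FORMAT 'CSV' (DELIMITER ';' QUOTE '"' HEADER);

-- every segment pulls its share of the file in parallel
INSERT INTO mytable SELECT * FROM ext_mytable;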
I used Leafe's stru2mysql.prg and vfp2mysql_upload.prg to create a .sql dump file from DBFs. I connect to the MySQL database from VFP using ODBC. I know how to upload the SQL dump file manually, but I need to automate the whole process, i.e. after creating the dump file, my Visual FoxPro program should upload it without a third-party tool (automatically). I thought of using the source command, but that needs to be run at the mysql prompt. The assumption here is that my end users don't know how to import (which most of them don't). Please advise on how I can automate the import of the SQL file into the MySQL database. Thank you.
I think what you are looking for are the various SQL* functions in Foxpro. See the VFP help or MSDN on SQLCONNECT (or SQLSTRINGCONNECT), SQLEXEC, and SQLDISCONNECT functions to get you started. Microsoft provided good examples on each in the documentation.
You may also want to use FILETOSTR to get the output from Leafe's programs into a string for the SQLEXEC function.
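A rough sketch of how those pieces fit together; the connection string, the file path and the naive split of the dump on ';' are assumptions (the split will break if your data itself contains semicolons):

* Push a dump file to MySQL via ODBC SQL pass-through
LOCAL lnHandle, lcDump, lnCount, lnI
lnHandle = SQLSTRINGCONNECT("Driver={MySQL ODBC 5.1 Driver};" + ;
    "Server=myserver;Database=mydb;Uid=myuser;Pwd=mypass;")
IF lnHandle < 1
    =MESSAGEBOX("Could not connect to MySQL", "System Message")
    RETURN
ENDIF

* Read the dump produced by vfp2mysql_upload.prg into a string
lcDump = FILETOSTR("c:\exports\mydump.sql")

* Execute the statements one by one; SQLEXEC() runs a single statement
LOCAL ARRAY laStmt[1]
lnCount = ALINES(laStmt, lcDump, 1, ";")
FOR lnI = 1 TO lnCount
    IF NOT EMPTY(laStmt[lnI])
        IF SQLEXEC(lnHandle, laStmt[lnI]) < 0
            =MESSAGEBOX("Error running statement " + TRANSFORM(lnI), "System Message")
        ENDIF
    ENDIF
ENDFOR
SQLDISCONNECT(lnHandle)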
Here are the steps I use to take data from a Visual FoxPro database and upload it to a MySQL database. These are all put into a custom method on a form, which is fired by a command button. For example, the method would be 'uploadnewdata' and I pass parameters for whichever data tables I need.
1) Connect to the server - I use MySQL ODBC.
2) Validate the user (this uses SQLEXEC to pull the matching record from the users table):
IF M.WorkingDatabase<>-1
    * Pull the users table from the server into a local cursor
    nRetVal=SQLEXEC(m.WorkingDatabase,"SELECT * FROM users", "csrUsersOnServer")
    SELECT csrUsersOnServer
    SELECT userid FROM csrUsersOnServer ;
        WHERE ALLTRIM(UPPER(userid))=ALLTRIM(UPPER(lcRanchUser)) ;
        AND ALLTRIM(UPPER(lcPassWord))=ALLTRIM(UPPER(lchPassWord)) ;
        INTO CURSOR ValidUsers
    IF _TALLY>=1
        * User and password match - carry on
    ELSE
        =MESSAGEBOX("Your Premise ID Does Not Match Any Records On The Server","System Message")
        RETURN 0
    ENDIF
ELSE
    =MESSAGEBOX("Unable To Connect To Your Database", "System Message")
    RETURN 0
ENDIF
3) Once that is successful, I create my base cursor (this is the one I'm sending from).
4) I then loop through that cursor, creating variables for the values in the fields.
5) Then, using SQLEXEC and INSERT INTO, I upload each record (a rough sketch of this loop is shown at the end of this answer).
6) Once the program has finished processing the cursor, it generates a messagebox with a 'finished' message and control returns to the form.
All the user has to do is select the starting table and enter their login information.
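A rough sketch of steps 4 and 5 (the cursor, field and table names are hypothetical; m.WorkingDatabase is the connection handle from step 1):

* Loop through the base cursor and insert each record on the server
SELECT csrBaseData                && the base cursor built in step 3
SCAN
    lcField1 = ALLTRIM(csrBaseData.field1)
    lnField2 = csrBaseData.field2
    * Parameterized insert via SQL pass-through; ?variables are bound by VFP
    IF SQLEXEC(m.WorkingDatabase, ;
        "INSERT INTO remotetable (field1, field2) VALUES (?lcField1, ?lnField2)") < 0
        =MESSAGEBOX("Insert failed for record " + TRANSFORM(RECNO("csrBaseData")), "System Message")
    ENDIF
ENDSCAN
=MESSAGEBOX("Finished Uploading", "System Message")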