In Progress 4GL, I am exporting values from multiple procedure files to a single CSV file. But when the second procedure (.p) file runs, the values written by the previous file get overwritten. How can I export the data from all the procedure files to a single CSV file? Thanks in advance.
The quick answer is to open the second and subsequent outputs to the file as
OUTPUT TO file.txt APPEND.
if that is all you need. If you are looking to do something more complex, then you could define and open a new shared stream in the calling program, and use that stream in each of the called programs, thus only opening and closing the stream once.
If you're using persistent procedures and functions, this answer may help, as it's a little more complex than normal shared streams.
I would really not suggest using a SHARED stream, especially with persistent procedures or OO. STREAM-HANDLEs provide a more flexible way of distributing the stream.
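A rough sketch of that idea (procedure names and the written values are placeholders, and the exact STREAM-HANDLE syntax may vary with your OpenEdge version): the caller opens one named stream and hands its handle to each .p, so the file is opened and closed exactly once.

/* caller.p -- open the stream once and hand it to each procedure */
DEFINE STREAM sOut.
OUTPUT STREAM sOut TO VALUE("all.csv").
RUN first.p  (INPUT STREAM sOut:HANDLE).
RUN second.p (INPUT STREAM sOut:HANDLE).
OUTPUT STREAM sOut CLOSE.

/* first.p (and second.p) -- write through the handle it was given */
DEFINE INPUT PARAMETER hOut AS HANDLE NO-UNDO.
PUT STREAM-HANDLE hOut UNFORMATTED "some,values,here" SKIP.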
So, as was previously suggested:
on the first job that runs you do:
OUTPUT TO file.txt.
and on all the other jobs that run after it you do:
OUTPUT TO file.txt APPEND.
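As a concrete sketch of that APPEND approach (the file name and the written values are just placeholders, not from the question):

/* first.p -- creates (or truncates) the file */
OUTPUT TO "all.csv".
PUT UNFORMATTED "value-from-first" SKIP.
OUTPUT CLOSE.

/* second.p and every later .p -- append instead of overwrite */
OUTPUT TO "all.csv" APPEND.
PUT UNFORMATTED "value-from-second" SKIP.
OUTPUT CLOSE.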
I have approximately 1000 files on a local drive, and I need to move those files into SQL Server one after another.
The local drive has files like file1.csv, file2.csv, ... up to file1000.csv, and I am sure the number of files may change dynamically.
I was able to create a template to move the files into SQL Server, but I have to process file2 only after file1 has been completely moved into SQL Server.
Is this possible in NiFi without using the Wait/Notify processors?
Can anyone please guide me on how to solve this?
You can use the EnforceOrder processor to process files sequentially; it is available as of NiFi 1.2.0.
https://gist.github.com/ijokarumawak/7e6158460cfcb0b5911acefbb455edf0
There is a Concurrent Tasks property on processors.
If you set it to 1 in each processor, they will run sequentially.
But maybe it's better to insert all the files into a temp table and then run the aggregation at the database level?
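A rough T-SQL sketch of that staging-table idea (the table, column names, delimiter, and file path are assumptions since the question doesn't describe the data; the per-file load could equally be done by NiFi's PutDatabaseRecord instead of BULK INSERT):

-- Staging table that every fileN.csv is loaded into first.
CREATE TABLE dbo.staging_rows (
    col1 varchar(100),
    col2 varchar(100)
);

-- One load per file.
BULK INSERT dbo.staging_rows
FROM 'C:\data\file1.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- Once everything has landed, aggregate in one pass at the database level.
SELECT col1, COUNT(*) AS row_count
FROM dbo.staging_rows
GROUP BY col1;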
I am looking for a way to use MySQL and Perl together in a program. Where I'm lost is that I have a .sql file that creates 3 tables for the Perl program to use. How do you:
1) Execute the file fileName.sql in Perl to create the tables
2) Link those created tables to manipulable variables in the Perl program (for example, being able to add a user to one of the tables)
Execute the file fileName.sql in Perl to create the tables
Usually you would set up the database in advance and use the command line mysql client or a GUI such as PHPMyAdmin to load the .sql file.
You could use a call to system to do the former though.
Link those created tables into manipulable variables in Perl Program
Low level access to databases in Perl is usually handled via the DBI module.
Getting something along the lines of a variable per table calls for an ORM. DBIx::Class is a popular choice for this.
In Perl you use the DBI database interface. In your case, you will also be using something like the DBD::mysql driver.
There is lots of help available on this topic (including lots of questions on this site).
As for the specific question of your .sql file, there are a few approaches you could take, depending on how fancy you want to get:
You could just copy and paste the commands into your program as you write it.
You could execute an external program that will run the .sql file (for example, by using system()).
You could programmatically read in the .sql file and send the commands from within your program. A module could help you with this (I found SQL::Script on CPAN, which looks useful, though I don't have any experience with it).
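A minimal sketch of that third approach with plain DBI (the database name, credentials, and the users table columns are assumptions; a .sql file containing stored routines or quoted semicolons would need a proper parser such as SQL::Script rather than a naive split):

use strict;
use warnings;
use DBI;

# Connect to MySQL through DBI/DBD::mysql.
my $dbh = DBI->connect('DBI:mysql:database=mydb;host=localhost',
                       'user', 'password',
                       { RaiseError => 1, AutoCommit => 1 });

# Read fileName.sql and run each statement; splitting on ";" is only
# good enough for simple CREATE TABLE scripts.
open my $fh, '<', 'fileName.sql' or die "Cannot open fileName.sql: $!";
my $sql = do { local $/; <$fh> };
close $fh;

for my $statement (grep { /\S/ } split /;/, $sql) {
    $dbh->do($statement);
}

# Example of manipulating one of the created tables: add a user.
$dbh->do('INSERT INTO users (name, email) VALUES (?, ?)',
         undef, 'alice', 'alice@example.com');

$dbh->disconnect;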
I suggest you pick an approach, try it, and ask if you have any specific problems.
Hi guys, I am looking for some help with the flat file source in a data flow task or the bulk insert task. Say I have incoming flat files; I can have
a;b;c or a|b|c
Is it possible to assign multiple column delimiters for the same flat file source?
I have been searching for how to do this.
Thank you very much.
The flat file source doesn't support this. See this similar question as a reference.
Instead you could use a Script Task to determine which delimiter is used and then forward the file to a flat file source configured with the suitable delimiter.
I ran into a similar problem and ended up using Swiss File Knife. Just preprocess the file and have it replace commas with pipes or vice versa. That way you only need to have one import.
You could also use a script transform in your flat file reader and use the String.Split method. I'd probably go with the SFK option though; it's a bit more transparent, although slightly less portable.
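If you go the script route, the core of it is just detecting which delimiter a file uses and splitting on it. A standalone C# sketch of that logic (the file path and output handling are placeholders; inside an actual SSIS Script Component the rows would come from the input buffer rather than File.ReadLines):

using System;
using System.IO;
using System.Linq;

class SplitEitherDelimiter
{
    static void Main()
    {
        var lines = File.ReadLines(@"C:\data\incoming.txt").ToList();
        if (lines.Count == 0) return;

        // Decide once, from the first line, whether this file uses '|' or ';'.
        char delimiter = lines[0].Contains("|") ? '|' : ';';

        foreach (var line in lines)
        {
            string[] fields = line.Split(delimiter);
            Console.WriteLine(string.Join(",", fields));
        }
    }
}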
I need to create several databases at once. I have the .BAK files for the databases, and I would like to loop through those files and have SQL Server create the databases based on the names of the .BAK files.
I already have a query to create a database but I seem to be having trouble with the loop.
How would I make SQL server check my .BAK files and create DBs accordingly?
Thanks!
I would take a different approach with this and leave the actual looping to a small program.
Have the file system handle the files, and you can issue a procedure (stored procedure) to do the restore directly from your application.
I know it's not the answer you are after just giving you additional ideas...
I would use an external tool to do something like this.
Use client-side scripting to browse the directory (PowerShell perhaps?) and then pass the .BAK file names to the SQL command line to create the databases.
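A hedged PowerShell sketch of that idea (the folder, instance name, and the assumption that the database name is simply the .BAK file name are all placeholders; a real restore may also need WITH MOVE clauses for the data and log files):

# Loop over the .BAK files and restore each one as a database named after the file.
$backupFolder = 'C:\Backups'

Get-ChildItem -Path $backupFolder -Filter '*.bak' | ForEach-Object {
    $dbName = $_.BaseName
    $sql = "RESTORE DATABASE [$dbName] FROM DISK = N'$($_.FullName)' WITH RECOVERY"
    # Invoke-Sqlcmd comes from the SqlServer module; sqlcmd.exe -Q would work as well.
    Invoke-Sqlcmd -ServerInstance 'localhost' -Query $sql
}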
You can do this with xp_cmdshell, but it's not recommended for the reasons they list in the article.
http://msdn.microsoft.com/en-us/library/ms175046.aspx
I want a Perl script that will write to a data file every time the MySQL database is updated. I don't mind the file growing, since every audited item will be stored separately.
Thank you, I will appreciate your help.
The module Log::Log4perl provides many different ways to log events to many types of output, including files. It would also allow you to set debug levels to turn this off if you needed to.
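A small sketch of how that could look (the log file path, the helper sub, and the surrounding DBI update are assumptions, not from the question):

use strict;
use warnings;
use Log::Log4perl qw(:easy);

# Append every audited item to its own line in audit.log.
Log::Log4perl->easy_init({ level => $INFO, file => '>>audit.log' });

sub audit_update {
    my ($table, $id, %changes) = @_;
    INFO("updated $table id=$id: "
         . join(', ', map { "$_=$changes{$_}" } sort keys %changes));
}

# Call this right after the database update succeeds, e.g.:
# $dbh->do('UPDATE users SET email = ? WHERE id = ?', undef, $email, $id);
# audit_update('users', $id, email => $email);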