I'm using PTC Integrity at my firm. Here we have an Excel file which I need to transfer to our SQL Database with a Perl script.
In Integrity itself, there is a number for a Member Revision. I can see this Revision Number when I type:
echo %MKSSI_REVISION1%
in the command line. I tried to write this in Perl, but it's really hard for me. The Perl script should look for the Excel file Database.xlsx in the path C:\Integrity_Sandbox\Database\Database.xlsx, then read the Member Revision number, and write this number to my SQL Database.
Does anyone have any ideas on how I can manage to do this?
Edit Solution:
my @result = `si revisioninfo --project=/Database/project.pj Database.xlsm`;
my $integrity_version = $result[2];
chomp $integrity_version;
my @fields = split(/: /, $integrity_version);
$integrity_version = $fields[1];
chomp $integrity_version;
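The snippet above only extracts the revision string; the remaining step from the question is writing that number to the SQL database. A minimal sketch of that step using DBI (the DSN, driver, table and column names here are assumptions for illustration, not from the original post):

use strict;
use warnings;
use DBI;
# Hypothetical connection details -- pick the DBD driver that matches your SQL database
my $dbh = DBI->connect('DBI:ODBC:MyDsn', 'db_user', 'db_password', { RaiseError => 1 });
# Hypothetical table "member_revisions" with columns file_name and revision
my $sth = $dbh->prepare('INSERT INTO member_revisions (file_name, revision) VALUES (?, ?)');
$sth->execute('Database.xlsm', $integrity_version);
$dbh->disconnect;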
I have a text file (employeedata.txt) with values like this:
ABC#VVV#JHY#ABC#VVV#JHY#ABC#VVV#JHY#ABC#VVV
BBN#NJU#NULL#ABC#VVV#JHY#ABC#VVV#JHY#ABC#OLJ
ABC#BYR#MPL#ABC#VVV#JHY#ABC#TGB#JHY#ABC#NULL
NMP#JDC#NULL#ABC#VVV#JHY#ABC#XCD#JHY#ABC#NULL
UJK#SAB#NULL#ABC#VVV#JHY#ABC#NBG#JHY#ABC#MPL
My text file contains 5,000 lines, and I have a table called Employee with columns like this:
id|EmployeLastName|EmployeFirstName|EmployeeAddress
In each line of my text file, EmployeLastName is in the first position, EmployeFirstName in the fourth position, and EmployeeAddress in the last position.
Example:
EmployeLastName#VVV#JHY#EmployeFirstName#VVV#JHY#ABC#VVV#JHY#ABC#EmployeeAddress
Now I want to read the text file line by line and insert each record into the Employee table using Perl 5.10.
I am a novice in Perl. How can I do it?
Well, you need to do some reading on the DBI module to get a grip on the code provided below -- it is time well invested when you work with databases.
NOTE: in this piece of code I read the data from the internal __DATA__ block.
use strict;
use warnings;
use Data::Dumper;
use DBI;
my $debug = 0;
my @fields = qw(id last first address); # Field names in Database
my(%record,$rv);
my $hostname = 'db_server_1'; # Database server name
my $database = 'db_employees'; # Database name
my $table = 'db_table'; # Table name
my $port = '3306'; # Database port [default]
my $user = 'db_user'; # Database user (placeholder)
my $password = 'db_password'; # Database password (placeholder)
# Define DSN
my $dsn = "DBI:mysql:database=$database;host=$hostname;port=$port";
# Connect to Database
my $dbh = DBI->connect($dsn, $user, $password, {RaiseError => 1});
# Define query
my $stq = qq(INSERT INTO $table (id,last,first,address) VALUES(?,?,?,?););
# Prepare query
my $sth = $dbh->prepare($stq);
$dbh->begin_work(); # Ok, we will do insert in one transaction
my $skip = <DATA>; # We skip header in data block
while( <DATA> ) {
chomp; # Strip the trailing newline so the last field stays clean
@record{@fields} = split /#/; # Fill the hash with record data
print Dumper(\%record) if $debug; # Look at the hash in debug mode
$rv = $sth->execute(@record{@fields}); # Execute query with data
print $DBI::errstr if $rv < 0; # If there is an error, show the ERROR message
}
$dbh->commit(); # Commit the transaction
$dbh->disconnect(); # Disconnect from DataBase
__DATA__
id#EmployeLastName#EmployeFirstName#EmployeeAddress
1#Alexander#Makedonsky#267 Mozarella st., Pizza, Italy
2#Vladimir#Lenin#12 Glinka st., Moscow, Italy
3#Donald#Trump#765 Tower ave., Florida, USA
4#Angela#Merkel#789 Schulstrafe st., Berlin, Germany
You can read data from a file with the following code:
use strict;
use warnings;
use Data::Dumper;
my $debug = 1;
my @fields = qw(id last first address); # Field names in Database
my(%record);
my $filename = shift
or die "Provide filename on command line";
open my $fh, '<', $filename
or die "Could not open $filename: $!";
while( <$fh> ) {
chomp;
@record{@fields} = split /#/;
print Dumper(\%record) if $debug;
}
close $fh;
As you are very fresh in Perl programming, you should probably start with Learning Perl, then move on to Programming Perl; when you get into trouble, visit the Perl Cookbook, and if you decide to dive into database programming, Programming the Perl DBI.
Well, it is nice to know Perl and DBI programming, but in your particular case you could load the data from the file directly by utilizing MySQL's own commands:
Loading Data into a Table
Sometimes it is much easier than it looks at first sight.
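For example, DBD::mysql can issue LOAD DATA LOCAL INFILE through DBI directly. A rough sketch, assuming a file laid out like the __DATA__ block above (id#last#first#address with a header line); the connection details are placeholders and the server must permit LOCAL INFILE:

use strict;
use warnings;
use DBI;
# mysql_local_infile=1 enables LOAD DATA LOCAL INFILE on the client side
my $dsn = "DBI:mysql:database=db_employees;host=db_server_1;mysql_local_infile=1";
my $dbh = DBI->connect($dsn, 'db_user', 'db_password', { RaiseError => 1 });
# Load the '#'-separated file straight into the table, skipping the header line
$dbh->do(q{
    LOAD DATA LOCAL INFILE 'employeedata.txt'
    INTO TABLE Employee
    FIELDS TERMINATED BY '#'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES
    (id, EmployeLastName, EmployeFirstName, EmployeeAddress)
});
$dbh->disconnect;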
Hope I don't upset anybody by asking too simple a question!
I have a requirement to export data from a SQL Server 2012 table, to a CSV file. This needs to be done either every hour, or ideally if it is possible, whenever a new record is created or an existing record is updated/deleted. The table contains a list of all Sites we maintain. I need to export this CSV file to a particular location, as there is an API from a third party database which monitors this location and imports CSV files from there.
The data to be extracted from SQL is:
Mxmservsite.siteid as Marker_ID, mxmservsite.name as Name, 'SITE' as Group, '3' as Status,
'' as Notes, mxmservsite.zipcode as Post_Code, 'GB' as Country, '' as Latitude,
'' as Longitude, '' as Delete
Where dataareaid='ansa'
Anyone have any clues how I can go about doing this? Sorry, I am a newbie with SQL and still learning the basics! I have searched for similar questions in the past, but haven't found anything. I know there is a utility called BCP, but I'm not sure whether that would be the best way, and if it is, how do I use it to run every hour, or whenever there is a record update/delete/insert?
Cheers
Here's some powershell that would do what you're after; just schedule it using the Windows Task Scheduler:
function Execute-SQLQuery {
[CmdletBinding()]
param (
[Parameter(Mandatory = $true)]
[string]$DbInstance
,
[Parameter(Mandatory = $true)]
[string]$DbCatalog
,
[Parameter(Mandatory = $true)]
[string]$Query
,
[Parameter(Mandatory = $false)]
[int]$CommandTimeoutSeconds = 30 #this is the SQL default
)
begin {
write-verbose "Call to 'Execute-SQLQuery': BEGIN"
$connectionString = ("Server={0};Database={1};Integrated Security=True;" -f $DbInstance,$DbCatalog)
$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString
$connection.Open()
}
process {
write-verbose "`n`n`n-----------------------------------------"
write-verbose "Call to 'Execute-SQLQuery': PROCESS"
write-verbose $query
write-verbose "-----------------------------------------`n`n`n"
$command = $connection.CreateCommand()
$command.CommandTimeout = $CommandTimeoutSeconds
$command.CommandText = $query
$result = $command.ExecuteReader()
$table = new-object "System.Data.DataTable"
$table.Load($result)
Write-Output $table
}
end {
write-verbose "Call to 'Execute-SQLQuery': END"
$connection.Close()
}
}
Execute-SQLQuery -DbInstance 'myServer\InstanceName' -DbCatalog 'myDatabase' -Query @"
select Mxmservsite.siteid as Marker_ID
, mxmservsite.name as Name
, 'SITE' as [Group]
, '3' as Status
, '' as Notes
, mxmservsite.zipcode as Post_Code
, 'GB' as Country
, '' as Latitude
, '' as Longitude
, '' as [Delete]
From mxmservsite --this wasn't in your original code
Where dataareaid='ansa'
"# | Export-CSV '.\MyOutputFile.csv' -NoType
Having something triggered on any change is possible; e.g. you could create a trigger on the table, then use xp_cmdshell to execute a script or similar, but that's going to lead to performance problems (triggers are often a bad option if used without being fully understood). Also, xp_cmdshell opens you up to some security risks.
There are many other ways to achieve this; currently I have a thing for PowerShell as it gives you loads of flexibility with little overhead.
Another option may be to look into using linked servers to allow your source database to directly update the target without need for CSV.
Another option - create a sql agent job that runs bcp.exe command to do the export for you, at any interval you want (every hour). With bcp.exe, you can specify your file location, column/row terminators, and the filtering query.
If you want to export at every change, you can add an after trigger as mentioned above, and simply exec the sql agent job, which will execute asynchronously. If you are concerned about performance, then you should test it out to understand the impact.
If you like @John's powershell script, stick it in a sql agent job and schedule it, if anything to keep all your SQL tasks centralized.
You'll need to specify the server name that you are currently on. You're not able to use a drive letter such as D$; you need to use a shared folder name. The following works in SQL Server 2012.
-- Declare report variables
DECLARE @REPORT_DIR VARCHAR(4000)
DECLARE @REPORT_FILE VARCHAR(100)
DECLARE @DATETIME_STAMP VARCHAR(14)
DECLARE @Statement VARCHAR(4000)
DECLARE @Command VARCHAR(4000)
--SET variables for the Report File
SET @DATETIME_STAMP = (SELECT CONVERT(VARCHAR(10), GETDATE(), 112) + REPLACE(CONVERT(VARCHAR(8), GETDATE(),108),':','')) -- Date Time Stamp with YYYYMMDDHHMMSS
SET @REPORT_DIR = '\\aServerName\SharedDirectory\' -- Setting where to send the report. The Server name and a Shared name, not a drive letter
SET @REPORT_FILE = @REPORT_DIR + 'Tables_' + @DATETIME_STAMP + '.csv' --the -t below is used for the csv file
--Create the CSV file report with all of the data. The @Statement variable must be used to use variables in the xp_cmdshell command.
SET @Statement = '"SELECT * FROM sys.tables" queryout "'+@REPORT_FILE+'" -c -t"," -r"\n" -S"CurrentServerName\Databasename" -T' --The -S must be used with the -T
SET @Command = 'bcp '+@Statement+' '
EXEC master..xp_cmdshell @Command
I am populating a table in MySQL from an XML file (containing more than a billion lines) using a Perl script to find the lines of interest. The script runs very smoothly until line 15M, but after that the time starts increasing almost exponentially.
For the first 1,000,000 lines it took ~12 s to parse them and write them to the database, but after 15M lines the time required to parse and write the same number of lines is ~43 s.
I increased innodb_buffer_pool_size from 128M to 1024M, as suggested at
Insertion speed slowdown as the table grows in mysql (answered by Eric Holmberg).
The time requirements came down to ~7 s and ~32 s respectively, but it is still slow, as I have a huge file to process and its time requirements keep on increasing.
I also removed the creation of any primary key and index, thinking that it might be causing some problem (not sure though).
Below is the code snippet:
$dbh = DBI->connect('dbi:mysql:dbname','user','password') or die "Connection Error: $DBI::errstr\n";
$stmt = "DROP TABLE IF EXISTS dbname";
$sth = $dbh->do($stmt);
$sql = "create table db(id INTEGER not null, type_entry VARCHAR(30) not null, entry VARCHAR(50))";
$sth = $dbh->prepare($sql);
$sth->execute or die "SQL Error: $DBI::errstr\n";
open my $fh1, '<', "file.xml" or die $!;
while (<$fh1>)
{
if ($_=~ m/some pattern/g)
{
$_=~ s/some pattern//gi;
$id = $_;
}
elsif ($_=~ m/some other pattern/)
{
$_=~ s/\s|some other pattern//gi;
$type = $_;
}
elsif ($_=~ m/still some other pattern/)
{
$_=~ s/still some other pattern//gi;
$entry = $_;
}
if($id ne "" && $type ne "" && $entry ne "")
{
$dbh->do('INSERT INTO dbname (id, type_entry, entry) VALUES (?, ?, ?)', undef, $id, $type, $entry);
}
}
The database would contain around 1.7 million entries. What more can be done to reduce the time?
Thanks in Advance
EDIT 1:
Thank you all for the help.
Since morning I have been trying to implement all that has been suggested and checking whether I get any significant reduction in time.
So what I did:
I removed matching the pattern twice, as suggested by @ikegami, but yes, I do need the substitution.
I made use of a hash (as suggested by @ikegami).
I used LOAD DATA LOCAL INFILE (as suggested by @ikegami, @ysth and @ThisSuitIsBlackNot), but I have embedded it into my code: the file is written dynamically by the script, and when it reaches 1000 entries it is loaded into the db.
The timings of the run for consecutive blocks of 1,000,000 lines are:
13 s
11 s
24 s
22 s
35 s
34 s
47 s
45 s
58 s
57 s .....
(Wanted to post the image but... reputation)
Edit 2:
I checked the timings again and tracked the time required by the script to write to the database; to my surprise, it is linear. What I am concluding from this is that there is some issue with the while loop, which I believe increases the time exponentially, as it has to get to the right line number for every iteration and, as it reaches deeper into the file, it has to count more lines to reach the next line.
Any comments on that?
EDIT 3
$start_time = time();
$line=0;
open my $fh1, '<', "file.xml" or die $!;
while (<$fh1>)
{
$line++;
if ($_=~ s/foo//gi)
{
$values{'id'} = $_;
}
elsif ($_=~ s/foo//gi)
{
$values{'type'} = $_;
}
elsif ($_=~ s/foo//gi)
{
$values{'pattern'} = $_;
}
if (keys(%values) == 3)
{
$no_lines++;
open FILE, ">>temp.txt" or die $!;
print FILE "$values{'id'}\t$values{'type'}\t$values{'pattern'}\n";
close FILE;
if ($no_lines == 1000)
{
#write it to database using `LOAD DATA LOCAL INFILE` and unlink the temp.txt file
}
undef %values;
}
if($line == ($line1+1000000))
{
$line1=$line;
$read_time = time();
$processing_time = $read_time - $start_time - $processing_time;
print "xml file parsed till line $line, time taken $processing_time sec\n";
}
}
ANSWER:
First, I would like to apologize for taking so long to reply; I started Perl again from the ground up, and this time came out clean with use strict, which helped me maintain linear time. Also, using an XML parser is a good thing to do when handling large XML files.
To add to that, there is nothing wrong with the speed of MySQL inserts; it is always linear.
Thanks, all, for the help and suggestions.
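On the XML parser point: a streaming parser such as XML::Twig processes one record at a time and lets you free it immediately, so memory stays flat even on huge files. A rough sketch; the element and field names below are made up, since the real XML layout was not shown:

use strict;
use warnings;
use XML::Twig;
my $twig = XML::Twig->new(
    twig_handlers => {
        # 'record' is a hypothetical element name -- use the real one from your file
        record => sub {
            my ($t, $elt) = @_;
            my $id    = $elt->first_child_text('id');
            my $type  = $elt->first_child_text('type_entry');
            my $entry = $elt->first_child_text('entry');
            # ... queue ($id, $type, $entry) for the batched LOAD DATA / INSERT here
            $t->purge;    # release the parsed chunk so memory does not grow
        },
    },
);
$twig->parsefile('file.xml');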
I'm guessing the bottleneck is the actual insertion. It will surely be a bit faster to generate the INSERT statements, place them in a file, then execute the file using the mysql command line tool.
You can experiment with creating INSERT statements that insert a large number of rows vs individual statements.
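For example, with DBI you can bind several rows per INSERT by repeating the placeholder group, so one round trip covers a whole batch. A rough sketch (table and column names follow the question; the batch size is arbitrary):

use strict;
use warnings;
use DBI;
my $dbh = DBI->connect('dbi:mysql:dbname', 'user', 'password', { RaiseError => 1 });
my @rows;   # each element is [ $id, $type, $entry ], collected while parsing
sub flush_rows {
    return unless @rows;
    my $placeholders = join ',', ('(?,?,?)') x @rows;   # one (?,?,?) group per row
    my $sql = "INSERT INTO dbname (id, type_entry, entry) VALUES $placeholders";
    $dbh->do($sql, undef, map { @$_ } @rows);           # single statement for the batch
    @rows = ();
}
# in the parse loop: push @rows, [ $id, $type, $entry ]; flush_rows() if @rows >= 500;
# after the loop:    flush_rows();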
Or maybe it's best to avoid INSERT statements entirely. I think the mysql command line tool has a facility to populate a database from a CSV file. That might possibly yield a little bit more speed.
Better yet, you can use LOAD DATA INFILE if you have access to the file system of the machine hosting the database.
Your Perl code could also use some cleaning up.
You search for every pattern twice? Change
if (/foo/) { s/foo//gi; $id = $_ }
to
if (s/foo//gi) { $id = $_ }
Actually, do you need a substitution at all? This might be faster
if (/foo (.*)/) { $id = $1 }
Looks like you might be able to do something more along the lines of
my ($k, $v) = split(/:\s*/);
$row{$k} = $v;
instead of that giant if.
Also, if you use a hash, then you can use the following for the last check:
if (keys(%row) == 3)
I have a Perl script that reads in data from a database and prints out the result in HTML forms/tables. The form of each book also contains a submit button.
I want Perl to create a text file (or read into one already created) and print the title of the book that was inside the form submitted. But I can't seem to get param() to catch the submit action!
#!/usr/bin/perl -w
use warnings; # Allow warnings to be sent if errors occur
use CGI; # Include CGI.pm module
use DBI;
use DBD::mysql; # Database data will come from mysql
my $dbh = DBI->connect('DBI:mysql:name?book_store', 'name', 'password')
or die("Could not make connection to database: $DBI::errstr"); # connect to the database with address and pass or return error
my $q = new CGI; # CGI object for basic stuff
my $ip = $q->remote_host(); # Get the user's ip
my $term = $q->param('searchterm'); # Set the search char to $term
$term =~ tr/A-Z/a-z/; # set all characters to lowercase for convenience of search
my $sql = '
SELECT *
FROM Books
WHERE Title LIKE ?
OR Description LIKE ?
OR Author LIKE ?
'; # Set the query string to search the database
my $sth = $dbh->prepare($sql); # Prepare to connect to the database
$sth->execute("%$term%", "%$term%", "%$term%")
or die "SQL Error: $DBI::errstr\n"; # Connect to the database or return an error
print $q->header;
print "<html>";
print "<body>";
print " <form name='book' action='bookcart.php' method=post> "; # Open a form for submitting the result of book selection
print "<table width=\"100%\" border=\"0\"> ";
my $title = $data[0];
my $desc = $data[1];
my $author = $data[2];
my $pub = $data[3];
my $isbn = $data[4];
my $photo = $data[5];
print "<tr> <td width=50%>Title: $title</td> <td width=50% rowspan=5><img src=$photo height=300px></td></tr><tr><td>Discreption Tags: $desc</td></tr><tr><td>Publication Date: $pub</td></tr><tr><td>Author: $author</td></tr><tr><td>ISBN: $isbn</td> </tr></table> <br>";
print "Add this to shopping cart:<input type='submit' name='submit' value='Add'>";
if ($q->param('submit')) {
open(FILE, ">>'$ip'.txt");
print FILE "$title\n";
close(FILE);
}
print "</form>"; # Close the form for submitting to shopping cart
You haven't used use strict, which would force you to declare all your variables. Not using it is a bad idea
You have used remote_host, which is the name of the client host system. Your server may not be able to resolve this value, in which case it will remain unset. If you want the IP address, use remote_addr
You have prepared and executed your SQL statement but have fetched no data from the query. You appear to expect the results to be in the array @data, but you haven't declared this array. You would have been told about this had you had use strict in effect
You have used the string '$ip'.txt for your file names so, if you were correctly using the IP address instead of the host name, your files would look like '92.17.182.165'.txt. Do you really want the single quotes in there?
You don't check the status of your open call, so you have no idea whether the open succeeded, or the reason why it may have failed
I doubt if you have really spent the last 48 hours coding this. I think it is much more likely that you are throwing something together in a rush at the last minute, and using Stack Overflow to help you out of the hole you have dug for yourself.
Before asking for the aid of others you should at least use minimal good-practice coding methods such as applying use strict. You should also try your best to debug your code: it would have taken very little to find that $ip has the wrong value and #data is empty.
Use strict and warnings. You want to use strict for many reasons; a decent article on this is over at PerlMonks, and you can begin with it: Using strict and warnings
You don't necessarily need the following line; you are using DBI and can access MySQL directly through DBI.
use DBD::mysql;
Many options are available with CGI; I would recommend reading the perldoc for it as well, depending on your preferences and what you want and need.
I would not use the following:
my $q = new CGI;
# I would use as so..
my $q = CGI->new;
Use remote_addr instead of remote_host to retrieve your ip address.
In the following line you are converting all uppercase characters to lowercase; unless you specifically need to query your database in lowercase, I find this unnecessary.
$term =~ tr/A-Z/a-z/;
Next, your $sql line: again, user preference, but I would look into sprintf or building the statement directly inside your calls. Also, you are trying to read an array of data that does not exist; where is the call to get your data back? I recommend reading the documentation for DBI as well, as there are many methods of returning your data. Say you want your data back in an array, for example...
Here is an untested example and hint to help get you started.
use strict;
use warnings;
use CGI qw( :standard );
use CGI::Carp qw( fatalsToBrowser ); # Track your syntax errors
use DBI;
# Get IP Address
my $ip = $ENV{'REMOTE_ADDR'};
# Get your query from param,
# I would also parse your data here
my $term = param('searchterm') || undef;
my $dbh = DBI->connect('DBI:mysql:db:host', 'user', 'pass',
{RaiseError => 1}) or die $DBI::errstr;
my $sql = sprintf ("SELECT * FROM Books WHERE Title LIKE '%%%s%%'
OR Description LIKE '%%%s%%'", $term, $term);
my $sth = $dbh->selectall_arrayref( $sql );
# Retrieve your result data from array ref and turn into
# a hash that has title for the key and a array ref to the data.
my %rows = ();
for my $i ( 0..$#{$sth} ) {
my ($title, $desc, $author, $pub, $isbn, $pic) = @{$sth->[$i]};
$rows{$title} = [ $desc, $author, $pub, $isbn, $pic ];
}
# Storing your table/column names
# in an array for mapping later.
my @cols;
$cols[0] = Tr(th('Title'), th('Desc'), th('Author'),
th('Published'), th('ISBN'), th('Photo'));
foreach (keys %rows) {
push @cols, Tr( td($_),
td($rows{$_}->[0]),
td($rows{$_}->[1]),
td($rows{$_}->[2]),
td($rows{$_}->[3]),
td(img({-src => $rows{$_}->[4]})));
}
print header,
start_html(-title => 'Example'),
start_form(-method => 'POST', -action => 'bookcart.php'), "\n",
table( {-border => undef, -width => '100%'}, @cols ),
submit(-name => 'Submit', -value => 'Add Entry'),
end_form,
end_html;
# Do something if submit is clicked..
if ( param('Submit') ) {
......
}
This assumes that you're using the OO approach to CGI.pm, and that $q is the relevant object. This should work, assuming that you have $q = new CGI somewhere in your script.
Can you post the rest of the script?
I've created a mockup to test this, and it works as expected:
#!/usr/bin/perl
use CGI;
my $q = new CGI;
print $q->header;
print "<form><input type=submit name=submit value='add'></form>\n";
if ($q->param('submit')) {
print "submit is \"" . $q->param('submit') . "\"\n";
}
After the submit button is clicked, the page displays that submit is "add" which means the evaluation is going as planned.
I guess what you need to do is make sure that $q is your CGI object, and move forward from there.
The script works well when run manually, but when I schedule it in a cronjob it shows:
malformed JSON string, neither array, object, number, string or atom, at character offset 0 (before "<html>\r\n<head><tit...") at /usr/local/lib/perl5/site_perl/5.14.2/JSON.pm line 171.
script itself:
#rest config variables
$ENV{'PERL_LWP_SSL_VERIFY_NONE'} = 0;
print "test\n";
my $client = REST::Client->new();
$client->addHeader('Authorization', 'Basic YWRtaW46cmFyaXRhbg==');
$client->addHeader('content_type', 'application/json');
$client->addHeader('accept', 'application/json');
$client->setHost('http://10.10.10.10');
$client->setTimeout(1000);
$useragent = $client->getUseragent();
print "test\n";
#Getting racks by pod
$req = '/api/v2/racks?name_like=2t';
#print " rekvest {$req}\n";
$client->request('GET', qq($req));
$racks = from_json($client->responseContent());
$datadump = Dumper (from_json($client->responseContent()));
crontab -l
*/2 * * * * /usr/local/bin/perl /folder/api/2t.pl > /dmitry/api/damnitout 2>&1
Appreciate any suggestion
Thank you,
Dmitry
It is difficult to say what is really happening, but in my experience 99% of issues with running stuff from crontab stem from differences in environment variables.
A typical way to debug this: at the beginning of your script, add a block like this:
foreach my $key (keys %ENV) {
print "$key = $ENV{$key}\n";
}
Run it in the console, look at the output, and save it in a log file.
Now repeat the same from crontab and save the output into a log file (you have already done that - this is good).
See whether there is any difference in environment variables between the two ways of running it and try to fix it. In Perl, the easiest way is probably to alter the environment by changing %ENV. After all differences are sorted out, there is no reason for this not to work right.
Good luck!
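Independent of the environment differences, it also helps to fail with a clear message when the endpoint returns an HTML error page instead of JSON. A small defensive sketch, reusing the $client and $req from the question:

$client->request('GET', $req);
my $code = $client->responseCode();
# Refuse to decode anything but an HTTP 200 -- the body is probably an HTML error page otherwise
die "Request for $req failed with HTTP $code:\n" . $client->responseContent()
    unless $code == 200;
my $racks = from_json($client->responseContent());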