Pre-process (classic) ASP page - sql-server-2008

I am running a classic vbscript ASP site with a SQL2008 database. There are a few pages that are processor-heavy, but don't actually change that often. Ideally, I would like the server to process these once a night, perhaps into HTML pages, that can then fly off the server, rather than having to be processed for each user.
Any ideas how I can make this happen?
The application itself works very well, so I am not keen to rewrite the whole thing in another scripting language, even if classic asp is a bit over the hill!!

Yes:
You didn't specify which parts of the pages are "processor heavy", but I will assume it's the query and processing of the SQL data. One idea is to retrieve the data and store it as a cached file in the filesystem. XML is a good choice for the data format.
Whereas your original code is something like this:
(pseudocode)
get results from database
process results to generate html file
...your modified code can look like this:
check if cache file exists
if not exist
    get results from database
    store results in cache file
get results from cache file
process results to generate html file
This is a general caching approach and can be applied to a situation where you've got
query parameters determining the output. Simply generate the name of the cache file based on all the constituent parameters. So if the results depend on query parameters named p1 and p2, then when p1 and p2 have the values 1234 and blue respectively, the cache file might be named cache-1234-blue.xml. If you have 5 distinct queries, you can cache them as query1-1234-blue.xml, query2-1234-blue.xml and so on.
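For illustration, here is a minimal classic ASP (VBScript) sketch of that pattern. GetResultsAsXml and RenderHtmlFrom are hypothetical stand-ins for your own data-access and rendering code, and the p1/p2 parameters are just examples:

<%
Dim p1, p2, cacheFile, fso, ts, xmlData
' Validate/sanitise these before using them in a file name.
p1 = Request.QueryString("p1")
p2 = Request.QueryString("p2")

' One cache file per distinct combination of parameters.
cacheFile = Server.MapPath(".") & "\cache-" & p1 & "-" & p2 & ".xml"
Set fso = CreateObject("Scripting.FileSystemObject")

If Not fso.FileExists(cacheFile) Then
    ' Expensive part: query the database and persist the results.
    xmlData = GetResultsAsXml(p1, p2)          ' hypothetical helper
    Set ts = fso.CreateTextFile(cacheFile, True)
    ts.Write xmlData
    ts.Close
End If

' Cheap part: read the cached data and render the page from it.
Set ts = fso.OpenTextFile(cacheFile, 1)        ' 1 = ForReading
xmlData = ts.ReadAll
ts.Close
RenderHtmlFrom xmlData                          ' hypothetical helper
%>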
You need not do this "nightly". You can include in your code a cache lifetime, and in place of the "if cache file exists" test, use "if cache file exists and is fresh". To do that just get the last modified timestamp on the cache file, and see if it is older than your cache lifetime.
Function FileOlderThan(fname, age)
    ' Returns True if the file is older than the age, specified in minutes,
    ' or if the file does not exist yet.
    Dim LastModified, FSO, DateDifference
    Set FSO = CreateObject("Scripting.FileSystemObject")
    If Not FSO.FileExists(fname) Then
        FileOlderThan = True
        Exit Function
    End If
    LastModified = FSO.GetFile(fname).DateLastModified
    DateDifference = DateDiff("n", LastModified, Now())
    If DateDifference > age Then
        FileOlderThan = True
    Else
        FileOlderThan = False
    End If
End Function

fname = Server.MapPath(".") & "\" & cacheFileName
If FileOlderThan(fname, 10) Then
    ' ... retrieve fresh data and rewrite the cache file ...
End If
This could be 10 minutes, 10 hours, 10 requests, whatever you like.
I said above that XML is a good choice for the data format in the cache file. An ADO Recordset has a Save method that can persist it as XML (adPersistXML), and you can also generate XML directly from SQL Server 2008 by appending a FOR XML clause to the query.
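As a rough sketch (the connection string, query, and cache file name are placeholders), persisting a Recordset to an XML cache file and re-opening it later might look like this:

<%
Const adPersistXML = 1

Dim conn, rs, fso, cacheFile
cacheFile = Server.MapPath(".") & "\cache-report.xml"   ' hypothetical cache file name

Set conn = CreateObject("ADODB.Connection")
conn.Open "Provider=SQLOLEDB;Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI;"   ' placeholder

Set rs = CreateObject("ADODB.Recordset")
rs.Open "SELECT Id, Name, Price FROM dbo.Products", conn, 3, 1   ' 3 = adOpenStatic, 1 = adLockReadOnly

' Persist the result set to disk as XML; Save errors if the file already
' exists, so remove any stale copy first.
Set fso = CreateObject("Scripting.FileSystemObject")
If fso.FileExists(cacheFile) Then fso.DeleteFile cacheFile
rs.Save cacheFile, adPersistXML

rs.Close
conn.Close

' On later requests the recordset can be re-opened straight from the XML
' file, with no database round trip:
Dim cachedRs
Set cachedRs = CreateObject("ADODB.Recordset")
cachedRs.Open cacheFile, "Provider=MSPersist;"
%>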
If the "processor heavy" part is not the query and retrieval, but is the generation of the html page, then you could apply the same sort of approach, but simply cache the html file directly.

Related

How do I download only my purchase invoice documents from Exact Online with Invantive Query Tool?

To comply with regulations, I'm trying to download the purchase invoice documents (as PDF files) from some of my divisions to save them on disk for archiving purposes.
I use Invantive Query Tool to do this. I'd like to know which table to use and how to export only the attachments relating to purchase invoice documents.
You can indeed do this by using the export options in Invantive Query Tool or Invantive Data Hub.
What you need is a query that hooks up the document information of type 20 (purchase invoices) with the actual attachment files. You can find a list of types and their description in the DocumentTypes view. You can find the document attachment files in the DocumentAttachmentFiles table.
When you have retrieved that, you can export the documents from that query to disk using a local export documents statement.
The full query is here:
use 123456

select /*+ join_set(dae, document, 10000) */ attachmentfromurl
,      dct.division || '/' || dae.id || '-' || filename filepath
from   exactonlinerest..documents dct
join   DocumentAttachmentFiles dae
on     dae.division = dct.division
and    dae.document = dct.id
where  dct.Type = 20
order
by     dct.division
,      dae.id

local export documents in attachmentfromurl to "c:\temp\docs" filename column Filepath
Make sure to set the ID of the division right in the use statement (this is the technical ID, not the 'division number', which can contain duplicates). You can find that in the top menu bar under Partitions. Or simply use use all to get the documents from all divisions (this might take a while).
Also set the file path right where it says c:\temp\docs now. Then hit F5 in the Query Tool to execute, or run the script from Data Hub.

processing multiple files in business objects data services

I am new to Business Objects Data Services.
I have to run a dataflow reading from a file. The filename should be matched using wildcards, like Platform*.csv. And I want to run the dataflow only if the file exists; if the file is not present, it should not error out or do anything, but should just move on to the next dataflow or workflow in the job.
I tried the below code to check whether the file exists, as the built-in function File_Exists cannot check for a file based on wildcards.
$FILEEXISTSFLAG = exec('/bin/ksh', '-c "ls xxxxxx/Platform*.csv"', 8);
My intention is that, based on the value assigned to $FILEEXISTSFLAG by the above code, I will decide whether to execute the dataflow or not (if $FILEEXISTSFLAG is null, do nothing; otherwise execute the dataflow), but it is retrieving the below output:
ls: cannot access /xxxxxx/Platform*.csv: No such file or directory
Is there any other way to achieve this?
I was able to solve the above problem by using the index function.
$FILEEXISTSFLAG contains a value like "ls: cannot access Platform: No such file or directory" when the file is missing. So I used the index function as below: if its output is null (the error message is not present, so the file exists), the dataflow is executed; otherwise it does nothing.
index($FILEEXISTSFLAG, 'No such file', 1)
Thanks,
Phani.

Importing flat file which has changing column order using SSIS [duplicate]

Problem.
I regularly receive feed files from different suppliers. Although the column names are consistent, the problem comes when some suppliers send text files with more or fewer columns in their feed file.
Furthermore, the column order in these files is inconsistent.
Other than the Dynamic Data Flow Task provided by CozyRoc, is there another way I could import these files? I am not a C# guru, but I am driven towards using a "Script Task" in the control flow or a "Script Component" data flow task.
Any suggestions, samples or direction will be greatly appreciated.
http://www.cozyroc.com/ssis/data-flow-task
Some forums
http://www.sqlservercentral.com/Forums/Topic525799-148-1.aspx#bm526400
http://www.bidn.com/forums/microsoft-business-intelligence/integration-services/26/dynamic-data-flow
Off the top of my head, I have a 50% solution for you.
The problem
SSIS really cares about metadata, so variations in it tend to result in exceptions. DTS was far more forgiving in this sense. That strong need for consistent metadata makes use of the Flat File Source troublesome.
Query based solution
If the problem is the component, let's not use it. What I like about this approach is that, conceptually, it's the same as querying a table: the order of columns does not matter, nor does the presence of extra columns.
Variables
I created 3 variables, all of type string: CurrentFileName, InputFolder and Query.
InputFolder is hard wired to the source folder. In my example, it's C:\ssisdata\Kipreal
CurrentFileName is the name of a file. During design time, it was input5columns.csv but that will change at run time.
Query is an expression "SELECT col1, col2, col3, col4, col5 FROM " + @[User::CurrentFileName]
Connection manager
Set up a connection to the input file using the JET OLEDB driver. After creating it as described in the linked article, I renamed it to FileOLEDB and set an expression on the ConnectionManager of "Data Source=" + @[User::InputFolder] + ";Provider=Microsoft.Jet.OLEDB.4.0;Extended Properties=\"text;HDR=Yes;FMT=CSVDelimited;\";"
Control Flow
My Control Flow looks like a Data flow task nested in a Foreach file enumerator
Foreach File Enumerator
My Foreach File enumerator is configured to operate on files. I put an expression on the Directory for @[User::InputFolder]. Notice that at this point, if the value of that folder needs to change, it'll correctly be updated in both the Connection Manager and the file enumerator. In "Retrieve file name", instead of the default "Fully Qualified", choose "Name and Extension"
In the Variable Mappings tab, assign the value to our @[User::CurrentFileName] variable
At this point, each iteration of the loop will change the value of @[User::Query] to reflect the current file name.
Data Flow
This is actually the easiest piece. Use an OLE DB source and wire it as indicated.
Use the FileOLEDB connection manager and change the Data Access mode to "SQL Command from variable." Use the @[User::Query] variable in there, click OK and you're ready to work.
Sample data
I created two sample files, input5columns.csv and input7columns.csv. All of the columns of 5 are in 7, but 7 has them in a different order (col2, for example, is the third column in one file and the seventh in the other). I negated all the values in 7 to make it readily apparent which file is being operated on.
col1,col3,col2,col5,col4
1,3,2,5,4
1111,3333,2222,5555,4444
11,33,22,55,44
111,333,222,555,444
and
col1,col3,col7,col5,col4,col6,col2
-1111,-3333,-7777,-5555,-4444,-6666,-2222
-111,-333,-777,-555,-444,-666,-222
-1,-3,-7,-5,-4,-6,-2
-11,-33,-77,-55,-44,-666,-222
Running the package shows both files being processed successfully through the same data flow.
What's missing
I don't know of a way to tell the query based approach that it's OK if a column doesn't exist. If there's a unique key, I suppose you could define your query to have only the columns that must be there and then perform lookups against the file to try and obtain the columns that ought to be there and not fail the lookup if the column doesn't exist. Pretty kludgey though.
Our solution: we use parent-child packages. In the parent package we take the individual client files and transform them into our standard-format files, then call the child package to process the standard import using the file we created. This only works if the client is consistent in what they send, though; if they try to change their format from what they agreed to send us, we return the file.

ssis 2005, write on excel files [duplicate]

I am working with SSIS 2008. I have a select query named sqlquery1 that returns some rows:
aq
dr
tb
This query is not implemented in the SSIS package at the moment.
I am calling a stored procedure from an OLE DB Source within a Data Flow Task. I would like to pass the data obtained from the query to the stored procedure parameter.
Example:
I would like to call the stored procedure by passing the first value aq
storedProcedure1 'aq'
then pass the second value dr
storedProcedure1 'dr'
I guess it would be something like a loop. I need this because the data generated by the OLE DB Source through the stored procedure needs to be sent to another destination, and this must be done for each record of sqlquery1.
I would like to know how to call the query sqlquery1 and pass its output to call another stored procedure.
How do I need to do this in SSIS?
Conceptually, your solution will look like this: execute your source query to generate your result set, store that into a variable, and then iterate through those results; for each row, call your stored procedure with that row's value and send the results into a new Excel file.
I'd envision your package looking something like this
An Execute SQL Task, named "SQL Load Recordset", attached to a Foreach Loop Container, named "FELC Shred Recordset". Nested inside there I have a File System Task, named "FST Copy Template", which precedes (via a precedence constraint) a Data Flow Task, named "DFT Generate Output".
Set up
As you're a beginner, I'm going to try and explain in detail. To save yourself some hassle, grab a copy of BIDSHelper. It's a free, open source tool that improves the design experience in BIDS/SSDT.
Variables
Click on the background of your Control Flow. With nothing selected, right-click and select Variables. In the new window that pops up, click the button that creates a New Variable 4 times. The reason for clicking on nothing is that until SQL Server 2012, the default behaviour of variable creation is to create them at the scope of the current object. This has resulted in many lost hairs for new and experienced developers alike. Variable names are case sensitive so be aware of that as well.
Rename Variable to RecordSet. Change the Data type from Int32 to Object
Rename Variable1 to ParameterValue. Change the data type from Int32 to String
Rename Variable2 to TemplateFile. Change the data type from Int32 to String. Set the value to the path of your output Excel File. I used C:\ssisdata\ShredRecordset.xlsx
Rename Variable3 to OutputFileName. Change the data type from Int32 to String. Here we're going to do something slightly advanced. Click on the variable and hit F4 to bring up the Properties window. Change the value of EvaluateAsExpression to True. In Expression, set it to "C:\\ssisdata\\ShredRecordset." + @[User::ParameterValue] + ".xlsx" (or whatever your file and path are). What this does, is configures a variable to change as the value of ParameterValue changes. This helps ensure we get a unique file name. You're welcome to change naming convention as needed. Note that you need to escape the \ any time you are in an expression.
Connection Managers
I have made the assumption you are using an OLE DB connection manager. Mine is named FOO. If you are using ADO.NET the concepts will be similar but there will be nuances pertaining to parameters and such.
You will also need a second Connection Manager to handle Excel. If SSIS is temperamental about data types, Excel is flat out psychotic-stab-you-in-the-back-with-a-fork-while-you're-sleeping about data types. We're going to wait and let the data flow actually create this Connection Manager to ensure our types are good.
Source Query to Result Set
The SQL Load Recordset is an instance of the Execute SQL Task. Here I have a simple query to mimic your source.
SELECT 'aq' AS parameterValue
UNION ALL SELECT 'dr'
UNION ALL SELECT 'tb'
What's important to note on the General tab is that I have switched my ResultSet from None to Full result set. Doing this makes the Result Set tab go from being greyed out to usable.
You can observe that I have assigned the Variable Name to the variable we created above (User::RecordSet) and that the Result Name is 0. That is important, as the default value, NewResultName, doesn't work.
FELC Shred Recordset
Grab a Foreach Loop Container and we will use that to "shred" the results that were generated in the preceding step.
Configure the enumerator as a Foreach ADO Enumerator. Use User::RecordSet as your ADO object source variable and select "Rows in the first table" as your Enumeration mode.
On the Variable Mappings tab, you will need to select your variable User::ParameterValue and assign it the Index of 0. This will result in the zeroth element in your recordset object being assigned to the variable ParameterValue. It is important that you have data type agreement as SSIS won't do implicit conversions here.
FST Copy Template
This is a File System Task. We are going to copy our template Excel file so that we have a well-named output file (it has the parameter name in it). Configure it as follows:
IsDestinationPathVariable: True
DestinationVariable: User::OutputFileName
OverwriteDestination: True
Operation: Copy File
IsSourcePathVariable: True
SourceVariable: User::TemplateFile
DFT Generate Output
This is a Data Flow Task. I'm assuming you're just dumping results straight to a file so we'll just need an OLE DB Source and an Excel Destination
OLEDB dbo_storedProcedure1
This is where your data is pulled from your source system with the parameter we shredded in the Control Flow. I am going to write my query in here and use the ? to indicate it has a parameter.
Change your Data access mode to "SQL Command" and in the SQL command text that is available, put your query
EXECUTE dbo.storedProcedure1 ?
I click the Parameters... button and fill it out as follows:
Parameters: @parameterValue
Variables: User::ParameterValue
Param direction: Input
Connect an Excel Destination to the OLE DB Source. Double-click and in the Excel Connection Manager section, click New... Determine whether you need 2003 or 2007 format (.xls vs .xlsx) and whether you want your file to have header rows. For your File Path, put in the same value you used for your @[User::TemplateFile] variable and click OK.
We now need to populate the name of the Excel Sheet. Click that New... button and it may bark that there is not sufficient information about mapping data types. Don't worry, that's semi-standard. It will then pop up a table definition something like
CREATE TABLE `Excel Destination` (
`name` NVARCHAR(35),
`number` INT,
`type` NVARCHAR(3),
`low` INT,
`high` INT,
`status` INT
)
The "table" name is going to be the worksheet name, or precisely, the named data set in the worksheet. I made mine Sheet1 and clicked OK. Now that the sheet exists, select it in the drop down. I went with the Sheet1$ as the target sheet name. Not sure if it makes a difference.
Click the Mappings tab and things should auto-map just fine so click OK.
Finally
At this point, if we ran the package it would overwrite the template file every time. The secret is we need to tell that Excel Connection Manager we just made that it needs to not have a hard coded name.
Click once on the Excel Connection Manager in the Connection Managers tab. In the Properties window, find the Expressions section and click the ellipsis (...). Here we will configure the ExcelFilePath property, and the Expression we will use is
@[User::OutputFileName]
If your icons and such look different, that's to be expected. This was documented using SSIS 2012. Your work flow will be the same in 2005 and 2008/2008R2 just the skin is different.
If you run this package and it doesn't even start and there is an error about the ACE 12 or Jet 4.0 something not available, then you are on a 64bit machine and need to tell BIDS/SSDT that you want to run in 32 bit mode.
Ensure the Run64BitRuntime value is False. This project setting can be found by right clicking on the project, expand the Configuration Properties and it will be an option under Debugging.
Further reading
A different example of shredding a recordset object can be found on How to automate the execution of a stored procedure with an SSIS package?

Classic ASP/MySQL - Parameter Issue

I have been trying to insert a huge text-editor string into my database. The application I'm developing allows my client to create and edit their website terms and conditions from the admin part of their website. So as you can imagine, they are incredibly long. I have got to 18,000+ characters in length and I have now received an error when trying to add another load of text.
The error I am receiving is this:
ADODB.Command error '800a0d5d'
Application uses a value of the wrong type for the current operation
Which points to this part of my application, specifically the Set newParameter line:
Const adVarChar = 200
Const adParamInput = 1
Set newParameter = cmdConn.CreateParameter("@policyBody", adVarChar, adParamInput, Len(policyBody), policyBody)
cmdConn.Parameters.Append newParameter
Now this policy I am creating, that is currently 18,000+ characters in length, is only half complete, if that. It could jump to 50 - 60,000! I tried using adLongVarChar = 201 ADO type but this still didn't fix it.
Am I doing the right thing for such a large entry? If I am doing the right thing, how can I fix this issue? ...or if I'm doing the wrong thing, what is the right one?
Try to avoid putting documents in your database if you can. Sometimes it's a reasonable compromise: serialised objects, markup snippets and such.
If you don't want to query the document with SQL, the only benefit is the all-in-one-place thing, i.e. when you back up your db you back up your documents as well, and you can use your db connectivity exclusively.
That said, nothing is free; carting all that stuff about in your database costs you.
If you can, have a documents table with a user-facing name for the file, an internal name in your documents directory (so the file name is unique in the file system), and a path description if there could be more than one directory.
Then just upload and download the selected document as a file, on a get or set of the related database entity.
You'll need to deal with deployment issues (the documents directory has to exist, and the account you are running the MySQL daemon as must be able to see it), but most of the time the issues you have keeping documents separate from the db are much easier to deal with than the head-scratchers you are running into now.
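As a rough classic ASP sketch of that approach (the Documents table, its columns, and the /docs folder are made-up names, and the web server account is assumed to have write access to that folder):

<%
' Write the large policy text to a file on disk instead of a table column.
Dim fso, ts, internalName, docPath
Set fso = CreateObject("Scripting.FileSystemObject")

' Timestamp-based internal name so it is unique in the file system.
internalName = "policy-" & Year(Now()) & "-" & Month(Now()) & "-" & Day(Now()) & "-" & Timer() & ".html"
docPath = Server.MapPath("/docs") & "\" & internalName

Set ts = fso.CreateTextFile(docPath, True)
ts.Write policyBody
ts.Close

' Store only a small metadata row in MySQL: a display name and the internal name.
Const adVarChar = 200
Const adParamInput = 1
Dim cmd
Set cmd = CreateObject("ADODB.Command")
Set cmd.ActiveConnection = cmdConn   ' the connection you already have open
cmd.CommandText = "INSERT INTO Documents (DisplayName, InternalName) VALUES (?, ?)"
cmd.Parameters.Append cmd.CreateParameter("@displayName", adVarChar, adParamInput, 100, "Terms and Conditions")
cmd.Parameters.Append cmd.CreateParameter("@internalName", adVarChar, adParamInput, 255, internalName)
cmd.Execute
%>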