How can I change a file date in AS3? - actionscript-3

In Windows/AIR/Flex I have a scenario where I download a file from a server, currently using FileStream, and I need the file to have the same modification date/time on the client computer as the web server. The file gets created on the client with the then current date/time.
I am already sending the date of the file on the web server for a different purpose. I have reviewed the documentation for File & FileStream and cannot see a property or method which would allow me to create a file with a specific date, or modify a file's modification date.
Is anyone aware of this capability in AIR? (v32.0)
Edit:
The purpose of this is to check whether the file I have locally is the same as the file on the web server. I currently check the dates on both sides, but the date of the downloaded file is set to the date/time it was downloaded. I would like it to be the date/time it was created on the file server.

Well, you can't change the creation date or modification date. You can check the modification date using f.modificationDate.getTime() and compare it to something you already have.
There is no API in AIR to change the creation date or modification date.
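The comparison itself is just epoch arithmetic. Here is a minimal sketch of that check in Python, purely for illustration (in AIR the local value would come from f.modificationDate.getTime(); both values are assumed to be epoch milliseconds):

import os

def needs_download(local_path, server_mtime_ms, tolerance_ms=2000):
    # server_mtime_ms is assumed to be the epoch-milliseconds value the
    # web server already sends for its copy of the file.
    if not os.path.exists(local_path):
        return True
    local_mtime_ms = os.path.getmtime(local_path) * 1000  # seconds -> ms
    # Allow a small tolerance for filesystem timestamp resolution.
    return abs(local_mtime_ms - server_mtime_ms) > tolerance_ms

In the scenario above, keep in mind that the local modification date reflects the download time, so the value you compare against has to be one you recorded yourself (the "something you already have" from the answer).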

Related

How to retrieve original pdf stored as MySQL mediumblob?

A table containing almost four thousand records includes a mediumblob field for each record that contains the record's associated PDF report. Under both MySQL Workbench and phpMyAdmin the relevant DOCUMENT column displays the data as a BLOB button or link. In the case of phpMyAdmin the link also indicates the size of the data the Blob contains.
The issue is that when the Blob button/link is clicked, opening any of the files in MySQL Workbench's SQL Editor only displays the raw Blob data, and under phpMyAdmin the link only allows the Blob data to be saved as a .bin file instead of displaying or saving it as a viewable PDF file. All previous attempts to retrieve the original PDFs using PHP have failed - see the related earlier thread: Extract Pdf from MySql Dump Saved as Text.
The filename field in the table shows that all the stored files are PDF files. Further research and tests indicate that the mediumblob data has been stored as application/octet-streams.
My question is how can the original PDFs be retrieved as readable PDFs? Is it possible for a .bin file saved from the database to be converted or used to recover the original PDF file?
Any assistance would be greatly appreciated.
In line with my assumption and Isaac's suggestion, the only solution was to speak to one of the software developers. It transpires that the documents were zipped using a third-party library, and the header was removed, before being stored in the database.
The third-party library used is version 2.0.50727 of Chilkat, available from www.chilkatsoft.com. That version no longer appears to be available, but hopefully at least one of the later versions may do the job.
Thanks again for everyone's input and assistance.
Based on the discussion in the comments, it sounds like you'll need to either refer to the original source code or consult with the original developer to determine exactly how the data was stored.
Using phpMyAdmin to download the mediumblob data as a file will download a .bin file in many cases. I actually don't recall how it determines content type (for instance, a PNG file will download with a .png extension, but most other binary files, PDF included, simply download as .bin when phpMyAdmin isn't sure what the extension should be). So the behavior you're seeing from phpMyAdmin is expected and correct, but since the .bin file doesn't work when it's renamed to .pdf, something has probably gone wrong with the import and upload.
BLOB data is often stored in a pretty standardized way, but it seems your data doesn't follow that method.
Without seeing the code directly, we can't know exactly what happened when the data was stored and would only be guessing.
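For anyone doing the same kind of diagnosis, here is a minimal sketch (assuming Python with the mysql-connector-python package; the connection details and table name are placeholders, with the DOCUMENT and filename columns taken from the question) that dumps one stored blob to disk and prints its leading bytes. That quickly shows whether you are looking at a plain PDF, a ZIP archive, or something with its header stripped:

import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="user", password="secret", database="reports"
)
cur = conn.cursor()
# Placeholder table name; DOCUMENT and filename are the columns mentioned above.
cur.execute("SELECT filename, DOCUMENT FROM report_table LIMIT 1")
filename, blob = cur.fetchone()

with open("dump.bin", "wb") as f:
    f.write(blob)

# A readable PDF starts with b'%PDF'; a ZIP archive starts with b'PK\x03\x04'.
print(filename, blob[:8])
conn.close()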

Using Azure Data Factory to read only one file from blob storage and load into a DB

I'd like to read just one file from a blob storage container and load it into a database with a copy activity, after the arrival of the file has set off a trigger.
Using the Microsoft documentation, the closest I seem to get is reading all the files in order of modified date.
Would anyone out there know how to read one file after it has arrived in my blob storage?
EDIT:
Just to clarify, I am looking to read only the latest file automatically, without hardcoding the filename.
You can specify a single Blob in the Dataset. This value can be hard coded or supplied via variables (using Dataset Parameters).
If you need to run this process whenever a new blob is created/updated, you can use the Event Trigger.
EDIT:
Based on your addition of "only the latest", I don't have a direct solution. Normally, you could use Lookup or GetMetadata activities, but neither they nor the expression language support sorting or ordering. One option would be to use an Azure Function to determine the file to process.
However, if you think about the Event Trigger I mention above, every time it fires, the file (blob) it fires for is the most recent one in the folder. If you want to coalesce this across a certain period of time, something like this might work:
Logic App 1 on event trigger: store the blob name in a log [blob, SQL, whatever works for you].
Logic App 2 OR ADF pipeline on recurrence trigger: read the log to grab the "most recent" blob name.
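If you do go the Azure Function route for the "only the latest" requirement, the core of it is just listing the container and sorting by last-modified time. A rough sketch with the azure-storage-blob Python SDK (the connection string, container name, and folder prefix are placeholders):

from azure.storage.blob import ContainerClient  # pip install azure-storage-blob

container = ContainerClient.from_connection_string(
    conn_str="<storage-connection-string>",
    container_name="input-container",
)

# List blobs under a folder prefix and pick the most recently modified one.
blobs = container.list_blobs(name_starts_with="incoming/")
latest = max(blobs, key=lambda b: b.last_modified)

# Return this name from the Function (e.g. in the HTTP response) and feed it
# into the dataset parameter used by the Copy activity.
print(latest.name, latest.last_modified)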

Using Apache NiFi to collect files from a 3rd party REST API - Flow advice

I am trying to create a flow within Apache NiFi to collect files from a 3rd party RESTful API, and I have set up my flow with the following:
InvokeHTTP - ExtractText - PutFile
I can collect the file that I am after, as I have specified this within my Remote URL; however, when I get all of the data from said file, it outputs hundreds of copies of the same file to my output directory.
3 things I need help with:
1: How do I get the flow to output the file as a readable .csv rather than just a file with no extension?
2: How can I stop the processor once I have all of the data that I need?
3: The JSON file that I have been supplied with gives me the option to get files from a certain date range:
https://api.3rdParty.com/reports/v1/scheduledReports/877800/1553731200000
Or I can choose a specific file:
https://api.3rdParty.com/reports/v1/scheduledReports/download/877800/201904/CTDDaily/2019-04-02T01:50:00Z.csv
But how can I get NiFi to automatically check for newer files, as this process will be running daily and we will be looking at downloading a new file each day?
If this is too broad, please help me by letting me know so I can edit this post.
Thanks.
Note: the 3rdParty host name has been renamed for security reasons, therefore the links will not work directly. Thanks.
1) You can change the filename of the flow file to anything you want using the UpdateAttribute processor. If you want to give it a ".csv" extension, you can add a property named "filename" with a value of "${filename}.csv" (without the quotes when you enter it).
2) By default most processors have a scheduling strategy of timer driven, 0 seconds, which means keep running as fast as possible. Go to the processor's configuration, on the Scheduling tab, and set an appropriate schedule; it sounds like you probably want CRON scheduling so you can run it daily.
3) You can use NiFi expression language statements to create dynamic time ranges. I don't fully understand the syntax for the API that you have to communicate with, but you could do something like this for the URL:
https://api.3rdParty.com/reports/v1/scheduledReports/877800/${now()}
Where now() would return the current timestamp as an epoch.
You can also format it to a date string if necessary:
${now():format('yyyy-MM-dd')}
https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html
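For example (purely illustrative; whether the API accepts arbitrary values in these path segments is an assumption), the Remote URL and date could be built with expressions like:

https://api.3rdParty.com/reports/v1/scheduledReports/877800/${now():toNumber()}
${now():toNumber():minus(86400000):format('yyyy-MM-dd')}

The first forces the current time into its epoch-milliseconds form; the second subtracts one day (86,400,000 ms) and formats it as a date string, which is useful when each daily run needs to fetch the previous day's file.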

JMeter Report Dashboard

The most recent version of JMeter has an option to generate a Report Dashboard, which is great, but I am struggling to customize it to match my needs.
I am running performance tests for every new version of the application.
Let's start from the current state of my reports.
I have a User-Defined Variable named Version. I change this for every new run of the performance test.
Also, there is a timestamp as a second type of comparison: it is possible to compare previous results of the same version, basically yesterday's results against today's.
I am using Flexible File Writer to save results to a CSV file. Using this plugin, it is perfectly easy to store the version number (the User-Defined Variable) in every row, which is important for the next step.
Results are imported into an Excel pivot table, from where you can do basically everything.
Now, the above is OK, but it would be great to have a consolidated report created directly from JMeter, and there are a few problems here.
The Report Dashboard is created from the JMeter log file, and here come the problems:
How do I pass a User-Defined Variable to the log file?
How do I make JMeter keep adding results to an existing log file?
Currently it asks for a new filename, so it's one test, one log file, and I need a few tests in one log file.
How do I modify JMeter properties to be able to compare results from multiple versions/dates using the JMeter Report Dashboard? Thanks.
You could use the JMeter Plugins MergeResults tool.
Add a prefix with the date to the merged results.
For example:
LOGIN for date 1 will be date1:LOGIN or 2017_01_16:LOGIN
LOGIN for date 2 will be date2:LOGIN or 2017_01_17:LOGIN
https://jmeter-plugins.org/wiki/MergeResults/
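Once the runs are merged into a single results file, the dashboard can be generated from it on the command line using JMeter's standard report-generation options (the file and folder names below are placeholders):

jmeter -g merged-results.jtl -o consolidated-dashboard

The -o output folder must not already exist (or must be empty) when the report is generated.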
Regards.
Vincent D.

How to load HTML data into SQL Server (non-table format)?

I'm posting this here because I couldn't find any such scenario on the web so far. I have a webpage which contains a set of reports in both XLS and PDF formats. I should be downloading the Excel files from the page and loading them into my database. I wish I could use the URL for the XLS file directly, but the problem is that the naming convention may keep changing every time (Sales_Quarter1.xlsx can be Sales_Q1.xlsx the next year). The only thing that would be constant in the following example is "Sales for Calendar Year". I need to look up the file that corresponds to this text and download it before loading it into a database table.
I would like to know from experts whether this would be possible.
<li>
<sub>Sales for Calendar Year 2015--All Countries </sub>
<a href="/Data/Downloads/Documents/Sales/Sales_Quarter1.xlsx">
<sub>[XLS]</sub></a><sub> , <sub>[PDF]</sub><sub>​</sub></sub>
</li>
PS: I am using SQL Server 2014.
Thanks!
Have a look at Integration Services. Create a package that pulls the web page using a Script Task, along with variables that hold the downloaded local filenames for the HTML file and Excel file (you will also have to parse the link out of the HTML file). Then use an Excel Source next in your package.
The variable for the Excel filename used in the Script Task will need to be set to ReadWrite as well.
You can also schedule the resulting package's execution via a SQL Agent job if you plan to run this on a recurring basis, placing logic into the script or the execution paths.
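The core logic the Script Task needs is small: fetch the page, find the <li> whose text contains "Sales for Calendar Year", take the href from its <a>, and download that file. Sketched here in Python for brevity (an SSIS Script Task would do the same in C# or VB.NET; the page URL and local filename are placeholders):

import re
import urllib.request
from urllib.parse import urljoin

page_url = "https://example.com/reports"  # placeholder for the real page
html = urllib.request.urlopen(page_url).read().decode("utf-8", errors="ignore")

# Walk each <li> block, keep the one whose text mentions the constant phrase,
# then pull the .xlsx href out of it and download the file.
for li in re.findall(r"<li>.*?</li>", html, flags=re.DOTALL):
    if "Sales for Calendar Year" in li:
        href = re.search(r'href="([^"]+\.xlsx)"', li).group(1)
        urllib.request.urlretrieve(urljoin(page_url, href), "Sales.xlsx")
        break

The downloaded path is what you would write back into the ReadWrite package variable so the Excel Source knows which file to load.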