I have downloaded the BaseElements plugin for FileMaker and managed to get it installed. I downloaded it specifically to make use of "BE_ExportFieldContents" (https://baseelementsplugin.zendesk.com/hc/en-us/articles/204700538-BE-ExportFieldContents), which allows me to export the contents of a container field from a server-side script. I have looked through the documentation and cannot seem to find any help on calling it.
Now that I have the function, I'm completely at a loss as to how to actually call it. I want to export something from the container field to the FileMaker documents path. So my question is: where and how the hell do I use this function in FileMaker? Apologies in advance for the noob question.
You make a script that calls this function on the record in question. That script can be run in the client, via a schedule on FileMaker Server, or via the Perform Script on Server script step.
The syntax is like this:
BE_ExportFieldContents ( field ; outputPath )
Where the ‘field’ parameter is the container field and the ‘outputPath’ is where you want the file to end up.
Usually you call such functions via the Set Variable script step. After execution, the variable contains the result of the call or any error it returned.
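For example, a minimal script step might look like this (the table, field, and file name are hypothetical; Get ( DocumentsPath ) is FileMaker's built-in function for the documents folder):

Set Variable [ $result ; Value: BE_ExportFieldContents ( Invoices::PDFContainer ; Get ( DocumentsPath ) & "invoice.pdf" ) ]

You can then inspect $result for whatever error or result the plugin returns.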
Note that the plugin needs to be installed and enabled on the server for it to work there.
Good evening,
I am currently developing a way to import machine-created data from a CSV file into a database.
The question I have is: is there a way to react to a change in a CSV file with Lua?
The file gets a line in this format:
17162H,"801234500001",9/23/2016 12:33:30 PM,"INV"
Every time a scanner finishes a scan, a new line is appended below the old ones, but there is no direct connection to the database that could trigger the script.
It doesn't matter whether the change is detected via the file size, the size of the containing folder, or the file's metadata (like the date it was last opened), but for performance reasons I can't keep the file open and read it permanently.
Also, this is the first time I've asked a question here, so sorry if my phrasing is clunky; I'll try to improve over time.
Take a look at linotify; it has Lua bindings for inotify and looks like it should do the trick, using the "modify" event to trigger your script.
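A minimal sketch along those lines (the file path is hypothetical; the API follows the linotify README):

local inotify = require('inotify')

local handle = inotify.init()
-- watch the CSV file; appends show up as IN_MODIFY events
handle:addwatch('/data/scans.csv', inotify.IN_MODIFY)

-- blocks and yields one event per change
for ev in handle:events() do
    -- re-read the new lines here
    print('file modified')
end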
I use a LibUV-based variant in my spylog application.
Usage:
file_monitor(path_to_file, {eol = '\r?\n'}, function(line)
    -- called once for every new line appended to the file
    ...
end)
If you need to run this on Windows, you can use the winapi library, which supports file watchers. Here is an example of how it's used in one of my projects; you'll need to call winapi.sleep() to allow time for the check to trigger.
Day # 2 of SQL -
I am trying to run a function that I made yesterday, but SSMS is looking at the "master" database and not my "Metrics" database, so it won't run; it says "Invalid object name".
I know this is a simple question, but I'm not even sure what the correct term is. Do I need to change my "scope"? My "focus"? My "active database"? Not sure how to look this up on Google.
Add the line USE Metrics before your function call.
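For example (assuming, as the fully qualified call further down suggests, that it's a scalar function; the argument here is just a placeholder):

USE Metrics;
GO

SELECT dbo.splitstringcomma('a,b,c');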
You can also change the database by using the dropdown list on the toolbar in the top left of Management Studio.
And of course, you can also fully qualify your call like this:
SELECT Metrics.dbo.splitstringcomma()
Adding USE YourDatabaseName at the start of all your scripts is a good habit to get into. That's my own preference.
On the SQL Editor toolbar you have the option to change the active database.
(In the screenshot in the linked article, HolTestDB is the current database.)
Read more from: http://msdn.microsoft.com/en-us/library/ms177264.aspx
I have a script that takes in multiple parameters and that I've documented with proper comment-based help (e.g. .SYNOPSIS, .DESCRIPTION, .PARAMETER). Several different users in my organization are going to use this PowerShell script: some know PowerShell and will call it from PowerShell with specific parameter values, and some don't know PowerShell and will simply right-click the script file in Windows Explorer and choose Run with PowerShell (so the parameters will use their default values).
My conundrum is what the best way to do this in PowerShell is without a bunch of duplicated code. The way I see it, these are my options:
1 - Just write a DoStuff.ps1 script that provides default values for all parameters. This allows it to be run directly from Windows Explorer, but feels clunky for the PowerShell users who want to use it as a function from their own scripts, since instead of writing:
Do-Stuff param1 param2
they will be doing:
.\DoStuff.ps1 param1 param2
2 - Within DoStuff.ps1, move the operations it performs into a DoStuff function, and after the function declaration call DoStuff with the parameters passed into the script. This would still allow the script to be run from Windows Explorer, and developers could dot-source the script into their own scripts to get access to the function. The downside is that when developers dot-source the script, the script will call the function with the default parameters (unless I give them an optional switch parameter that tells the script not to call the function). Even then, I would have to duplicate all of the script's help text so that it shows for both the script and the function (description, parameter descriptions, etc.).
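To make option 2 concrete, here's a rough sketch of what I mean (the names and the switch are hypothetical):

# DoStuff.ps1
param(
    [string]$Param1 = 'default1',
    [string]$Param2 = 'default2',
    [switch]$NoInvoke   # dot-sourcing developers pass this to skip the call
)

function DoStuff {
    param(
        [string]$Param1 = 'default1',
        [string]$Param2 = 'default2'
    )
    Write-Output "Doing stuff with $Param1 and $Param2"
}

if (-not $NoInvoke) {
    DoStuff -Param1 $Param1 -Param2 $Param2
}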
I can't think of any other options. Ideally I would just be able to write functions in a .ps1 file and tag one with a "default" keyword so that when the script itself is called, that function is run by default; but I don't think PowerShell provides anything like that.
What do you think is the best approach in this situation? Is there something I'm overlooking or don't know about? Thanks.
but feels clunky for the powershell users that want to use it as a function from their own scripts
Default parameters would seem, based on your description, to be the best (or, at least, least-worst) approach.
But rather than naming your script DoStuff.ps1, name it and call it so it can be invoked more like an internal function:
Name it with the dash: Do-Stuff.ps1
Remember you don't need to specify the .ps1 extension when calling it.
If the script is in a folder in $env:Path then you don't need to specify a path.
Also consider that a script can load a module from a relative path: you could put most of the code in a script module which the front-end (right-click) script loads and calls into, while script authors import the module themselves.
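A rough sketch of that layout (the file, function, and parameter names are all hypothetical):

# DoStuff.psm1 - the script module with the real work and the help text
function Do-Stuff {
    <#
    .SYNOPSIS
    Does stuff.
    #>
    param(
        [string]$Param1 = 'default1',
        [string]$Param2 = 'default2'
    )
    Write-Output "Doing stuff with $Param1 and $Param2"
}

# Do-Stuff.ps1 - thin front end for the right-click users
param(
    [string]$Param1 = 'default1',
    [string]$Param2 = 'default2'
)
Import-Module (Join-Path $PSScriptRoot 'DoStuff.psm1') -Force
Do-Stuff -Param1 $Param1 -Param2 $Param2

PowerShell-savvy users then skip the front-end script and Import-Module the .psm1 directly from their own code.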
Ok, I feel embarrassed that I wasn't able to figure this out on my own, but after a few wasted hours, I figured it would be easier to simply ask over here:
I have a bunch of .gs files in my Google Apps Script project. Now I want to call a function in another file from a method (something like AnotherClass.awesomeFunction(), which throws a ReferenceError, though). Is this possible in Google Apps Script? If so, how?
Files aren't classes. You can call any functions in any file from any other file. Think of your files as if they were just added together before running. If you want class-like scoping you can use the Libraries feature.
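For example (the file and function names here are made up), a function defined in one file is visible everywhere:

// File: Utils.gs
function awesomeFunction() {
  return 'hello from Utils.gs';
}

// File: Main.gs
function main() {
  // no class or file prefix needed; all .gs files share one global scope
  Logger.log(awesomeFunction());
}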
The above replies are correct about the files being appended together, but make sure the order of the files in the file explorer on the Apps Script project page is correct.
The function definition should be in an earlier file and the function call in a later one.
You can reorder each file by clicking the 3 dots next to the file name and selecting Move file up or Move file down.
The following syntax allows you to call any function from within your Google Apps Script project, regardless of whether the function is defined in the same file that is calling it:
myFunction();
The following is unnecessary in server-side code (google.script.run is only for calling server functions from client-side HTML) and will throw errors:
google.script.run.myFunction();
It can.
And Corey is right: files are not classes.
I'd just like to add that, in my experience so far, the order of files is not important. I'm working on a project where all the calls are at the start (to keep a clear tree) and all the function definitions are at the end; sometimes they're even mixed without any order within files. So, I guess, you can call a function from anywhere, regardless of its order within a file or within the project files. It's working in my case, anyway.
I am trying to write a script that will parse a local file and upload its contents to a MySQL database. Right now I am thinking that a batch script that runs a Perl script would work, but I am not sure if this is the best method of accomplishing this.
In addition, I would like this script to run immediately when the data file is added to a certain directory. Is this possible in Windows?
Thoughts? Feedback? I'm fairly new to Perl and Windows batch scripts, so any guidance would be appreciated.
You can use Win32::ChangeNotify. Your script will be notified when a file is added to the target directory.
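A rough sketch (the folder path and timeout are assumptions; see the module docs for the filter values):

use strict;
use warnings;
use Win32::ChangeNotify;

# watch for file-name changes (creates, deletes, renames) in the folder
my $notify = Win32::ChangeNotify->new('C:\\incoming', 0, 'FILE_NAME');

while (1) {
    my $result = $notify->wait(10_000);   # wait up to 10 seconds
    if ($result == 1) {
        $notify->reset;
        # something was added or removed; scan the folder and process new files
    }
}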
Checking a folder for newly created files can be implemented using the WMI functionality. Namely, you can create a Perl script that subscribes to the __InstanceCreationEvent WMI event that traces the creation of the CIM_DirectoryContainsFile class instances. Once that kind of event is fired, you know a new file has been added to the folder and can process it as you need.
These articles provide more information on the subject and contain VBScript code samples (hope it won't be hard for you to convert them to Perl):
How Can I Automatically Run a Script Any Time a File is Added to a Folder?
WMI and File System Monitoring
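For what it's worth, a rough Perl translation of the pattern in those articles might look like this (the folder path is an assumption, and the backslash escaping in the WQL query is the fussy part):

use strict;
use warnings;
use Win32::OLE;

my $wmi = Win32::OLE->GetObject('winmgmts:\\\\.\\root\\cimv2')
    or die "Cannot connect to WMI\n";

# poll every 10 seconds for new files appearing in c:\incoming
my $query = <<'WQL';
SELECT * FROM __InstanceCreationEvent WITHIN 10
WHERE TargetInstance ISA 'CIM_DirectoryContainsFile'
AND TargetInstance.GroupComponent = 'Win32_Directory.Name="c:\\\\incoming"'
WQL

my $events = $wmi->ExecNotificationQuery($query);
while (1) {
    my $event = $events->NextEvent;
    # PartComponent holds a CIM_DataFile reference to the new file
    print $event->{TargetInstance}{PartComponent}, "\n";
}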
The function you want is ReadDirectoryChangesW. A quick search for a Perl wrapper turns up the Win32::ReadDirectoryChanges module.
Your script would look something like this:
use strict;
use warnings;
use Win32::ReadDirectoryChanges;

my $path   = 'C:\\incoming';          # directory to watch
my $filter = 'FILE_NAME LAST_WRITE';  # which changes to report; see the module docs

my $rdc = Win32::ReadDirectoryChanges->new(
    path    => $path,
    subtree => 1,
    filter  => $filter,
);

while (1) {
    # blocks until something changes, then returns (action, filename) pairs
    my @results = $rdc->read_changes;
    while (scalar @results) {
        my ($action, $filename) = splice(@results, 0, 2);
        # ... run script ...
    }
}
You can easily achieve this in Perl using File::ChangeNotify. The module can be found on CPAN: http://search.cpan.org/dist/File-ChangeNotify/lib/File/ChangeNotify.pm
You can run the code as a daemon or as a service, make it watch one or more directories and then automatically execute some code (or start up a script) if some condition matches.
Best of all, it's cross-platform, so should you want to switch to a Linux machine or a Mac, it would still work.
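A minimal sketch (the directory and file pattern are assumptions):

use strict;
use warnings;
use File::ChangeNotify;

my $watcher = File::ChangeNotify->instantiate_watcher(
    directories => ['C:/incoming'],
    filter      => qr/\.csv$/,
);

# wait_for_events() blocks until something matching the filter changes
while (my @events = $watcher->wait_for_events) {
    for my $event (@events) {
        printf "%s: %s\n", $event->type, $event->path;
        # ... kick off the upload here ...
    }
}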
It wouldn't be too hard to put together a small C# application that uses the FileSystemWatcher class to detect files being added to a folder and then spawn the required script. It would certainly use less CPU / system resources / hard disk bandwidth than polling the folder at regular intervals.
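A bare-bones sketch of that idea (the folder, pattern, and script name are assumptions):

// watch a folder and hand each new file to the Perl upload script
using System;
using System.Diagnostics;
using System.IO;

class Watcher
{
    static void Main()
    {
        var watcher = new FileSystemWatcher(@"C:\incoming", "*.csv");
        watcher.Created += (sender, e) =>
            Process.Start("perl", $"upload.pl \"{e.FullPath}\"");
        watcher.EnableRaisingEvents = true;

        Console.WriteLine("Watching; press Enter to quit.");
        Console.ReadLine();
    }
}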
You need to consider what is a sufficient heuristic for determining "modified".
In increasing order of cost and accuracy:
file size (file content can still be changed as long as size is maintained)
file timestamp (if you aren't running ntpd, time is not monotonic)
file sha1sum (bulletproof but expensive)
I would run ntpd, loop over the timestamps, and compare the checksum only when a timestamp changes. This covers a lot of ground in little time.
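A rough polling sketch of that strategy (the path and interval are assumptions):

use strict;
use warnings;
use Digest::SHA;

my $file = 'C:/incoming/data.csv';
my ($last_mtime, $last_digest) = (0, '');

while (1) {
    my $mtime = (stat $file)[9] // 0;   # cheap timestamp check first
    if ($mtime != $last_mtime) {
        # timestamp moved; confirm with the expensive checksum
        my $digest = Digest::SHA->new(1)->addfile($file)->hexdigest;
        if ($digest ne $last_digest) {
            print "file really changed\n";
            # ... process the new contents ...
        }
        ($last_mtime, $last_digest) = ($mtime, $digest);
    }
    sleep 2;
}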
These methods are not appropriate for a computer security application, they are for file management on a sane system.