terraform GCP cloud function without having to deploy via terraform in CI or breaking past deployments when running locally? - google-cloud-functions

I have some working terraform definitions among a larger project:
resource "google_storage_bucket" "owlee_functions_bucket" {
name = "owlee_functions_bucket"
location = "europe-west2"
project = "owlee-software"
}
resource "google_storage_bucket_object" "archive" {
name = "index.zip"
bucket = google_storage_bucket.owlee_functions_bucket.name
source = "../apps/backend/dist/index.zip"
}
resource "google_cloudfunctions_function" "backend_function" {
name = "backend_function"
runtime = "nodejs16"
project = "owlee-software"
region = "europe-west2"
available_memory_mb = 128
source_archive_bucket = google_storage_bucket.owlee_functions_bucket.name
source_archive_object = google_storage_bucket_object.archive.name
trigger_http = true
entry_point = "OWLEE"
}
Then I'm trying to deploy via CI. For now, I'm just running terraform apply after zipping up the new version of the function to handle deployment.
It's not great, and I'd ideally like to change that to a non-terraform process, but that doesn't seem to be documented/possible anywhere, which makes me think I have the wrong approach here.
The second issue which is more urgent to solve --
I want to continue managing my infrastructure locally for now and do not want to have to zip up a new version of the function to deploy every time I have to run terraform apply locally.
Is there a way -- after its creation -- to avoid overwriting/uploading the function via terraform?
I'm guessing this would be somewhat necessary for the CI deployment to work anyway.
I've looked at a handful of other SO threads but they were looking at specifics around cloud-build and the artifacts registry.

I recommend that you deploy the Cloud Function with Terraform, but that the CI of the Cloud Function is handled by a Cloud Build trigger (also created by Terraform). I think this is the most logical solution, since Terraform manages the infrastructure, not the implementation of the Cloud Function.
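As a minimal sketch of that idea (assuming the function source lives in a GitHub repository already connected to Cloud Build; the trigger name, repo owner/name and cloudbuild.yaml file are placeholders, not from the question), the trigger itself can live next to the other resources:

resource "google_cloudbuild_trigger" "backend_function_ci" {
  name    = "backend-function-ci"   # placeholder name
  project = "owlee-software"

  github {
    owner = "your-github-org"       # placeholder
    name  = "your-repo"             # placeholder
    push {
      branch = "^main$"
    }
  }

  # cloudbuild.yaml in the repo would zip apps/backend/dist and run
  # `gcloud functions deploy ...`, so function deploys no longer go
  # through terraform apply at all.
  filename = "cloudbuild.yaml"
}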

Instead of using a fixed name as you are, use a random string or, depending on your needs, the commit hash, for example. This can be prefixed with other things to make it even more unique.
resource "random_string" "function" {
length = 8
special = false
keepers = {
commit_hash = var.commit_hash,
environment = var.environment,
}
}
resource "google_storage_bucket_object" "archive" {
name = "index.zip"
bucket = google_storage_bucket.owlee_functions_bucket.name
source = "../apps/backend/dist/${random_string.function.result}.zip"
}
resource "google_cloudfunctions_function" "backend_function" {
name = "backend_function"
runtime = "nodejs16"
project = "owlee-software"
region = "europe-west2"
available_memory_mb = 128
source_archive_bucket = google_storage_bucket.owlee_functions_bucket.name
source_archive_object = google_storage_bucket_object.archive.name
trigger_http = true
entry_point = "OWLEE"
}
This way, if you provide an environment such as prod and the same commit hash every time, it will create the same zip file.
If you provide a new environment, say "local", it will generate a new zip. You can then create multiple instances of functions, or make further changes to the google_cloudfunctions_function so that it can be used with workspaces (see the sketch below).
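For example, a minimal workspace-aware variation (my sketch, not part of the original answer) would derive the environment from the Terraform workspace, so each workspace gets its own function instance:

resource "google_cloudfunctions_function" "backend_function" {
  # e.g. backend_function-default, backend_function-local, backend_function-prod
  name = "backend_function-${terraform.workspace}"
  # ...the remaining arguments stay as shown above...
}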


Child process of .net core application is using parent's configuration file

I have 2 .NET Core 2.0 console applications. The first application calls the second one via System.Diagnostics.Process.Start(). Somehow the second app is inheriting the development configuration information located in the appsettings.development.json of the first app.
I execute the first app by running either dotnet run in the root of the project or dotnet firstapp.dll in the folder where the DLL exists. This is started from PowerShell.
Both apps are in separate directories. I'm not sure how this is happening.
UPDATE WITH CODE
The apps reside in
C:\Projects\ParentConsoleApp
C:\Projects\ChildConsoleApp
This is how I call the app from parent application:
System.Diagnostics.Process.Start("dotnet", "C:\\projects\\ChildConsoleApp\\bin\\Debug\\netcoreapp2.0\\publish\\ChildConsoleApp.dll" + $" -dt {DateTime.Now.Date.ToString("yyyy-MM-dd")}");
This is how I load the configuration from JSON (this is same in both apps):
class Program
{
    private static ILogger<Program> _logger;
    public static IConfigurationRoot _configuration;
    public static IServiceProvider Container { get; private set; }

    static void Main(string[] args)
    {
        RegisterServices();
        _logger = Container.GetRequiredService<ILogger<Program>>();
        _logger.LogInformation("Starting GICMON Count Scheduler Service");
        Configure();

        // At this point DBContext has value from parent! :(
        var repo = Container.GetService<ICountRepository>();
        var results = repo.Count(_configuration.GetConnectionString("DBContext"), args[0]);
    }

    private static void Configure()
    {
        string envvar = "DOTNET_ENVIRONMENT";
        string env = Environment.GetEnvironmentVariable(envvar);
        if (String.IsNullOrWhiteSpace(env))
            throw new ArgumentNullException("DOTNET_ENVIRONMENT", "Environment variable not found.");
        _logger.LogInformation($"DOTNET_ENVIRONMENT environment variable value is: {env}.");

        var builder = new ConfigurationBuilder().SetBasePath(Directory.GetCurrentDirectory()).AddJsonFile("appsettings.json");
        if (!String.IsNullOrWhiteSpace(env)) // environment == "Development"
        {
            builder.AddJsonFile($"appsettings.{env}.json", optional: true);
        }
        _configuration = builder.Build();
    }

    private static void RegisterServices()
    {
        var services = new ServiceCollection();
        services.AddSingleton<ILoggerFactory, LoggerFactory>();
        services.AddSingleton(typeof(ILogger<>), typeof(Logger<>));
        services.AddLogging((builder) => builder.SetMinimumLevel(LogLevel.Trace));
        var serviceProvider = services.BuildServiceProvider();
        var loggerFactory = serviceProvider.GetRequiredService<ILoggerFactory>();
        loggerFactory.AddNLog(new NLogProviderOptions { CaptureMessageTemplates = true, CaptureMessageProperties = true });
        loggerFactory.ConfigureNLog("nlog.config");
        Container = serviceProvider;
    }
}
The problem is caused by the fact that you set the base path for the configuration builder to the current working directory:
var builder = new ConfigurationBuilder().SetBasePath(Directory.GetCurrentDirectory()).AddJsonFile("appsettings.json");
When you create a child process, it inherits the current directory from the parent process (unless you set the current directory explicitly).
So the child process basically uses the JSON configs from the directory of the parent process.
There are several possible fixes:
Do not set the base path to the current directory.
When the application is launched, you don't know for sure that the current directory will match the directory where the application binaries are placed.
If you have an exe file in c:\test\SomeApp.exe and launch it from the command line while the current directory is c:\, then the current directory of your application will be c:\. In this case, if you set the base path for the configuration builder to the current directory, it will not be able to load configuration files.
By default, the configuration builder loads config files from AppContext.BaseDirectory, which is the directory where the application binaries are placed. This should be the desired behavior in most cases.
So just remove SetBasePath() call:
var builder = new ConfigurationBuilder().AddJsonFile("appsettings.json");
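If you prefer being explicit about it (a small variation on the same fix, assuming nothing beyond what is stated above), you can point the base path at the binaries directory yourself:

var builder = new ConfigurationBuilder()
    .SetBasePath(AppContext.BaseDirectory)   // directory of the app binaries, not the inherited working directory
    .AddJsonFile("appsettings.json");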
If for some reason you want to set the base path of the configuration builder to the current directory, then you should set the correct current directory for the launched child process:
var childDllPath = "C:\\projects\\ChildConsoleApp\\bin\\Debug\\netcoreapp2.0\\publish\\ChildConsoleApp.dll";
var startInfo = new ProcessStartInfo("dotnet", childDllPath + $" -dt {DateTime.Now.Date.ToString("yyyy-MM-dd")}")
{
    WorkingDirectory = Path.GetDirectoryName(childDllPath),
};
Process.Start(startInfo);
As @CodeFuller explained, the reason is that both apps read the same appsettings.{env}.json file. For simplicity, you may just rename the config file (and the corresponding name in .AddJsonFile) for the second app to prevent any possible overrides.
See, when you register a JSON file as a configuration source by .AddJsonFile, the configuration API allows you to use whatever file name you need, and you are not forced to use the same $"appsettings.{env}.json" pattern for both applications.
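For example, a minimal sketch of such a rename in the second app (the childappsettings file names are hypothetical):

// Child app only: load a differently named config file so it can never
// pick up the parent's appsettings*.json from an inherited working directory.
var builder = new ConfigurationBuilder()
    .AddJsonFile("childappsettings.json")
    .AddJsonFile($"childappsettings.{env}.json", optional: true);
_configuration = builder.Build();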

How do I write an SSIS WMI Event Watcher query for a network folder?

What I'm trying to do in SSIS is have a WMI Event Watcher Task which watches a folder for a file to be created, then does something with it. The primary part is the "watching the folder for file creation".
I have a network folder (full path): \\srvblah10\main\child\target\
All the sites I've gone to have this as an example:
SELECT * FROM __InstanceCreationEvent WITHIN 10
WHERE TargetInstance ISA "CIM_DirectoryContainsFile"
AND TargetInstance.GroupComponent = "Win32_Directory.Name=\"d:\\\\NewFiles\""
Since the folder is a network folder, I can't provide the physical disk letter. So is there a way to use a similar WQL query but for network folder paths as opposed to physical folder paths?
You have to map the drive with a DOS command:
net use s: \\srvblah10\main\child\target\ /user:dotnetN00b Pa$$word
then you can use the WMI Event Watcher Task to watch it.
I was trying to do this for a while, and finally gave up on trying to use the SSIS WMI Event Watcher task, and just wrote the equivalent in a Script task. The challenge was getting the WMI Event Watcher to make the remote connection with specific user credentials that I wanted to obtain from a configuration section (not hard-code into the package).
The second issue that would have made not using a script difficult was simply translating the network share into the local path name on the server, which the Event Watcher requires. You'll see from the script below that everything is accomplished with a minimum of effort.
Just an additional heads up: make sure to include the DTS variables the script uses in the ReadOnlyVariables (as normal). The code below requires three DTS variables; for example, if you are trying to watch for files being dropped in the location \\copernicus\dropoff\SAP\Import, then you would set the variables as shown below:
User::FileServerName - the hostname of the server where the share lives (copernicus)
User::ShareName - the name of the network share (dropoff)
User::ImportPath - the directory path of the directory to watch for new files in (\SAP\Import)
public void Main()
{
    string localPath = "";
    try
    {
        ConnectionOptions connection = new ConnectionOptions();
        connection.Username = "<valid username here>";
        connection.Password = "<password here>";
        connection.Authority = "ntlmdomain:<your domain name here>";

        ManagementScope scope = new ManagementScope(@"\\" + Dts.Variables["User::FileServerName"].Value.ToString() + @"\root\CIMV2", connection);
        scope.Connect();

        // Retrieve the local path of the network share from the file server
        string queryStr = string.Format("SELECT Path FROM Win32_Share WHERE Name='{0}'", Dts.Variables["User::ShareName"].Value.ToString());
        ManagementObjectSearcher mosLocalPath = new ManagementObjectSearcher(scope, new ObjectQuery(queryStr));
        foreach (ManagementObject elements in mosLocalPath.Get())
        {
            localPath = elements["Path"].ToString();
        }

        queryStr = string.Format(
            "SELECT * FROM __InstanceCreationEvent WITHIN 10 WHERE Targetinstance ISA 'CIM_DirectoryContainsFile' and TargetInstance.GroupComponent=\"Win32_Directory.Name='{0}{1}'\"",
            localPath.Replace(@"\", @"\\"),
            Dts.Variables["User::ImportPath"].Value.ToString().Replace(@"\", @"\\")); // query requires each separator to be a double backslash

        ManagementEventWatcher watcher = new ManagementEventWatcher(scope, new WqlEventQuery(queryStr));
        ManagementBaseObject eventObj = watcher.WaitForNextEvent();

        // Cancel the event subscription
        watcher.Stop();
        Dts.TaskResult = (int)ScriptResults.Success;
    }
    catch (ManagementException err)
    {
        Dts.Events.FireError((int)err.ErrorCode, "WMI File Watcher", "An error occurred while trying to receive an event: " + err.Message, String.Empty, 0);
        Dts.TaskResult = (int)ScriptResults.Failure;
    }
    catch (System.UnauthorizedAccessException unauthorizedErr)
    {
        Dts.Events.FireError((int)ManagementStatus.AccessDenied, "WMI File Watcher", "Connection error (user name or password might be incorrect): " + unauthorizedErr.Message, String.Empty, 0);
        Dts.TaskResult = (int)ScriptResults.Failure;
    }
}

Mallet Api - Get consistent results

I am new to LDA and mallet. I have the following query
I tried running Mallet-LDA from the command line and, by setting --random-seed to a fixed value, I was able to get consistent results for multiple runs of the algorithm.
However, when I tried the Mallet Java API, every time I run the program I get different output.
I did google around and found out that the random seed needs to be fixed, and I have it fixed in my Java code. I am still getting different results.
Could anyone let me know what other parameters I need to consider for consistent results (when run multiple times)?
I might add that train-topics, when run multiple times (command line), yields the same result. However, when I rerun import-dir and then run train-topics, the results do not match the previous ones (probably as expected).
I am OK with running import-dir just once and then experimenting with different numbers of topics and iterations by running train-topics.
Similarly, what needs to be changed / kept constant if I want to replicate this when I use the Java API?
I was able to solve this.
I will respond in detail here:
There are two ways in which Mallet can be run:
a. Command mode
b. Using the Java API
To get consistent results across different runs, we need to fix the 'random seed', and on the command line we have an option for setting it. No surprises there.
However, while using the API, though we have an option of setting the 'random seed', we need to know that it has to be done at the proper point, else it does not work (see the code below).
I have pasted the code here which creates a model (read: InstanceList) file from the data.
We can then use the same model file, set the random seed, and verify that we get consistent (read: same) results every time we run.
Creating and saving model for later use.
Note: Follow this link to see the format of the input file:
http://mallet.cs.umass.edu/ap.txt
public void getModelReady(String inputFile) throws IOException {
    if (inputFile != null && (!inputFile.isEmpty())) {
        List<Pipe> pipeList = new ArrayList<Pipe>();
        pipeList.add(new Target2Label());
        pipeList.add(new Input2CharSequence("UTF-8"));
        pipeList.add(new CharSequence2TokenSequence());
        pipeList.add(new TokenSequenceLowercase());
        pipeList.add(new TokenSequenceRemoveStopwords());
        pipeList.add(new TokenSequence2FeatureSequence());

        Reader fileReader = new InputStreamReader(new FileInputStream(new File(inputFile)), "UTF-8");
        CsvIterator ci = new CsvIterator(fileReader, Pattern.compile("^(\\S*)[\\s,]*(\\S*)[\\s,]*(.*)$"),
                3, 2, 1); // data, label, name fields

        InstanceList instances = new InstanceList(new SerialPipes(pipeList));
        instances.addThruPipe(ci);

        ObjectOutputStream oos;
        oos = new ObjectOutputStream(new FileOutputStream("Resources\\Input\\Model\\Model.vectors"));
        oos.writeObject(instances);
        oos.close();
    }
}
Once the model file is saved, the following uses it to generate topics:
public void applyLDA(ParallelTopicModel model) throws IOException {
    InstanceList training = InstanceList.load(new File("Resources\\Input\\Model\\Model.vectors"));
    logger.debug("InstanceList Data loaded.");

    if (training.size() > 0 && training.get(0) != null) {
        Object data = training.get(0).getData();
        if (!(data instanceof FeatureSequence)) {
            logger.error("Topic modeling currently only supports feature sequences.");
            System.exit(1);
        }
    }

    // IT HAS TO BE SET HERE, BEFORE CALLING THE addInstances METHOD.
    model.setRandomSeed(5);

    model.addInstances(training);
    model.estimate();
    model.printTopWords(new File("Resources\\Output\\OutputFile\\topic_keys_java.txt"), 25, false);
    model.printDocumentTopics(new File("Resources\\Output\\OutputFile\\document_topicssplit_java.txt"));
}
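For completeness, here is a rough usage sketch of the two methods above. The topic count, hyperparameters, iteration count, host class name and input path are illustrative assumptions, not taken from the original post; the single worker thread is there because multi-threaded estimation can reorder sampling and undo the benefit of a fixed seed.

// Hypothetical wiring of the two methods above; values are illustrative only.
TopicModelRunner runner = new TopicModelRunner();   // assumed class holding getModelReady/applyLDA
runner.getModelReady("Resources\\Input\\ap.txt");   // build and save Model.vectors once

ParallelTopicModel model = new ParallelTopicModel(50, 1.0, 0.01); // 50 topics, alphaSum, beta
model.setNumThreads(1);       // one thread keeps the sampling order stable across runs
model.setNumIterations(1000);
runner.applyLDA(model);       // applyLDA fixes the random seed before addInstances, as shown above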

Deleted files status unreliably reported in the new Google Drive Android API (GDAA)

This issue has been bugging me since the inception of the new Google Drive Android Api (GDAA).
First discussed here, I hoped it would go away in later releases, but it is still there (as of 2014/03/19). The user-trashed (referring to the 'Remove' action in 'drive.google.com') files/folders keep appearing in both the
Drive.DriveApi.query(_gac, query), and
DriveFolder.queryChildren(_gac, query)
as well as
DriveFolder.listChildren(_gac)
methods, even if used with
Filters.eq(SearchableField.TRASHED, false)
query qualifier, or if I use a filtering construct on the results
for (Metadata md : result.getMetadataBuffer()) {
    if ((md == null) || (!md.isDataValid()) || md.isTrashed()) continue;
    dMDs.add(new DrvMD(md));
}
Using
Drive.DriveApi.requestSync(_gac);
has no impact. And the time elapsed since the removal varies wildly; my last case was over 12 HOURS. And it is completely random.
What's worse, I can't even rely on EMPTY TRASH in 'drive.google.com'; it does not yield any predictable results. Sometimes the file status changes to 'isTrashed()', sometimes it disappears from the result list.
As I kept fiddling with this issue, I ended up with the following superawfulhack:
find file with TRASH status equal FALSE
if (file found and is not trashed) {
    try to write content
    if (write content fails)
        create a new file
}
Not even this helps. The file shows up as healthy even if the file is in the trash (and its status was double-filtered by query and by metadata test). It can even be happily written into, and when inspected in the trash, it is modified.
The conclusion here is that a fix should get higher priority, since it renders multi-platform use of Drive unreliable. It will be discovered by developers right away in the development / debugging process, steering them away.
While waiting for any acknowledgement from the support team, I devised a HACK that allows a workaround for this problem. Using the same principle as in SO 22295903, the logic involves falling back to the RESTful API, basically dropping the LIST / QUERY functionality of GDAA.
The high-level logic is:
query the RESTful API to retrieve the ID/IDs of the file(s) in question
use the retrieved ID to get GDAA's DriveId via 'fetchDriveId()'
Here are the code snippets that document the process:
1/ initialize both GDAA's 'GoogleApiClient' and RESTful's 'services.drive.Drive'
GoogleApiClient _gac;
com.google.api.services.drive.Drive _drvSvc;

void init(Context ctx, String email) {
    // build GDAA GoogleApiClient
    _gac = new GoogleApiClient.Builder(ctx).addApi(com.google.android.gms.drive.Drive.API)
        .addScope(com.google.android.gms.drive.Drive.SCOPE_FILE).setAccountName(email)
        .addConnectionCallbacks(ctx).addOnConnectionFailedListener(ctx).build();

    // build RESTful (DriveSDKv2) service to fall back to
    GoogleAccountCredential crd = GoogleAccountCredential
        .usingOAuth2(ctx, Arrays.asList(com.google.api.services.drive.DriveScopes.DRIVE_FILE));
    crd.setSelectedAccountName(email);
    _drvSvc = new com.google.api.services.drive.Drive.Builder(
        AndroidHttp.newCompatibleTransport(), new GsonFactory(), crd).build();
}
2/ method that queries the Drive RESTful API, returning GDAA's DriveId to be used by the app.
String qry = "title = 'MYFILE' and mimeType = 'text/plain' and trashed = false";
DriveId findObject(String qry) throws Exception {
DriveId dId = null;
try {
final FileList gLst = _drvSvc.files().list().setQ(query).setFields("items(id)").execute();
if (gLst.getItems().size() == 1) {
String sId = gLst.getItems().get(0).getId();
dId = Drive.DriveApi.fetchDriveId(_gac, sId).await().getDriveId();
} else if (gLst.getItems().size() > 1)
throw new Exception("more then one folder/file found");
} catch (Exception e) {}
return dId;
}
The findObject() method above (again, I'm using the 'await()' flavor for simplicity) returns the Drive objects correctly, reflecting the trashed status with no noticeable delay (implement in a non-UI thread).
Again, I would strongly advise AGAINST leaving this in code longer than necessary, since it is a HACK with unpredictable effects on the rest of the system.

Is there a way to generate a shortcut file with Adobe AIR?

Good afternoon,
I would like to create an application that can create folders and shortcuts to folders in the file system. The user will click a button and it will put a folder on their desktop that has shortcuts to files like //server/folder1/folder2. Can you create a desktop shortcut with code in Adobe AIR? How would you do that? How do you create a folder? I keep thinking this should be easy but I keep missing it.
Thank you for your help, and sorry for the trouble,
Justin
If your deployment profile is Extended Desktop, you may be able to use NativeProcess and some simple scripts that you could package with your app. This approach would entail handling the functionality on a per OS basis, which would take some work and extensive testing. However, I wanted to at least share a scenario that I verified does work. Below is a test case that I threw together:
Test Case: Windows 7
Even though the Adobe documentation says that it prevents execution of .bat files, apparently it doesn't prevent one from executing the Windows Scripting Host: wscript.exe. This means you can execute any JScript or VBScript files. And this is what you would use to write a command to create a shortcut in Windows (since Windows doesn't have a commandline command to create shortcuts otherwise).
Here's a simple script that creates a shortcut, which I found on giannistsakiris.com (converted to JScript):
// File: mkshortcut.js
var WshShell = new ActiveXObject("WScript.Shell");
var oShellLink = WshShell.CreateShortcut(WScript.Arguments.Named("shortcut") + ".lnk");
oShellLink.TargetPath = WScript.Arguments.Named("target");
oShellLink.WindowStyle = 1;
oShellLink.Save();
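For reference, run directly from a command prompt the script would be invoked roughly like this (the paths are placeholders); the /target and /shortcut named arguments map to WScript.Arguments.Named in the script above:

wscript.exe mkshortcut.js /target:"C:\path\to\target" /shortcut:"C:\Users\you\Desktop\My Shortcut"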
If you package this in your application in a folder named utils, you could write a function to create a shortcut like so:
public function createShortcut(target:File, shortcut:File):void {
    if (NativeProcess.isSupported) { // Note: this is only true under extendedDesktop profile
        var shortcutInfo:NativeProcessStartupInfo = new NativeProcessStartupInfo();

        // Location of the Windows Scripting Host executable
        shortcutInfo.executable = new File("C:/Windows/System32/wscript.exe");
        // Argument 1: script to execute
        shortcutInfo.arguments.push(File.applicationDirectory.resolvePath("utils/mkshortcut.js").nativePath);
        // Argument 2: target
        shortcutInfo.arguments.push("/target:" + target.nativePath);
        // Argument 3: shortcut
        shortcutInfo.arguments.push("/shortcut:" + shortcut.nativePath);

        var mkShortcutProcess:NativeProcess = new NativeProcess();
        mkShortcutProcess.start(shortcutInfo);
    }
}
If one wanted to create a shortcut to the Application Storage Directory on the Desktop, the following would suffice:
var targetLocation:File = File.applicationStorageDirectory;
var shortcutLocation:File = File.desktopDirectory.resolvePath("Shortcut to My AIR App Storage");
createShortcut(targetLocation, shortcutLocation);
Obviously there's a lot of work to be done to handle different OS environments, but this is at least a step.
As far as I know, the File class does not allow the creation of symbolic links. But you can create directories with createDirectory(): http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/filesystem/File.html#createDirectory%28%29
Check if this can be useful: http://www.mikechambers.com/blog/2008/01/17/commandproxy-net-air-integration-proof-of-concept/
AIR doesn't let you create shortcuts natively. Here's a workaround that works on Windows [it may work on Mac but I don't have a machine to test].
Using AIR, create a file that contains the following plain text:
[InternetShortcut]
URL=C:\path-to-folder-or-file
Replace path-to-folder-or-file with your folder/file name
Save the file as test.url
Windows recognizes this file as a shortcut.
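A minimal AIR-side sketch of those steps (the shortcut file name and target path below are placeholders):

// Write an InternetShortcut (.url) file to the desktop; Windows displays it as a shortcut.
var shortcut:File = File.desktopDirectory.resolvePath("My Folder.url");
var stream:FileStream = new FileStream();
stream.open(shortcut, FileMode.WRITE);
stream.writeUTFBytes("[InternetShortcut]\r\nURL=C:\\path-to-folder-or-file\r\n");
stream.close();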
It is possible to coerce Adobe Air into creating symbolic links, other useful things, on a Mac. Here's how I did it:
You will need AIRAliases.js - Revision: 2.5
In the application.xml add:
<!-- Enables NativeProcess -->
<supportedProfiles>extendedDesktop desktop</supportedProfiles>
In the Air app JavaScript:
// A familiar console logger
var console = {
    'log' : function(msg){ air.Introspector.Console.log(msg); }
};

if (air.NativeProcess.isSupported) {
    var cmdFile = air.File.documentsDirectory.resolvePath("/bin/ln");
    if (cmdFile.exists) {
        var nativeProcessStartupInfo = new air.NativeProcessStartupInfo();
        var processArgs = new air.Vector["<String>"]();
        nativeProcessStartupInfo.executable = cmdFile;
        processArgs.push("-s");
        processArgs.push("< source file path >");
        processArgs.push("< link file path >");
        nativeProcessStartupInfo.arguments = processArgs;

        nativeProcess = new air.NativeProcess();
        nativeProcess.addEventListener(air.NativeProcessExitEvent.EXIT, onProcessExit);
        nativeProcess.addEventListener(air.ProgressEvent.STANDARD_OUTPUT_DATA, onProcessOutput);
        nativeProcess.addEventListener(air.ProgressEvent.STANDARD_ERROR_DATA, onProcessError);
        nativeProcess.start(nativeProcessStartupInfo);
    } else {
        console.log("Can't find cmdFile");
    }
} else {
    console.log("Not Supported");
}

function onProcessExit(event) {
    var result = event.exitCode;
    console.log("Exit Code: " + result);
}

function onProcessOutput() {
    console.log("Output: " + nativeProcess.standardOutput.readUTFBytes(nativeProcess.standardOutput.bytesAvailable));
}

function onProcessError() {
    console.log("Error: " + nativeProcess.standardError.readUTFBytes(nativeProcess.standardError.bytesAvailable));
}
By altering the syntax of the command and the parameters passed to NativeProcess, you should be able to get real shortcuts on Windows too.