gcloudignore doesn't allow standard wildcard whitelist - google-cloud-functions

gcloudignore works like gitignore in that you can exclude certain files from being uploaded to GCF. With really large projects that contain lots of generated files, it can be useful to exclude everything except a few files.
.gcloudignore
# Ignore everything
# Or /*
*
# Except the Cloud Function files we want to deploy
!/package.json
!/index.js
The gcloudignore file above fails with: File index.js or function.js that is expected to define function doesn't exist in the root directory. In other words, index.js is ignored and cannot be read.
However, the following ignore-file syntax deploys just fine:
# Ignore everything
/[!.]*
/.?*
# Except the Cloud Function files we want to deploy
!/package.json
!/index.js
I tried peering into the gcloud source code without much luck, so I am wondering if anyone knows why this is the case?

I ran into this problem as well.
I found a workaround, which is to explicitly allow the current dir:
# Ignore everything by default
*
# Allow files explicitly
!foo.bar
!*.baz
# Explicitly allow current dir. `gcloud deploy` fails without it.
!.
Works for me. I don't know why for certain; my guess, since gcloudignore follows gitignore semantics, is that * also matches the current directory itself, and files under an ignored directory cannot be re-included, so un-ignoring . is what lets the ! rules for the individual files take effect.

OperationError: code=13, message=Error setting up the execution environment for your function. Please try deploying again after a few minutes.
I just ran into this problem recently. Error code 13 means, in my view, that the code/function failed to initialize (in time).
In my case, after checking the Stackdriver logs, the problem was mostly caused by either Provided module can't be loaded. (dependencies failed) or Provided code is not a loadable module. (main module is missing), both of which point to an incomplete deploy.
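You can also pull those logs from the command line rather than the console; something like this should work (my-function is a placeholder name):
gcloud functions logs read my-function --limit 50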
Below is how I got it to work:
# Ignore everything by default
/*
# Allow files explicitly
!config.js
!index.js
# Tried !app/ but does not work
!app/**
# Explicitly allow current folder, as Arne Jørgensen said.
!.
Be careful: !app/ isn't enough. I had to use !app/** to include all the files.
If this is your case, check the Source tab in the Functions console to see which files were actually uploaded.
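You can also check locally which files gcloud would upload (honoring .gcloudignore) before deploying; as far as I know, this does it:
gcloud meta list-files-for-upload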

How to set silent_functions(1) as default in Octave? [duplicate]

I'm new to Octave, and I want to run a few commands automatically on startup every time it opens.
I typed "help startup" and saw that Octave uses the file ".octaverc". I did a bit of searching online at https://www.math.utah.edu/docs/info/octave_4.html, and saw that the .octaverc file should be in the following path:
OCTAVE_HOME/lib/octave/VERSION/startup/octaverc
PROBLEM:
In that directory I don't have a startup folder, only "oct" and "site". I can see hidden files, which was my first thought since the file name begins with a "." character. So I then searched the directory with Agent Ransack, and still nothing came up.
QUESTION:
1) Do I have to make the startup folder and octaverc file myself?
2) If so, does one, both or none have to be hidden?
3) Can it be a txt file, or does it have a special extension?
4) Do I just type the commands straight into the file or is there special formatting?
NOTE:
In case I'm going about this the wrong way, these are the operations I'd like to run on startup:
PS1('>> '), addpath('D:\Users\Me\Desktop'), clc
Thanks ahead of time for the help!!
Possible locations (and their differences) for octaverc files are specified in the documentation.
In short, these are, from more general to specific:
octave-home/share/octave/site/m/startup/octaverc (most generic, for entire system)
octave-home/share/octave/version/m/startup/octaverc (to cover for more than one octave versions installed on the system, possibly requiring different startup scripts)
~/.octaverc (where ~ is unix-speak for a user's home directory -- covering for user-specific startup files)
.octaverc files in any directory, creating specific startup conditions for specific directories
octaverc files are effectively simple script files that are executed from most generic to most specific each time octave starts. Therefore, in the presence of conflicting commands, the more specific file can effectively be used to override the more generic behaviour.
Octave also supports (but does not recommend) the use of the startup.m file, for MATLAB compatibility.
You might also want to check out pathdef and savepath as well.
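For example, a minimal user-level ~/.octaverc covering the commands from the question might look like the sketch below (the path is the asker's; adjust as needed). It is just a plain Octave script, run at every startup:
PS1('>> ');                     % set the command prompt
addpath('D:\Users\Me\Desktop'); % add a folder to the load path
clc;                            % clear the command window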
As a more general tip, if you ever want to search for a specific keyword from the documentation (e.g. octaverc), you can type this kind of search query in duckduckgo (or google):
octaverc site:https://octave.org/doc/interpreter/
(or just download the documentation as pdf and search the pdf)
Found the solution, the file was in the following path:
OCTAVE_HOME/share/octave/site/m/startup
To find out where OCTAVE_HOME is for you, just type "OCTAVE_HOME" into your Octave command line window.
ANSWERS:
1) You do not have to make the startup folder or octaverc file yourself; one already exists.
2) The file is actually not hidden, so it should be easy to find given you're looking in the right place.
3) The file doesn't have an extension. It's just octaverc.
4) Under the last line of the existing file, you can just append commands as you would type them at the Octave command line window.
The latest (7.3.0) Octave version, installed at HERE:/, does not find the THERE:/openEMS/matlab directory even though it has already been loaded via octaverc or addpath. It keeps looking in the working directory, where openEMS is not placed, and does not recognize, for instance, the 'physical_constants.m' file.

browserstack+nightwatch custom commands configuration

I have a Nightwatch + BrowserStack configuration in my project, and I'm trying to add custom commands to compare two screenshots using resemble.js.
I configured my nightwatch.json file with this:
"custom_commands_path": "./node_modules/nightwatch/commands",
"custom_assertions_path": "./node_modules/nightwatch/assertions"
I put the command file in the folder and tried to run my test from every directory possible to see if it was a path problem. I've also tried different commands, some of them found online, and even the default example one. Whatever I run, it returns nameOfTheCommand is not a function. So I guess it does not even find the path to the custom commands from the nightwatch.json file.
Is there anything I'm missing here? I'm quite new, so the answer could be very simple, but I have tried every .json file in my project in case there was a special configuration linked to BrowserStack.
The custom commands path should point to the folder where you actually added your commands, not to Nightwatch's own ./node_modules/nightwatch/commands folder (that is where the built-in commands live).
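For example, if your command modules live in a commands/ folder at the project root (a hypothetical layout), the nightwatch.json entries would become:
"custom_commands_path": "./commands",
"custom_assertions_path": "./assertions"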
I've found that if I put them in the suite configuration file, it picks them up:
nightwatch_config = {
  src_folders: ["tests/suite/product/"],
  page_objects_path: "pages/product",
  custom_commands_path: "./custom_commands"
}
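Each file in that folder becomes a command named after the file. A minimal sketch of a classic-style command module (compareScreenshots is a hypothetical name; the actual resemble.js comparison is elided):
// custom_commands/compareScreenshots.js
exports.command = function (baselinePath, currentPath, callback) {
  // ...run the resemble.js comparison here...
  if (typeof callback === 'function') {
    callback.call(this); // invoke the user's callback, as built-in commands do
  }
  return this; // return the client API to allow chaining
};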

Using CocoaPods with Parse is giving me error "duplicate symbol _PFConfigParametersRESTKey"?

Currently, I am in the middle of migrating my iOS app from api.parse.com to my own server. In the guide I am following, I am at the point where I need to test the app's functionality with a local Parse Server. However, setting up a custom Parse Server requires having the latest Parse SDK, and I am running an older version. I am trying to update my frameworks via CocoaPods. My Podfile is as follows:
# Uncomment this line to define a global platform for your project
# platform :ios, '9.0'

target 'MYAPP' do
  # Uncomment this line if you're using Swift or would like to use dynamic frameworks
  # use_frameworks!

  # Pods for MYAPP
  pod 'ParseFacebookUtilsV4'
  pod 'Parse'
  #pod 'ParseTwitterUtils'
  pod 'ParseCrashReporting'
  pod 'ParseUI'

  target 'MYAPPTests' do
    inherit! :search_paths
    # Pods for testing
  end
end
When I try running the app, I get the following error:
duplicate symbol _PFConfigParametersRESTKey in:
/Users/ME/Library/Developer/Xcode/DerivedData/MYAPP/Build/Products/Debug-iphoneos/Parse/libParse.a(PFConfig.o)
/Users/ME/Library/Developer/Xcode/DerivedData/MYAPP/Build/Products/Debug-iphoneos/Parse/libParse.a(PFConfigController.o)
duplicate symbol _PFConfigParametersRESTKey in:
/Users/ME/Library/Developer/Xcode/DerivedData/MYAPP/Build/Products/Debug-iphoneos/Parse/libParse.a(PFConfig.o)
/Users/ME/Library/Developer/Xcode/DerivedData/MYAPP/Build/Products/Debug-iphoneos/Parse/libParse.a(PFCurrentConfigController.o)
ld: 2 duplicate symbols for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
I've been searching online for something to help, but nobody seems to be experiencing this problem. I think my case is unique, as I have yet to see someone who has two duplicate errors on the same symbol pointing to the same archive (libParse.a), with one shared file (PFConfig.o) and two differing ones (PFConfigController.o and PFCurrentConfigController.o). I've implemented a variety of solutions that would generally solve this "duplicate symbol" error, but I haven't had any success.
Things I have done:
Ensured that all manually added versions of these frameworks have been removed from the project.
Scanned the project directory up and down multiple times via Finder/Command Line/grep/find and could not find any duplicated frameworks.
Ensured I did not accidentally #import any .m files.
Checked for red files/duplicates in the Frameworks folder as well as the "Link Binary With Libraries" section of "Build Phases."
Checked my framework, header, and other linker paths and they seem to be alright. My "Other Linker Flags" contains "$(inherited)" and a -force_load call to a third party ".a" file for analytics.
Cleared ~/Library/Developer/Xcode/DerivedData as well as removed Pods/ and ran "pod install" multiple times.
I went on to investigate the problem in the Parse files. The only place where PFConfigParametersRESTKey is defined is here and here. This seems alright, since one of them is preceded by the extern keyword (reference here). I tried messing with the source files a little bit by making this variable not static and also by trying to rename one of them. Nothing worked. I cannot figure out where to look to fix this. If anybody can shed some light here, I would greatly appreciate it! Thank you.
The solution to my problem was to remove an -ObjC flag from the linker config. For some reason, -ObjC did not appear among the entries in the Linking section of Build Settings. The way I found it was by opening Pods/Pods-MYAPP.debug.xcconfig and manually removing the -ObjC flag from the OTHER_LDFLAGS variable.
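For illustration, the edit looked roughly like this (the exact set of flags in your xcconfig will differ):
// Pods-MYAPP.debug.xcconfig, before:
OTHER_LDFLAGS = $(inherited) -ObjC -l"Parse"
// after removing -ObjC:
OTHER_LDFLAGS = $(inherited) -l"Parse"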

How to pass directives to snappy_ec2 created clusters

We have a need to set some directives in the snappy config files for the various components (servers, locators, etc).
The snappy_ec2 scripts do a good job of creating all of the configs and keeping them in sync across the cluster, but I need to find a serviceable method of adding directives to the auto-generated scripts.
What is the preferred method using this script?
Example: Add the following to the 'servers' file:
-gemfirexd.disable-getall-local-index=true
Or perhaps I should add these strings to an environments file such as
snappy-env.sh
TIA
-doug
Have you tried adding the directives directly in the servers (or locators or leads) file and placing this file under (SNAPPY_DIR)/ec2/deploy/home/ec2-user/snappydata/? The script would read the conf files under this dir at the time of launching the cluster.
You'll need to specify it for each server you want to launch, with the name of the server as shown below. See the 'Specifying properties' section in the README, if you have not already done so. E.g.:
{{SERVER_0}} -heap-size=4096m -locators={{LOCATOR_0}}:9999,{{LOCATOR_1}}:9888 -J-Dgemfirexd.disable-getall-local-index=true
{{SERVER_1}} -heap-size=4096m -locators={{LOCATOR_0}}:9999,{{LOCATOR_1}}:9888 -J-Dgemfirexd.disable-getall-local-index=true
If you want it to be applied for all the servers, simply put it in snappy-env.sh as you mentioned (as SERVER_STARTUP_OPTIONS) and place the file under directory mentioned above.
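A sketch of that snappy-env.sh entry (assuming the variable is appended to, as in typical env scripts):
# snappy-env.sh -- startup options applied to every server
SERVER_STARTUP_OPTIONS="$SERVER_STARTUP_OPTIONS -J-Dgemfirexd.disable-getall-local-index=true"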
We could have read the conf files directly from (SNAPPY_DIR)/conf/ instead of making users copy them to the above location, but we may release the ec2 scripts as a separate package in the future, so that users do not have to download the entire distribution.

Make: Redo some targets if configuration changes

I want to reexecute some targets when the configuration changes.
Consider this example:
I have a configuration variable (that is either read from environment variables or a config.local file):
CONF:=...
Based on this variable CONF, I assemble a header file conf.hpp like this:
conf.hpp:
	buildConfHeader $(CONF)
Now, of course, I want to rebuild this header if the configuration variable changes, because otherwise the header would not reflect the new configuration. But how can I track this with make? The configuration variable is not tied to a file, as it may be read from environment variables.
Is there any way to achieve this?
I have figured it out. Hopefully this will help anyone having the same problem:
I build a file name from the configuration itself, so if we have
CONF:=a b c d e
then I create a configuration identifier by replacing the spaces with underscores, i.e.,
null:=
space:= $(null) #
CONFID:= $(subst $(space),_,$(strip $(CONF))).conf
which will result in CONFID=a_b_c_d_e.conf
Now, I use this $(CONFID) as dependency for the conf.hpp target. In addition, I add a rule for $(CONFID) to delete old .conf files and create a new one:
$(CONFID):
	rm -f *.conf # remove old .conf files; -f so there is no error when no .conf files are found
	touch $(CONFID) # create a new file with the right name

conf.hpp: $(CONFID)
	buildConfHeader $(CONF)
Now everything works fine. The file named $(CONFID) tracks the configuration used to build the current conf.hpp. If the configuration changes, then $(CONFID) will point to a non-existent .conf file. Thus, the first rule will be executed, the old .conf file will be deleted, and a new one will be created. The header will be updated. Exactly what I want :)
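Putting the pieces together, a complete sketch of the Makefile (buildConfHeader stands in for the asker's own generator; note that recipe lines must be indented with tabs):
# Configuration, e.g. from the environment or a config.local include
CONF := a b c d e

# Build a stamp-file name from the configuration, e.g. a_b_c_d_e.conf
null  :=
space := $(null) #
CONFID := $(subst $(space),_,$(strip $(CONF))).conf

# The stamp only has to be remade when its name (i.e. the configuration) changes
$(CONFID):
	rm -f *.conf # drop stamps from older configurations
	touch $(CONFID)

conf.hpp: $(CONFID)
	buildConfHeader $(CONF)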
There is no way for make to know what to rebuild if the configuration changed via a macro or environment variable.
You can, however, use a target that simply updates the timestamp of conf.hpp, which will force it to always be rebuilt:
.PHONY: confupdate

conf.hpp: confupdate
	buildConfHeader $(CONF)

confupdate:
	@touch conf.hpp
However, as I said, conf.hpp will always be rebuilt, meaning any targets that depend on it will need to be rebuilt as well. A much friendlier solution is to generate the makefile itself. CMake or the GNU Autotools are good for this, except that you sacrifice a lot of control over the makefile. You could also use a build script that creates the makefile, but I'd advise against that since there exist tools that will let you build one much more easily.