I'm looking to pass an Alchemy RPC endpoint URL, which contains its API key and is stored as an environment variable, into a Solidity test file executed with forge test. I want to use it to fork the Goerli testnet and potentially manage different forks in the same test context. The vm cheatcode below creates a local fork of Goerli at block_number:
vm.createSelectFork($GOERLI_ALCHEMY_SECRET_URL, block_number);
How do I pass $GOERLI_ALCHEMY_SECRET_URL to it from the environment?
To pass an environment variable into a test file, you can make use of vm.envString():
vm.createSelectFork(vm.envString("GOERLI_ALCHEMY_SECRET_URL"), block_number);
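For example, a minimal test sketch that selects a Goerli fork and keeps a second fork around in the same test context (the block number, the second MAINNET_ALCHEMY_SECRET_URL variable, and the contract name are placeholders of mine, not part of your setup):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

import "forge-std/Test.sol";

contract ForkTest is Test {
    uint256 goerliFork;
    uint256 mainnetFork; // second fork, only to illustrate multi-fork management

    function setUp() public {
        // The URL (including the API key) is read from the environment at run time
        goerliFork = vm.createSelectFork(vm.envString("GOERLI_ALCHEMY_SECRET_URL"), 8_000_000);
        mainnetFork = vm.createFork(vm.envString("MAINNET_ALCHEMY_SECRET_URL"));
    }

    function testRunsOnGoerliFork() public {
        assertEq(vm.activeFork(), goerliFork);
    }

    function testCanSwitchForks() public {
        vm.selectFork(mainnetFork);
        assertEq(vm.activeFork(), mainnetFork);
    }
}

Forge reads the variables from the environment of the forge test process, so export them before running the tests.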
I am working on externalizing our IScheduledExecutorService so I can run tasks on an external cluster. I am able to write a test and get the Runnable to actually run ONLY if I turn on user code deployment. If I want to change this task at all and run the tests again, I get the following in my external cluster member's logs:
java.lang.IllegalStateException: Class com.mycompany.task.ScheduledTask is already in local cache and has conflicting byte code representation
I want to be able to change the task if needed, redeploy, and have Hazelcast just handle it. I already do this kind of thing with our external maps: they can handle different versions of our objects using compact serialization.
Am I stuck using user code deployment for these functional objects? If I need to make a change to one, I have to change the class name and redeploy to production. I'm hoping to get this task right the first time and never have to do that, but I have a way of handling it if I do.
The cluster is already running in production, and I'll have to add the following to each member:
HZ_USERCODEDEPLOYMENT_ENABLED=true
and the appropriate client code (listed below) to enable this.
What I've done...
Added the following to my local Docker file:
HZ_USERCODEDEPLOYMENT_ENABLED=true
and also, in the code that creates a Hazelcast client connecting to my external cluster:
ClientConfig clientConfig = new ClientConfig();
ClientUserCodeDeploymentConfig clientUserCodeDeploymentConfig = new ClientUserCodeDeploymentConfig();
clientUserCodeDeploymentConfig.addClass("com.mycompany.task.ScheduledTask");
clientUserCodeDeploymentConfig.setEnabled(true);
clientConfig.setUserCodeDeploymentConfig(clientUserCodeDeploymentConfig);
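For reference, my understanding of the member-side equivalent when a member is started programmatically instead of via the HZ_USERCODEDEPLOYMENT_ENABLED environment variable (the class-cache-mode line is only my guess at the setting behind the "already in local cache" error, not something I've confirmed):

import com.hazelcast.config.Config;
import com.hazelcast.config.UserCodeDeploymentConfig;
import com.hazelcast.core.Hazelcast;

public class MemberWithUserCodeDeployment {
    public static void main(String[] args) {
        Config config = new Config();
        config.getUserCodeDeploymentConfig()
              .setEnabled(true)                                               // same effect as HZ_USERCODEDEPLOYMENT_ENABLED=true
              .setClassCacheMode(UserCodeDeploymentConfig.ClassCacheMode.OFF); // guess: don't cache deployed classes on the member
        Hazelcast.newHazelcastInstance(config);
    }
}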
However, if I remove those two pieces, I get the following exception and a failing test; the cluster doesn't know about my class at all.
com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.ClassNotFoundException: com.mycompany.task.ScheduledTask
Side Note:
We are already using compact serialization for several maps, and when I try to configure this Runnable task via compact serialization I get the error below. I don't think that's the right approach either.
[Scheduler: myScheduledExecutorService][Partition: 121][Task: 7afe68d5-3185-475f-b375-5a82a7088de3] Exception occurred during run
java.lang.ClassCastException: class com.hazelcast.internal.serialization.impl.compact.DeserializedGenericRecord cannot be cast to class java.lang.Runnable (com.hazelcast.internal.serialization.impl.compact.DeserializedGenericRecord is in unnamed module of loader 'app'; java.lang.Runnable is in module java.base of loader 'bootstrap')
at com.hazelcast.scheduledexecutor.impl.ScheduledRunnableAdapter.call(ScheduledRunnableAdapter.java:49) ~[hazelcast-5.2.0.jar:5.2.0]
at com.hazelcast.scheduledexecutor.impl.TaskRunner.call(TaskRunner.java:78) ~[hazelcast-5.2.0.jar:5.2.0]
at com.hazelcast.internal.util.executor.CompletableFutureTask.run(CompletableFutureTask.java:64) ~[hazelcast-5.2.0.jar:5.2.0]
I'm trying to fetch my variables from the CSV Data Set Config and add them to my Backend Listener in a distributed testing environment, like this. FYI, it works on my local machine.
Here is my test plan:
[Test Plan screenshot]
CSV Data Set Config:
[CSV config screenshot]
My csv looks like this:
SELECT count(*) FROM github_events;simpleQuery
SELECT count(*) FROM github_events;medium
SELECT count(*) FROM github_events;complexQuery
SELECT count(*) FROM github_events;simpleQuery
Backend Listener:
[Backend Listener screenshot]
I'm setting the CSV config variables as properties in a BeanShell PreProcessor like this:
props.put("query", "${QUERY}");
props.put("query_type", "${QUERY_TYPE}");
and that's why I have ${__P(query)} and ${__P(query_type)} in the Backend Listener.
The goal is to grab QUERY and QUERY_TYPE from the CSV Data Set Config and send them to the Backend Listener.
Any help would be appreciated. Let me know if I need to add more info on here. Thank you!
Solution:
Here's how I got this to work... it's kind of hacky, but it'll work for what I need:
I created a JSR223 PostProcessor on my JDBC Request and added the following code:
import groovy.json.*

def my_query = vars.get("QUERY")
def my_query_type = vars.get("QUERY_TYPE")
def json = JsonOutput.toJson([myQuery: my_query, myQueryType: my_query_type])
prev.setSamplerData(JsonOutput.prettyPrint(json))
This won't work if you need whatever is in your response data, but in my case it was okay to replace it. Note that this only works in my distributed test; to make it work locally, use prev.setResponseData instead. Hope this helps someone.
I don't think you can: as of JMeter 5.4.1, all fields of the Backend Listener are populated in the "testStarted" phase,
and the same applies to your custom listener.
This means the JMeter Variables originating from the CSV Data Set Config don't exist yet at the time the Backend Listener is initialized, so your references to JMeter Properties return the default value of 1 because no such properties have been set.
If you're looking to dynamically send metrics to Azure, you will need to replicate the code from the Azure Backend Listener in a JSR223 Listener using the Groovy language.
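As a rough illustration only (the endpoint URL and payload shape below are placeholders, not the real Azure Backend Listener protocol), a JSR223 Listener written in Groovy can read the per-sample variables and forward them itself, because it runs after each sample when the CSV variables do exist:

import groovy.json.JsonOutput

// Standard JSR223 bindings: prev is the SampleResult, vars holds the JMeter Variables
def payload = JsonOutput.toJson([
        label    : prev.getSampleLabel(),
        elapsedMs: prev.getTime(),
        success  : prev.isSuccessful(),
        query    : vars.get('QUERY'),
        queryType: vars.get('QUERY_TYPE')
])

// Placeholder endpoint - replace with whatever your metrics backend expects
def connection = new URL('https://example.invalid/metrics').openConnection()
connection.setRequestMethod('POST')
connection.setDoOutput(true)
connection.setRequestProperty('Content-Type', 'application/json')
connection.outputStream.withWriter('UTF-8') { it << payload }
log.info('Metrics endpoint responded with HTTP ' + connection.responseCode)

Keep in mind this sends one request per sample, so for high-throughput tests you would want to batch the payloads rather than post them individually.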
The only way this could appear to work on your local machine is:
You run your test plan in GUI mode the 1st time: it fails, but it sets the properties.
You run your test plan in GUI mode a 2nd time: it passes, but it uses the property values left over from the previous run.
And so on.
I want to deploy an sklearn model in SageMaker. I created a training script.
scriptPath = 'sklearn.py'

sklearn = SKLearn(entry_point=scriptPath,
                  train_instance_type='ml.m5.xlarge',
                  role=role,
                  output_path='s3://{}/{}/output'.format(bucket, prefix),
                  sagemaker_session=session)

sklearn.fit({'train-dir': train_input})
When I deploy it with
predictor = sklearn.deploy(initial_instance_count=1, instance_type='ml.m5.xlarge')
it throws:
ClientError: An error occurred when calling the CreateModel operation: Could not find model data at s3://tree/sklearn/output/model.tar.gz
Can anyone say how to solve this issue?
When deploying models, SageMaker looks in S3 to find your trained model artifact. It seems that there is no trained model artifact at s3://tree/sklearn/output/model.tar.gz. Make sure your training script persists the model artifact to the appropriate local location inside the container, which is /opt/ml/model.
For example, in your training script this could look like:
joblib.dump(model, '/opt/ml/model/mymodel.joblib')
After training, SageMaker will copy the content of /opt/ml/model to S3 at the output_path location.
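To make the above concrete, here is a minimal sketch of what such a training script could look like (the CSV file name, the label column, and the RandomForest choice are placeholders of mine; I've named the sketch train.py because a script literally called sklearn.py can shadow the sklearn package; model_fn is the hook the scikit-learn serving container looks for at inference time):

# train.py -- illustrative sketch only
import argparse
import os

import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier


def model_fn(model_dir):
    """Loaded by the SageMaker scikit-learn serving container at inference time."""
    return joblib.load(os.path.join(model_dir, 'model.joblib'))


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    # SageMaker exposes the model directory via SM_MODEL_DIR; channel data is
    # mounted under /opt/ml/input/data/<channel_name> (here the channel is 'train-dir')
    parser.add_argument('--model-dir', default=os.environ.get('SM_MODEL_DIR', '/opt/ml/model'))
    parser.add_argument('--train', default='/opt/ml/input/data/train-dir')
    args = parser.parse_args()

    train = pd.read_csv(os.path.join(args.train, 'train.csv'))  # assumed file name
    model = RandomForestClassifier().fit(train.drop('label', axis=1), train['label'])

    # Everything written here ends up in model.tar.gz at output_path
    joblib.dump(model, os.path.join(args.model_dir, 'model.joblib'))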
If you deploy in the same session, calling .deploy() on the estimator will automatically map to the artifact path. If you want to deploy a model that you trained elsewhere, possibly in a different session or on different hardware, you need to explicitly instantiate a model before deploying:
from sagemaker.sklearn.model import SKLearnModel

model = SKLearnModel(
    model_data='s3://...model.tar.gz',  # your artifact
    role=get_execution_role(),
    entry_point='script.py')  # script containing inference functions

model.deploy(
    instance_type='ml.m5.xlarge',
    initial_instance_count=1,
    endpoint_name='your-endpoint-name')  # endpoint names may not contain underscores
See more about Sklearn in SageMaker here https://sagemaker.readthedocs.io/en/stable/using_sklearn.html
Is there an "official" solution for passing sensitive information, such as API keys, to Google Cloud Functions? In particular it would be nice to avoid passing this information as arguments to the function since it will be the same for every invocation. AWS Lambda has a built-in solution using encrypted environment variables for this. Is there some similar approach for Google Cloud Functions?
I could imagine using a cloud storage bucket or cloud datastore for this, but that feels very manual.
If you're using Cloud Functions with Firebase, you're looking for environment configuration.
With that, you deploy configuration data from the Firebase CLI:
firebase functions:config:set someservice.key="THE API KEY" someservice.id="THE CLIENT ID"
And then read it in your functions with:
functions.config().someservice.id
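For instance, inside an HTTPS function this might look like the following (the function and field names here are just placeholders):

const functions = require('firebase-functions');

// Values were set with: firebase functions:config:set someservice.key="..." someservice.id="..."
const serviceKey = functions.config().someservice.key;
const serviceId = functions.config().someservice.id;

exports.callSomeService = functions.https.onRequest((req, res) => {
  // Use the configured credentials here instead of hard-coding them
  res.send('Called someservice with the configured client id');
});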
You can use Google Secret Manager.
https://cloud.google.com/secret-manager/docs
See this article for an example:
https://dev.to/googlecloud/using-secrets-in-google-cloud-functions-5aem
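A minimal Node.js sketch of reading a secret from inside a Cloud Function with the Secret Manager client library (the project, secret name, and function name are placeholders):

const {SecretManagerServiceClient} = require('@google-cloud/secret-manager');

const client = new SecretManagerServiceClient();

async function getApiKey() {
  // projects/<project-id>/secrets/<secret-name>/versions/latest
  const [version] = await client.accessSecretVersion({
    name: 'projects/my-project/secrets/my-api-key/versions/latest',
  });
  return version.payload.data.toString('utf8');
}

exports.myFunction = async (req, res) => {
  const apiKey = await getApiKey();
  // ...call the third-party API with apiKey...
  res.status(200).send('ok');
};

The function's runtime service account also needs the Secret Manager Secret Accessor role on that secret.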
The other answers are outdated. Since firebase-functions v3.18.0, the recommended way is to use secrets (like environment variables, but passed explicitly to specific functions and stored in remote configuration rather than a .env file): https://firebase.google.com/docs/functions/config-env?hl=en#secret-manager
Before environment variable support was released in firebase-functions v3.18.0, using functions.config() was the recommended approach for environment configuration. This approach is still supported, but we recommend all new projects use environment variables instead, as they are simpler to use and improve the portability of your code.
You can use it like this:
firebase functions:secrets:set MY_SECRET
Then enter a value when the CLI prompts for it.
And then in your function:
exports.processPayment = functions
  // Make the secret available to this function
  .runWith({ secrets: ["MY_SECRET"] })
  .onCall((data, context) => {
    // Now you have access to process.env.MY_SECRET
  });
A few options for going about this:
GCP Runtime Configuration. Set up a schema that can only be accessed by your service account and put your secret there; you do that prior to your deployment. Your app should be able to use the Runtime Configuration API to access these. You can use this nifty library: https://www.npmjs.com/package/@google-cloud/rcloadenv
Generate a JS file on the fly including that information as part of your Cloud Function build/deploy, and include that file as part of your Cloud Function.
Use Google KMS to store the keys and access them with the KMS API.
As of now, there's no way to do this.
https://issuetracker.google.com/issues/35907643
At our organization we follow a DSL (domain-specific language) model, where users can write tests in a spreadsheet and the underlying Java code interprets and executes those instructions.
Now here is the problem.
We have a single test method in our class which uses a data provider, reads all the test methods from the file, and executes the instructions.
Naturally, when Surefire executes and prints the results, it says:
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
Is there a way to manipulate this in TestNG such that each custom test method from the Excel file is picked up as a legitimate test method when the overall suite executes?
I actually made the group migrate from JUnit to TestNG, and now they are questioning whether the DataProvider feature can handle this, and I have no response for them :(
So essentially we want to break the binding to individual Java methods by using external data providers, but at the same time report the number of test methods executed as provided in the Excel spreadsheet.
If you can give me any direction it would be most helpful to me.
Attaching my spreadsheet here.
My Java file has only one test method:
@Test
public void runSuite() {
    // Read each test method from the file; I want the build server to somehow recognize each one as an individual test method
}
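For reference, one direction I've been sketching (untested, and the names are made up): feed each spreadsheet row through a @DataProvider so TestNG invokes and reports one test per row, and implement ITest so each invocation shows up under the spreadsheet's test name:

import java.lang.reflect.Method;

import org.testng.ITest;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class SpreadsheetSuite implements ITest {

    private final ThreadLocal<String> currentName = new ThreadLocal<>();

    @DataProvider(name = "spreadsheetRows")
    public Object[][] spreadsheetRows() {
        // Placeholder: replace with the code that reads the rows from the Excel file.
        // Each row returned here becomes one separately reported test.
        return new Object[][] {
                {"Login succeeds", "login;user;password"},
                {"Search returns results", "search;widgets"},
        };
    }

    @BeforeMethod
    public void rememberTestName(Method method, Object[] row) {
        // Use the first spreadsheet column as the reported test name
        currentName.set(row.length > 0 ? String.valueOf(row[0]) : method.getName());
    }

    @Test(dataProvider = "spreadsheetRows")
    public void runScenario(String testName, String instructions) {
        // Hand the instruction string to the existing DSL interpreter here
    }

    @Override
    public String getTestName() {
        return currentName.get();
    }
}

With this, Surefire should report "Tests run: N" where N is the number of spreadsheet rows, instead of 1.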