I have the following code that I would like to execute. I have tried requiring mysql and node-mysql and they both give me the same error:
Code:
var AWS = require("aws-sdk");
var mysql = require("mysql");

exports.handler = (event, context, callback) => {
    try {
        console.log("GOOD");
    }
    catch (error) {
        context.fail(`Exception: ${error}`);
    }
};
Error:
{
    "errorMessage": "Cannot find module 'mysql'",
    "errorType": "Error",
    "stackTrace": [
        "Function.Module._load (module.js:417:25)",
        "Module.require (module.js:497:17)",
        "require (internal/module.js:20:19)",
        "Object.<anonymous> (/var/task/index.js:2:13)",
        "Module._compile (module.js:570:32)",
        "Object.Module._extensions..js (module.js:579:10)",
        "Module.load (module.js:487:32)",
        "tryModuleLoad (module.js:446:12)",
        "Function.Module._load (module.js:438:3)"
    ]
}
How do I import the mysql module into Node.js on Lambda, or otherwise get this to work?
OK, so this is expected to happen.
The problem is that AWS Lambda runs your code on a machine you cannot configure, so there is no way to install modules into its environment directly. You can, however, package the mysql (or node-mysql) Node module in a zip together with your code and upload that to AWS Lambda. The steps are (sketched on the command line after this list):
npm install mysql --save
Zip your folder, including your node_modules directory.
Upload this zip file as your code in AWS Lambda.
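A minimal command-line sketch of those three steps, assuming the AWS CLI is configured; the function name my-function is a placeholder, and you can equally upload the zip through the console:

npm install mysql --save
# zip the code together with the installed modules
zip -r function.zip index.js node_modules
# upload the zip as the function's code (function name is hypothetical)
aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip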
You can also take a better approach by using the Serverless Framework. In this approach, you write a YAML file that contains all the details and configuration you want to deploy your Lambda with. Under your Lambda's configuration, specify the path to your node modules (say, node_modules/**) under the package -> include section. This packages your required modules along with your code. Later, using the command line, you can deploy this Lambda. It uses the AWS CloudFormation service and is one of the most preferred ways of deploying resources.
More information on packaging using Serverless Framework can be found here.
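As an illustration, a minimal serverless.yml along those lines might look like this (the service name, function name, and runtime version are assumptions, not taken from the question):

service: mysql-lambda          # hypothetical service name

provider:
  name: aws
  runtime: nodejs12.x          # assumed runtime version

functions:
  hello:
    handler: index.handler     # maps to exports.handler in index.js

package:
  include:
    - node_modules/**          # bundle the installed modules with the code

Deploying is then typically a single serverless deploy from the project root.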
Note: To use the Serverless Framework there are a couple of setup steps, like getting API keys for your user and setting the right permissions in IAM. This is one-time initial setup and won't be needed later; do perform it before deploying with the Serverless Framework.
Hope this helps!
In case anybody needs an alternative:
You can use the Cloud9 IDE, which is free, to open the Lambda function and run npm init from the terminal window in the function's folder. This generates the package.json file, which can then be used to install dependencies.
If using package.json, simply add the below and run npm install:
{
  "dependencies": {
    "mysql": "2.12.0"
  }
}
I experienced this when using knex, although I had mysql in my package.json.
I had to require('mysql') in my lambda (or a file it references) so that Serverless packages it during deployment.
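A minimal sketch of that workaround, assuming knex with the mysql client (all connection details below are placeholders):

// Reference mysql explicitly so the deployment packager traces and bundles it.
const mysql = require('mysql');

const knex = require('knex')({
  client: 'mysql',
  connection: {
    host: 'localhost',      // placeholder
    user: 'user',           // placeholder
    password: 'secret',     // placeholder
    database: 'mydb'        // placeholder
  }
});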
I tried creating and deploying Oracle Cloud Functions by following the official documentation instructions. I can create and deploy using the Java runtime, but deploying with the Go runtime always returns an error.
I tried to init a Go function using this command in Oracle Cloud Shell:
fn init --runtime go hello-go
Then I tried to deploy it:
fn -v deploy --app test
But it returned an error like the one below:
Deploying hello-go to app: test
Bumped to version 0.0.7
Building image bom.ocir.io/bmptwl2psusa/repo/hello-go:0.0.7
FN_REGISTRY: bom.ocir.io/bmptwl2psusa/repo
Current Context: ap-mumbai-1
Sending build context to Docker daemon 5.632kB
Step 1/10 : FROM fnproject/go:dev as build-stage
---> 96c8fb94a8e1
Step 2/10 : WORKDIR /function
---> Using cache
---> 8961dd299ec1
Step 3/10 : WORKDIR /go/src/func/
---> Using cache
---> 5a4c2c6e13f1
Step 4/10 : ENV GO111MODULE=on
---> Using cache
---> 22022ff2fcf8
Step 5/10 : COPY . .
---> 714622a6ff03
Step 6/10 : RUN cd /go/src/func/ && go build -o func
---> Running in 39fedbc476f4
build func: cannot find module for path github.com/fnproject/fdk-go
The command '/bin/sh -c cd /go/src/func/ && go build -o func' returned a non-zero code: 1
Fn: error running docker build: exit status 1
When I use the Java runtime, with the fn init --runtime java hello-java command, it deploys successfully. Why does it always fail with Go?
I tried to run go build -o func in the hello-go directory, but it returned:
go: finding module for package github.com/fnproject/fdk-go
go: writing stat cache: mkdir /usr/share/gocode/pkg: permission denied
go: downloading github.com/fnproject/fdk-go v0.0.3
func.go:10:2: mkdir /usr/share/gocode/pkg: permission denied
I know this happens because the /usr/share/gocode/ directory is owned by root, but I don't know how to change the permissions on that folder, because Oracle Cloud Shell cannot use the root user or sudo (based on this answer).
Maybe I could do this from a real VM shell or a local shell/terminal, but I want to use Oracle Cloud Shell, since the official instructions suggested it. So how do I deploy Oracle Cloud Functions with the Go runtime using Oracle Cloud Shell?
Mostly the official documentation only gives examples using the Java runtime, which makes me wary of using Go.
This is a bug in Cloud Shell that we are figuring out the best way to solve.
As a short-term workaround you can do this once:
mkdir ${HOME}/gopath
Then set this in your terminal:
export GOPATH=${HOME}/gopath
You should probably edit your ~/.bashrc to set the GOPATH variable automatically so you don't forget, as sketched below.
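A one-time setup sketch along those lines (the directory name gopath is just the convention used above):

mkdir -p ${HOME}/gopath
# persist the variable for future sessions
echo 'export GOPATH=${HOME}/gopath' >> ${HOME}/.bashrc
# apply it to the current session
export GOPATH=${HOME}/gopath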
I used the following article to execute SSIS packages in parallel: https://www.sqlservercentral.com/articles/importing-files-in-parallel-with-ssis. The article explains executing a package from a folder location. In my situation I am deploying both packages. I tried the following code:
Application app = new Application();
Package pkg = app.LoadFromSqlServer(dtsxPackage, "localhost", null, null, null);
I am getting the error:
Cannot find folder "Package name"
The package deployment is as follows.
Using "ParallelExecusion.dtsx" I am trying to execute the "FileSync.dtsx" package. I am setting the package path as "FileSync\TeamR\FileSync.dtsx".
The code shown is for loading a package that is stored in the SQL Server database named msdb. It will use the binaries in sys.dtspackages90 or sys.ssispackages (table names approximate), but that only works for packages developed and deployed under the Package Deployment Model (2005-2008R2) or projects explicitly defined as such for SQL Server 2012+.
What your screenshot shows is the Project Deployment Model, which is an .ispac deployed to the SSISDB database. While that package is on SQL Server, you do not use the LoadFromSqlServer method. Instead, you're going to use the same-ish methods that the CLR methods in the database use:
CreateSsisServerExecution
Set any Parameter/Property values
Start
Personally, unless I had a strong use case where I needed to control every aspect of the package execution, I'd just use T-SQL here (and remove the class dependencies from your code) to execute SSIS packages, as sketched below.
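A hedged T-SQL sketch of that approach, reading the folder/project/package names from the path in the question (FileSync\TeamR\FileSync.dtsx); adjust them to your actual catalog layout:

DECLARE @execution_id BIGINT;

-- Create an execution for the package stored in the SSISDB catalog
EXEC SSISDB.catalog.create_execution
    @folder_name     = N'FileSync',
    @project_name    = N'TeamR',
    @package_name    = N'FileSync.dtsx',
    @use32bitruntime = 0,
    @execution_id    = @execution_id OUTPUT;

-- Optionally set parameter/property values here via catalog.set_execution_parameter_value

EXEC SSISDB.catalog.start_execution @execution_id;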
This is related to developing an extension for Windows Admin Center. Microsoft provides an SDK for developing extensions. Here is the detailed documentation I was following: https://learn.microsoft.com/en-us/windows-server/manage/windows-admin-center/extend/developing-extensions
Create tool extension:
Referring to the section "Prepare your development environment", I installed the prerequisites.
After that I moved to the next step, creating a tool using the Windows Admin Center CLI. I executed the following command:
wac create --company "Contoso Inc" --tool "Manage Foo Works"
But the system gives the following error:
const { readdir, stat } = require('fs').promises;
TypeError: Cannot destructure property readdir of 'undefined' or 'null'.
Is there something missing in my development environment setup?
Environment details
Windows 10 Professional,
npm@6.9.0,
node@v9.11.1,
angular cli: 6.1.5,
typescript 2.9.2
This is ES6 destructuring assignment.
The expression being destructured needs a fallback value: require('fs').promises is undefined on Node 9 (the fs.promises API was only added in Node 10), which is why the destructuring throws. Upgrading Node.js is the proper fix, but as a stopgap you can default to an empty object, like this:
const { readdir, stat } = require('fs').promises || {};
You can edit update-version.js, which you can find at:
C:\Users\\AppData\Roaming\npm\node_modules\windows-admin-center-cli\src\update-version.js
Refer to the following link to learn more about destructuring assignment:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment
This issue is the same as:
JS/ES6: Destructuring of undefined
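A minimal illustration of the behavior and the workaround (plain Node.js; the variable names are made up):

// Destructuring null/undefined throws, exactly as in the error above:
const api = undefined;
// const { readdir, stat } = api;   // TypeError: Cannot destructure ...

// Defaulting to an empty object avoids the throw, but the properties are
// then undefined, so the real fix is a Node version that has fs.promises.
const { readdir, stat } = api || {};
console.log(readdir, stat);         // prints: undefined undefined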
I'm running an EMR with a spark cluster on AWS.
Spark version is 1.6
When running the following command:
proxy = sqlContext.read.load("/user/zeppelin/ProxyRaw.csv",
                             format="com.databricks.spark.csv",
                             header="true",
                             inferSchema="true")
I get the following error:
Py4JJavaError: An error occurred while calling o162.load.
: java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at
http://spark-packages.org
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:77)
How can I solve this? I assume I should add a package, but how do I install it, and where?
There are many ways to add packages in Zeppelin:
One of them is to change the conf/zeppelin-env.sh configuration file, adding the package you need (e.g. com.databricks:spark-csv_2.10:1.4.0 in your case) to the submit options, since Zeppelin uses the spark-submit command under the hood:
export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.4.0"
But let's say that you don't actually have access to that configuration. You can then use dynamic dependency loading via the %dep interpreter (deprecated):
%dep
z.load("com.databricks:spark-csv_2.10:1.4.0")
This will require that you load the dependencies before launching or restarting the interpreter.
Another way to do it is to add the dependency you need via the interpreter dependency manager, as described in the following link: Dependency Management for Interpreter.
Well, first you need to download the CSV lib from the Maven repository:
https://mvnrepository.com/artifact/com.databricks/spark-csv_2.10/1.5.0
Check the Scala version that you are using: either 2.10 or 2.11, and pick the matching artifact.
When you call spark-shell, spark-submit, or pyspark, or even your Zeppelin, you need to add the --jars option with the path to your lib.
Like this:
pyspark --jars /path/to/jar/spark-csv_2.10-1.5.0.jar
Then you can call it as you did above.
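Put together, a sketch of the whole flow (the jar path is a placeholder):

pyspark --jars /path/to/jar/spark-csv_2.10-1.5.0.jar

# then, inside the shell, the original call should resolve the data source:
proxy = sqlContext.read.load("/user/zeppelin/ProxyRaw.csv",
                             format="com.databricks.spark.csv",
                             header="true",
                             inferSchema="true")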
You can see other close issue here: How to add third party java jars for use in pyspark
I am trying to deploy a Dancer-based application on OpenShift. I am unable to work around the following issues.
How do I get Dancer to use the OpenShift environment variables, e.g. OPENSHIFT_MYSQL_DB_HOST or OPENSHIFT_DATA_DIR? Putting them in the config.yml files is not working; I tried $OPENSHIFT_DATA_DIR and $ENV{OPENSHIFT_DATA_DIR}. Overriding them in the application code is not working either...
Does OpenShift store the console log somewhere? rhc tail does not provide the complete output...
Is it possible to run the app on the server from an SSH shell? I tried it but am getting a permission denied error.
Dancer is a Perl-based web framework; see https://metacpan.org/pod/Dancer::Cookbook
I am not sure what a Dancer application is, but for a Java application with a MySQL DB on OpenShift, you access it with the following code.
Import the following:
import java.sql.Connection;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;
Then use the following to obtain a connection:
InitialContext ic = new InitialContext();
Context initialContext = (Context) ic.lookup("java:comp/env");
DataSource dataSource = (DataSource) initialContext.lookup("jdbc/MysqlDS");
Connection connection = dataSource.getConnection();
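For completeness, a hedged sketch of wrapping that lookup into a method and running a trivial query (the class and method names are made up; this assumes the code runs inside the container where java:comp/env is bound):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DbCheck {
    // Runs a trivial query against the container-managed MySQL DataSource.
    public static int ping() throws Exception {
        Context initialContext = (Context) new InitialContext().lookup("java:comp/env");
        DataSource dataSource = (DataSource) initialContext.lookup("jdbc/MysqlDS");
        try (Connection connection = dataSource.getConnection();
             PreparedStatement ps = connection.prepareStatement("SELECT 1");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1); // 1 if the DataSource is wired up correctly
        }
    }
}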
I hope this helps.