Chrome Failed to Start Native Messaging Host on Mac - google-chrome

Just to preface, I'm very new to all of this so please let me know if I'm missing anything fundamental or if I'm even going about this the right way.
I'm trying to make a chrome extension that reads the sender of an email in Gmail, then starts a python script (fmconnect.py) and sends the name of that sender to the script. I am able to retrieve the name of the sender using gmail.js, but when I try to send it to the python script, I keep receiving the error:
Unchecked runtime.lastError: Failed to start native messaging host.
Here are all the relevant files:
background.js
chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
  console.log(request);
  sendResponse({message: "Message received"});
  if (request.length > 0) {
    const hostName = "com.google.chrome.example.echo";
    var port = chrome.runtime.connectNative(hostName);
    if (port) {
      port.postMessage({text: request});
    }
  }
  return true;
});
relevant part of manifest.json
"permissions": [
"https://*/*",
"http://localhost/",
"nativeMessaging",
"background",
"tabs"
]
fmconnect.py
import sys

def read_thread_func():
    text_length_bytes = sys.stdin.read(4)
    sys.stdout.write(text_length_bytes)

def Main():
    read_thread_func()
    sys.exit(0)

if __name__ == '__main__':
    Main()
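For context on what the host is expected to do with those bytes: Chrome's native messaging protocol frames every message as a 32-bit message length in native byte order followed by that many bytes of UTF-8 JSON, on both stdin and stdout. A minimal Python 3 sketch of an echo host using that framing (illustrative only; the names here are not from the original script):
#!/usr/bin/env python
# Minimal sketch of a native messaging host: read one length-prefixed JSON
# message from Chrome and echo it back with the same framing.
import json
import struct
import sys

def read_message():
    raw_length = sys.stdin.buffer.read(4)
    if len(raw_length) < 4:
        sys.exit(0)  # Chrome closed the pipe
    length = struct.unpack('=I', raw_length)[0]
    return json.loads(sys.stdin.buffer.read(length).decode('utf-8'))

def send_message(message):
    encoded = json.dumps(message).encode('utf-8')
    sys.stdout.buffer.write(struct.pack('=I', len(encoded)))
    sys.stdout.buffer.write(encoded)
    sys.stdout.buffer.flush()

if __name__ == '__main__':
    msg = read_message()
    send_message({'echo': msg})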
com.google.chrome.example.echo.json
{
  "name": "com.google.chrome.example.echo",
  "description": "Chrome Native Messaging API Host",
  "path": "HOST_PATH",
  "type": "stdio",
  "allowed_origins": [
    "chrome-extension://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/"
  ]
}
install_host.sh
set -e

DIR="$( cd "$( dirname "$0" )" && pwd )"

if [ "$(uname -s)" = "Darwin" ]; then
  if [ "$(whoami)" = "root" ]; then
    TARGET_DIR="/Library/Google/Chrome/NativeMessagingHosts"
  else
    TARGET_DIR="$HOME/Library/Application Support/Google/Chrome/NativeMessagingHosts"
  fi
else
  if [ "$(whoami)" = "root" ]; then
    TARGET_DIR="/etc/opt/chrome/native-messaging-hosts"
  else
    TARGET_DIR="$HOME/.config/google-chrome/NativeMessagingHosts"
  fi
fi

HOST_NAME=com.google.chrome.example.echo

# Create directory to store native messaging host.
mkdir -p "$TARGET_DIR"

# Copy native messaging host manifest.
cp "$DIR/$HOST_NAME.json" "$TARGET_DIR"

# Update host path in the manifest.
HOST_PATH=$DIR/fmconnect.py
ESCAPED_HOST_PATH=${HOST_PATH////\\/}
sed -i -e "s/HOST_PATH/$ESCAPED_HOST_PATH/" "$TARGET_DIR/$HOST_NAME.json"

# Set permissions for the manifest so that all users can read it.
chmod o+r "$TARGET_DIR/$HOST_NAME.json"

echo "Native messaging host $HOST_NAME has been installed."
I know the native messaging host documentation says to check whether I have sufficient permissions to start the file. How exactly would I do this?
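One way to check on macOS is to confirm that the file named in the host manifest's "path" field is executable by the user running Chrome and that its first line is a shebang (e.g. #!/usr/bin/env python), since Chrome launches that file directly. A small sketch of such a check; the path below is a placeholder for whatever was substituted for HOST_PATH:
import os
import stat

# Placeholder path: use the absolute path written into the manifest's "path" field.
host_path = "/path/to/fmconnect.py"

st = os.stat(host_path)
print("executable by owner:", bool(st.st_mode & stat.S_IXUSR))

with open(host_path) as f:
    print("first line (should be a shebang):", f.readline().rstrip())

# If the execute bit is missing, add it (the equivalent of `chmod +x`).
if not (st.st_mode & stat.S_IXUSR):
    os.chmod(host_path, st.st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    print("execute permission added")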

Related

Some internet url or IP are not reachable through Cloud Function

I tried to make some http requests through GCP's Cloud Function with Python 3.10 runtime, some went well, and some went wrong.
To find out the reason, I pinged the IP of each URL, and the IP I need gives no response:
ping ip(173.194.217.106) of url(https://www.google.com): True
ping ip(69.147.92.11) of url(https://www.yahoo.com): True
ping ip(13.107.42.14) of url(https://www.linkedin.com): True
ping ip(117.56.7.114) of url(https://data.moi.gov.tw): False
Is there any way to make a successful request to https://data.moi.gov.tw through Cloud Function?
Here are the materials to reproduce the results with Cloud Function (Gen1):
main.py:
import platform # For getting the operating system name
import subprocess # For executing a shell command
import requests
def ping(host):
    """
    Returns True if host (str) responds to a ping request.
    Remember that a host may not respond to a ping (ICMP) request even if the host name is valid.
    """
    # Option for the number of packets as a function of
    param = '-n' if platform.system().lower() == 'windows' else '-c'
    # Building the command. Ex: "ping -c 1 google.com"
    command = ['ping', param, '1', host]
    return subprocess.call(command) == 0

def main(event):
    d_ip_url = {
        '173.194.217.106': 'https://www.google.com',
        '69.147.92.11': 'https://www.yahoo.com',
        '13.107.42.14': 'https://www.linkedin.com',
        '117.56.7.114': 'https://data.moi.gov.tw',
    }
    for ip, url in d_ip_url.items():
        print(f'ping ip({ip}) of url({url}):', ping(ip))
requirements.txt:
# Function dependencies, for example:
# package>=version
requests
The Cloud Function settings:
{
  "name": "projects/corgis-361708/locations/asia-east1/functions/ping-test",
  "httpsTrigger": {
    "url": "https://asia-east1-corgis-361708.cloudfunctions.net/ping-test",
    "securityLevel": "SECURE_ALWAYS"
  },
  "status": "ACTIVE",
  "entryPoint": "main",
  "timeout": "60s",
  "availableMemoryMb": 256,
  "serviceAccountEmail": "corgis-361708@appspot.gserviceaccount.com",
  "updateTime": "2022-09-21T06:04:31.746Z",
  "versionId": "2",
  "labels": {
    "deployment-tool": "console-cloud"
  },
  "sourceUploadUrl": "https://storage.googleapis.com/uploads-918581105162.asia-east1.cloudfunctions.appspot.com/78ad8f77-d16c-412c-843f-51238703fbbf.zip",
  "runtime": "python310",
  "maxInstances": 1,
  "ingressSettings": "ALLOW_ALL",
  "buildId": "0c8bf5d0-3467-4516-8fea-c39d0e093c2e",
  "buildName": "projects/647355426154/locations/asia-east1/builds/0c8bf5d0-3467-4516-8fea-c39d0e093c2e",
  "dockerRegistry": "CONTAINER_REGISTRY"
}
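A possibly useful extra diagnostic here is to test the failing host at the HTTP level from inside the function as well, since ICMP and TCP reachability do not always agree; a host may drop pings yet still serve HTTPS, or answer pings while the HTTPS port is blocked. A minimal sketch reusing the requests dependency already listed in requirements.txt:
import requests

def http_check(url, timeout=10):
    """Return the HTTP status code, or the error message if the request fails."""
    try:
        return requests.get(url, timeout=timeout).status_code
    except requests.exceptions.RequestException as exc:
        return f'failed: {exc}'

def main(event):
    for url in ['https://www.google.com', 'https://data.moi.gov.tw']:
        print(url, http_check(url))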

Why can I install this Jelastic manifest through the dashboard import function but not through the Jelastic API?

I have the following very simple manifest:
type: install
name: very simple manifest
onInstall:
- log: installing manifest
I can install it from the Jelastic Dashboard. There is an import function in the main menu where I can copy / paste that manifest content and it gets installed. In the Jelastic console, I can see
[15:36:38 manifest.settings]: BEGIN INSTALLATION: very simple manifest
[15:36:39 manifest.settings]: BEGIN HANDLE EVENT: {"topic":"application/install","envAppid":""}
[15:36:39 manifest.settings:1]:> installing manifest
[15:36:39 manifest.settings]: END HANDLE EVENT: application/install
[15:36:39 manifest.settings]: END INSTALLATION: very simple manifest
and the Jelastic dashboard confirms installation.
Now, when I do the same, but via the Jelastic REST API, i.e. using the endpoint
http://my-jelastic-provide.com/1.0/marketplace/jps/REST/install
with the relevant data, it doesn't install. Instead, I get the strange error message
Can't find environment by domain [jelasticclient-master-0954606]
where jelasticclient-master-0954606 is the envName I set.
However, if I change my manifest to e.g.
type: install
name: very simple manifest
nodes:
  count: 1
  cloudlets: 4
  nodeGroup: cp
  image: alpine:latest
skipNodeEmails: true
onInstall:
- log: installing manifest
then it installs perfectly. What am I missing?
I am using Jelastic v6.0.2.
Your "very simple manifest" doesn't suppose any environment name to be passed.
That's why when you pass it you get an error "Can't find environment by domain [domain-name]" (Example1).
When you don't have the "nodes" parameter in the manifest (as in your second example), you shouldn't pass any environment name (Example2) or should pass the existing environment name (response is in Example3).
Example1:
curl -X POST 'https://jca.host-domain/1.0/marketplace/jps/rest/install' \
-d 'envName=jelasticclient-master-0954606' \
-d session=*** \
-d skipNodeEmails=1 \
-d ownerUid=UID \
--data-urlencode 'jps={ "type": "install", "name": "very simple manifest", "onInstall": [ { "log": "installing manifest" } ] }'
The response is:
{"result":11,"response":{"result":11,"source":"JEL","error":"domain [jelasticclient-master-0954606] doesn't exist"},"source":"JEL","error":"domain [jelasticclient-master-0954606] doesn't exist"}
When the environment name is not passed (Example2),
curl -X POST 'https://jca.host-domain/1.0/marketplace/jps/rest/install' \
-d session=*** \
-d skipNodeEmails=1 \
-d ownerUid=UID \
--data-urlencode 'jps={ "type": "install", "name": "very simple manifest", "onInstall": [ { "log": "installing manifest" } ] }'
the response is
{"result":0,"uniqueName":"3c819586-2ef7-4691-9faa-d3059459d20e","response":{"result":0,"uniqueName":"3c819586-2ef7-4691-9faa-d3059459d20e","successText":"","appid":""},"appid":"","successText":""}
When the environment with envName=jelasticclient-master-0954606 already exists, the response to the same request as in Example1 is as follows (Example3):
{"result":0,"uniqueName":"b52a8db9-8850-4b66-958a-3dee3345b923","response":{"result":0,"uniqueName":"b52a8db9-8850-4b66-958a-3dee3345b923","successText":"","appid":"7b0c465f6c9573b8d8ce3ed59591781b"},"appid":"7b0c465f6c9573b8d8ce3ed59591781b","successText":""}
In other words, if you pass an environment name when deploying this "very simple manifest", it is installed like an add-on (because it has no "nodes" parameter), but there is no existing environment "jelasticclient-master-0954606" to install that add-on into.
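For completeness, the same Example2-style request can also be issued from code; a sketch in Python with requests, where the endpoint and field names are copied from the curl examples above and session / ownerUid remain placeholders:
import requests

JPS = '{ "type": "install", "name": "very simple manifest", "onInstall": [ { "log": "installing manifest" } ] }'

resp = requests.post(
    'https://jca.host-domain/1.0/marketplace/jps/rest/install',
    data={
        'session': '***',    # placeholder, as in the curl examples
        'skipNodeEmails': 1,
        'ownerUid': 'UID',   # placeholder
        'jps': JPS,
        # 'envName': '...',  # only pass this once the target environment exists
    },
)
print(resp.json())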

LoopBack does not read port property from config.json (or other config files)

So I want to run multiple LoopBacks listening on different ports (it makes dev easier). I can achieve this by using PORT=808x node ., but I would prefer a configured alternative.
When I tried to use the configs, I noticed strange behavior. Other configs, such as restApiRoot, match whatever I write in server/config.json, but the port is always 8080 unless I use env variables or similar. I checked the documentation for all configuration files LoopBack reads; none of them has a new value for port. Where does that port value come from? How can I force it to use the one in server/config.json or a similar official configuration file?
UPDATE: My server/server.js, server/config.json and package.json files are below.
When I start this with the node . command, the port variable is 8080 instead of 8082, and when I wget, the response (404) comes from 8080, while 8082 gives no response, as there is no server serving that port.
package.json
{
  "name": "external-server",
  "version": "1.0.0",
  "main": "server/server.js",
  "scripts": {
    "pretest": "jshint ."
  },
  "dependencies": {
    "compression": "^1.0.3",
    "cors": "^2.5.2",
    "loopback": "^2.22.0",
    "loopback-boot": "^2.6.5",
    "loopback-component-explorer": "^2.1.0",
    "loopback-connector-mysql": "^2.4.1",
    "loopback-datasource-juggler": "^2.39.0",
    "serve-favicon": "^2.0.1"
  },
  "devDependencies": {
    "jshint": "^2.5.6"
  }
}
server/server.js
var loopback = require('loopback');
var boot = require('loopback-boot');

var app = module.exports = loopback();

app.start = function() {
  // start the web server
  return app.listen(function() {
    app.emit('started');
    console.log(app.get('port'));
    var baseUrl = app.get('url').replace(/\/$/, '');
    console.log('Web server listening at: %s', baseUrl);
    if (app.get('loopback-component-explorer')) {
      var explorerPath = app.get('loopback-component-explorer').mountPath;
      console.log('Browse your REST API at %s%s', baseUrl, explorerPath);
    }
  });
};

// Bootstrap the application, configure models, datasources and middleware.
// Sub-apps like REST API are mounted via boot scripts.
boot(app, __dirname, function(err) {
  if (err) throw err;
  // start the server if `$ node server.js`
  if (require.main === module)
    app.start();
});
server/config.json
{
  "restApiRoot": "/api",
  "host": "0.0.0.0",
  "port": 8082
}
Okay, it seems that when I run LoopBack with sudo, the configuration files are applied. Very confusing.
So the command node . leads to the wrong port, while sudo node . reads the port from server/config.json. Other configuration parameters are read correctly even without sudo; for some reason PORT is a special case. There is a related answer which shows that this is a Node.js + Express issue, not a LoopBack issue.

can't watch multiple files with json-server

I've read about the fake json-server and I'd like to watch more than one file.
The instructions list
--watch, -w
Watch file(s)
but I'm not able to make it work if I launch it as
json-server -w one.json two.json more.json
Create the files as shown below.
db.js
var firstRoute = require('./jsonfile1.json');
var secondRoute = require('./jsonfile2.json');
var thirdRoute = require('./jsonfile3.json');
var fourthRoute = require('./jsonfile4.json');
// and so on

module.exports = function() {
  return {
    firstRoute: firstRoute,
    secondRoute: secondRoute,
    thirdRoute: thirdRoute,
    fourthRoute: fourthRoute
    // and so on
  };
};
server.js
var jsonServer = require('json-server')
var server = jsonServer.create()
var router = jsonServer.router(require('./db.js')())
var middlewares = jsonServer.defaults()

server.use(middlewares)
server.use(router)
server.listen(3000, function () {
  console.log('JSON Server is running')
})
Now go to the directory where you created both of these files, open a command line, and run
node server.js
That's it. Now open localhost:3000 in the browser; you will see the routes created for the different files and can use them directly.
You can open multiple ports for different json files with json-server. In my case I open multiple cmd windows and launch it as:
json-server --watch one.json -p 4000
json-server --watch two.json -p 5000
json-server --watch more.json -p 6000
One per cmd window; this works for me.
It can only watch one file. You have to put all the info you need into the same file. So if you need cars for one call and clients for another, you would add a few objects from each into one file. It's unfortunate, but it's just supposed to be a very simple server.
1 - Create a database file, e.g. db.json
{
  "products": [
    {
      "id": 1,
      "name": "Caneta BIC Preta",
      "price": 2500.5
    }
  ],
  "users": [
    {
      "id": 1,
      "name": "Derson Ussuale",
      "password": "test"
    }
  ]
}
2 - In package.json, inside scripts:
"scripts": {
  "start": "json-server --watch db.json --port 3001"
},
3 - Finally run the command npm start
Resources
http://localhost:3001/products
http://localhost:3001/users
Home
http://localhost:3001
You can do that like this:
Step 1: Install concurrently
npm i concurrently --save-dev
Step 2: Create multiple json files, for example db-users.json, db-companies.json, and so on
Step 3: Add command line to your package.json scripts, for an example:
servers: "concurrently --kill-others \"json-server --host 0.0.0.0 --watch db-users.json --port 3000\" \"json-server --host 0.0.0.0 --watch db-companies.json --port 3001\""
Step 4: Now you can run npm run servers to run your multiple json files.
After that, you can access your servers at localhost:3000 and localhost:3001, or via your network IP address.
Note: You can add more files and more commands to your package.json scripts.
That's it.
As you can watch only one file at a time (because it is the database), you can first read the database file and then append new data to the database JSON:
const fs = require('fs');
const path = require('path');
const jsf = require('json-schema-faker'); // assuming jsf refers to json-schema-faker

const mockData = jsf(mockDataSchema);
const dataBaseFilePath = path.resolve(__dirname, {YOUR_DATABASE_FILE});
fs.readFile(dataBaseFilePath, (err, dbData) => {
  const json = JSON.parse(dbData);
  resultData = JSON.stringify(Object.assign(json, mockData));
  fs.writeFile(dataBaseFilePath, resultData, (err) => {
    if (err) {
      return console.log(err);
    }
    return console.log('Mock data generated.');
  });
});
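If the goal is simply to serve several source files from a single json-server instance, another option along the same lines is to merge them into one db.json up front; a small sketch (the file names are the ones from the question, and each file is assumed to be a JSON object whose top-level keys become routes):
import json

sources = ["one.json", "two.json", "more.json"]

merged = {}
for name in sources:
    with open(name) as f:
        # assumes each source file is a JSON object of route -> data
        merged.update(json.load(f))

with open("db.json", "w") as f:
    json.dump(merged, f, indent=2)

# then run: json-server --watch db.json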

Problems with cosmos auth and Identity manager integration

I want to integrate cosmos-auth with the IdM GE.
The config for the node.js application is:
{
  "host": "192.168.4.180",
  "port": 13000,
  "private_key_file": "key.pem",
  "certificate_file": "cert.pem",
  "idm": {
    "host": "192.168.4.33",
    "port": "443",
    "path": "/oauth2/token"
  },
  "cosmos_app": {
    "client_id": "0434fdf60897479588c3c31cfc957b6d",
    "client_secret": "a7c3540aa5de4de3a0b1c52a606b82df"
  },
  "log": {
    "file_name": "/var/log/cosmos/cosmos-auth/cosmos-auth.log",
    "date_pattern": ".dd-MM-yyyy"
  }
}
When I send an HTTP POST request directly to the IdM GE at the URL
https://192.168.4.33:443/oauth2/token
with the required parameters, I get OK results:
{
access_token: "LyZT5DRGSn0F8IKqYU8EmRFTLo1iPJ"
token_type: "Bearer"
expires_in: 3600
refresh_token: "XiyfKCHrIVyludabjaCyGqVsTkx8Sf"
}
But when I curl the cosmos-auth node.js application
curl -X POST "https://192.168.4.180:13000/cosmos-auth/v1/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=password&username=idm&password=idm" -k
I get the following result:
{"statusCode":500,"error":"Internal Server Error","message":"An internal server error occurred"}
Has anyone encountered something similar?
What could be the problem?
The error I made was using an unsigned certificate. How clumsy of me.
So either sign the certificate or insert an additional element in the options object (rejectUnauthorized: false):
var options = {
  host: host,
  port: port,
  path: path,
  method: method,
  headers: headers,
  rejectUnauthorized: false
};
or, at the beginning of the file, insert:
process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';
Of course this is only a temporary solution until we use a fully signed cert.
Anyway, the error handling and logs in the cosmos-auth node.js app should show a little bit more.