This works fine:
var resp = UrlFetchApp.fetch("someremotehost/SomeFile.csv");
resp.getResponseCode(); //returns 200
resp.getContentText(); //returns the data
However, on my local machine I'm running XAMPP with SomeFile.csv located in htdocs/dev, but I cannot get it to work on localhost:
var resp = UrlFetchApp.fetch("localhost/dev/SomeFile.csv");
resp.getResponseCode(); //returns 0.0
resp.getContentText() //returns nothing!
I checked with the Postman Chrome extension and http://localhost/dev/SomeFile.csv works fine, so why does UrlFetchApp.fetch("http://localhost/dev/SomeFile.csv") not work?
That won't work because Apps Script executes code server-side (on Google's servers), so it cannot reach your localhost. The only way to do this is to make an HtmlService app and fetch the file with AJAX from the frontend.
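As a rough sketch of that approach (the file name 'index' and the CSV path are illustrative), the server side only serves the page:
function doGet() {
  // serve index.html from the same Apps Script project
  return HtmlService.createHtmlOutputFromFile('index');
}
A script inside index.html then runs in your browser, which can reach localhost (depending on your setup you may also need to allow CORS on the local server):
// client-side code inside index.html; this runs on your machine, not on Google's servers
fetch('http://localhost/dev/SomeFile.csv')
  .then(function (resp) { return resp.text(); })
  .then(function (csv) {
    console.log(csv); // do something with the CSV here
  });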
Use a proxy. I had the same issue, where my app was on an AWS EC2 instance.
For that, I used nginx as a proxy server and it worked.
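A minimal sketch of that kind of nginx setup (the server name and upstream port below are placeholders, not my exact config). The proxy just forwards requests arriving on a publicly reachable host to whatever is served locally, so UrlFetchApp can fetch through the public URL:
server {
    listen 80;
    server_name files.example.com;        # placeholder: your public hostname
    location / {
        # forward requests to the server running locally on this machine
        proxy_pass http://127.0.0.1:8080;  # placeholder: your local app or file server
    }
}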
I am developing a web crawler that can render JavaScript websites, so I decided to use PuppeteerSharp, a .NET port of the popular Node.js headless Chrome API Puppeteer. I am running Service Fabric's local development cluster on a Windows 10 development machine and have one stateless service in my solution.
I've created a Data folder under the service project's PackageRoot folder and put the .local-chromium folder contents there (it contains the chrome.exe executable) so it deploys as an independent data package of the service.
I've also placed this XML config line in the ServiceManifest.xml file:
<DataPackage Name="Data" Version="1.0.0" />
So far it looks good and the headless browser content is copied to the SF cluster's Data package directory properly.
Then in my stateless service code I try to launch the Puppeteer Chromium executable as follows:
var browser = await Puppeteer.LaunchAsync(new LaunchOptions
{
    Headless = true,
    ExecutablePath = _chromiumPath // $@"{context.CodePackageActivationContext.GetDataPackageObject("Data").Path}\.local-chromium\Win64-706915\chrome-win\chrome.exe"
});
using (var page = (await browser.NewPageAsync()))
{
    Response renderResponse;
    try
    {
        renderResponse = await page.GoToAsync(webPage.AbsoluteUri, timeout);
        if (renderResponse.Status != System.Net.HttpStatusCode.OK)
        {
            return new RenderResult(RenderStatus.OtherFailure);
        }
        // other code
    }
    catch (TimeoutException)
    {
        return new RenderResult(RenderStatus.Timeouted);
    }
}
On this line: using (var page = (await browser.NewPageAsync())) my code (thread) simply hangs without returning; in the Debug console I see many thread exits, but no exception occurs. I was previously getting System.IO.FileNotFoundException while fixing some other errors around copying the Chromium folder contents properly, but those errors are gone now, so it seems the code finds the .exe but somehow cannot start headless mode of PuppeteerSharp.
Does that mean that I cannot simply run an external Chromium .exe binary with Service Fabric's native application model? Should I use Docker and Linux containers instead?
I am using the gulp-aemsync plugin to sync my CSS and JS changes to a clientlib on an AEM instance. I have a gulp task watching the JS and CSS that runs gulp-aemsync fine (changes are on the site when I refresh), but being a bit lazy as I am, it would be nice to get live reload working so that I never have to manually refresh the page while working.
I have tried to follow both of these online guides:
https://adobe-consulting-services.github.io/acs-aem-tools/features/live-reload/index.html
https://www.cognifide.com/our-blogs/cq/up-and-running-with-livereload-in-adobe-aem6
I followed these steps:
installing the Netty package on the AEM instance
installing the ACS AEM Tools package on the AEM instance
installing the RemoteLiveReload Chrome extension (the AEM instance is hosted on AWS)
That didn't work, so I got one of our DevOps engineers to open port 35729 (the default for LiveReload) on the AEM instance. That still doesn't work, and when I click the Chrome browser extension to sync it I get the following message:
Could not connect to LiveReload server. Please make sure that LiveReload 2.3 (or later) or another compatible server is running.
Can anyone help me figure this out, as I'd really like to get it working to streamline my workflow?
Thanks
DISCLAIMER: This answer is based on a setup I had working at some point and is by no means a complete/working answer, but it should give you an alternative to the other tools that exist and get you halfway there.
I have not used the tools you are mentioning, but since you are using gulp and aemsync, you could do the following:
In your gulp setup, create a WebSocket server and make that server publish a message every time aemsync is triggered to push content to AEM.
// start a websocket server
const WebSocket = require('ws'); // requires "npm install ws"
const wss = new WebSocket.Server({ port: 8081 });
const connections = [];
wss.on('connection', function connection(ws) {
connections.push(ws); // keep track of all clients
// send any new messages that come to this server, to all connected clients
ws.on('message', (d) => connections.forEach(connection => connection.send(d)));
});
// create a new websocket to send messages to the websocket server above
const ws = new WebSocket('ws://localhost:8081');
// send a reload message to the server every second
// NOTE: CHANGE this to run when aemsync is triggered in your build
setInterval( () => ws.send('reload'), 1000 );
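Instead of the setInterval above, you can send the message from your gulp watch task right after the aemsync task finishes. A rough sketch, assuming gulp 4 and an existing task named 'aemsync' (adjust the globs and task name to your build):
const gulp = require('gulp');
const WebSocket = require('ws');

function notifyReload(done) {
  // connect to the websocket server started above and tell all clients to reload
  const ws = new WebSocket('ws://localhost:8081');
  ws.on('open', () => {
    ws.send('reload');
    ws.close();
    done();
  });
}

gulp.task('watch', () => {
  // after every successful sync, notify the browser(s)
  gulp.watch(['src/**/*.js', 'src/**/*.css'], gulp.series('aemsync', notifyReload));
});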
Then in your JS code (on AEM), or really in a <script> tag that you make sure will NOT make it beyond your local (or dev) environment and into prod, you can set up a WebSocket listener to refresh the page:
const socket = new WebSocket('ws://localhost:8081');
socket.onopen = () => console.log('livereload socket open');     // replace with your own handler if needed
socket.onclose = () => console.log('livereload socket closed');  // replace with your own handler if needed
socket.onerror = (err) => console.error('livereload socket error', err);
// listen to messages and reload!
socket.addEventListener('message', function (event) {
location.reload();
});
Alternatively, you could use the chrome plugin I've developed:
https://github.com/ahmed-musallam/websocket-refresh-chrome-ext
It's not perfect by any means. However, for a basic setup it should work great, and you don't need to touch your AEM JS.
I am developing a cross-platform app with NativeScript and Firebase, and I have some Cloud Functions triggered onCreate and onWrite. When the functions are "cold" (after a longer period of inactivity), I get this error most of the time and the function fails to execute properly. Subsequent requests do work, though.
Error: Unexpected error while acquiring application default credentials: Could not load the default credentials. Browse to https://developers.google.com/accounts/docs/application-default-credentials for more information.
at GoogleAuth.<anonymous> (/user_code/node_modules/firebase-admin/node_modules/google-auth-library/build/src/auth/googleauth.js:229:31)
at step (/user_code/node_modules/firebase-admin/node_modules/google-auth-library/build/src/auth/googleauth.js:47:23)
at Object.next (/user_code/node_modules/firebase-admin/node_modules/google-auth-library/build/src/auth/googleauth.js:28:53)
at fulfilled (/user_code/node_modules/firebase-admin/node_modules/google-auth-library/build/src/auth/googleauth.js:19:58)
at process._tickDomainCallback (internal/process/next_tick.js:135:7)
My first three lines of functions.js look like this (as in the documentation):
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);
I've tried using a service account file generated from the console (as described here), but then I get a different error, which I guess is because I'm running it on Google's servers and not hosting it myself.
Any idea why this is happening and how I can prevent it?
With the new firebase-functions 1.x SDK, you initialize the Admin SDK like this:
const admin = require('firebase-admin');
admin.initializeApp();
Be sure you're using the latest version of that module.
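For reference, a minimal sketch of a 1.x-style trigger that uses the Admin SDK initialized this way (the database path and function name are placeholders, not from your app):
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// example onCreate trigger: with admin.initializeApp() taking no arguments,
// the Admin SDK picks up the default credentials of the Cloud Functions environment
exports.onUserCreated = functions.database
  .ref('/users/{userId}')
  .onCreate((snapshot, context) => {
    return admin.database().ref('/stats/userCount')
      .transaction(count => (count || 0) + 1);
  });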
I am currently running a web app with an embedded Neo4j. Now I want to change to a standalone Neo4j server using Bolt. Neo4j has been loaded onto a standalone server and port 7474 works as expected.
Using the following code works as expected:
var authority = neo4j.v1.auth.basic("neo4j", "XXXXXXXX");
_driver = neo4j.v1.driver("bolt://localhost ", authority, {encrypted:false});
However
var authority = neo4j.v1.auth.basic("neo4j", "XXXXXXXX");
_driver = neo4j.v1.driver("bolt://somesite.com/ ", authority, {encrypted:false});
Fails with:
neo4j-web.js:27568 WebSocket connection to 'ws://somesite.com:7687/' failed: Error during WebSocket handshake: net::ERR_CONNECTION_RESET
Port 7687 has been enabled. The Neo4j version is 3.0.4 and the server operating system is CentOS 7.
What am I missing?
Thanks for the help
You need to enable remote connections by adding the following line to conf/neo4j.conf:
dbms.connector.bolt.address=0.0.0.0:7687
Stefan's answer works for Neo4j 3.0 (see this KB article).
For those having an issue like Maulik's, you are probably using a more recent version of Neo4j (3.5, 4.x), in which case you need to use the following instead:
dbms.connector.bolt.advertised_address=localhost:7687
dbms.connector.bolt.listen_address=0.0.0.0:7687
I am working on a socket.io + Node project.
Just like on this page: http://davidwalsh.name/websocket
I am getting an "info - unhandled socket.io url" error with socket.io v7, but I don't get this error with v6.17. Do you have any idea what causes this error?
Thanks
I had the exact issue a couple of days back, and it looks like socket.io had some changes in the API.
I have a working demo of socket.io sending and receiving a message, uploaded to https://github.com/parj/node-websocket-demo as a reference.
Essentially two changes:
On the server side - changed socket.on to socket.sockets.on:
var socket = io.listen(server);
socket.sockets.on('connection', function(client)
On the client side - the URL and port are no longer required as they are autodetected:
var socket = io.connect();
NOTE: you can also call io.connect("http://<ip>:<port>") on the client side; however, it is not required anymore as it is autodetected.
Here are the exact changes - https://github.com/parj/node-websocket-demo/commit/5ba52db9d1a5b7e8a3af5839adcd12768741dc97
This has been tested using Express 2.5.2 and Socket.io 0.8.7
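For context, here is a minimal send/receive pair along the lines of that demo, written against the old 0.8.x API (the port and message contents are illustrative):
// server.js
var app = require('http').createServer();
var io = require('socket.io').listen(app);
app.listen(3000);

io.sockets.on('connection', function (client) {
  client.send('hello from the server');      // send a plain message
  client.on('message', function (data) {     // receive plain messages
    console.log('client says: ' + data);
  });
});

// client (in the browser, after loading /socket.io/socket.io.js)
var socket = io.connect();                   // URL and port autodetected
socket.on('connect', function () {
  socket.send('hello from the browser');
});
socket.on('message', function (data) {
  console.log(data);
});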