How to get AEM 6.3 to LiveReload clientlibs with gulp

I am using the gulp-aemsync plugin to sync my CSS and JS changes to a clientlib on an AEM instance. I have a gulp task watching the JS and CSS that runs gulp-aemsync fine (changes are on the site when I refresh), but being a bit lazy, I would like to get live reload working so that I never have to manually refresh the page while working.
I have tried to follow both of these online guides:
https://adobe-consulting-services.github.io/acs-aem-tools/features/live-reload/index.html
https://www.cognifide.com/our-blogs/cq/up-and-running-with-livereload-in-adobe-aem6
I followed these steps:
installing the Netty package on the AEM instance
installing the ACS AEM Tools package on the AEM instance
installing the RemoteLiveReload Chrome extension (the AEM instance is hosted on AWS)
That didn't work, so I got one of our DevOps engineers to open port 35729 (the default for LiveReload) on the AEM instance. That still doesn't work, and when I click the Chrome browser extension to sync it, I get the following message:
Could not connect to LiveReload server. Please make sure that LiveReload 2.3 (or later) or another compatible server is running.
Can anyone help me figure this out? I'd really like to get this working to streamline my workflow.
Thanks

DISCLAIMER: This answer is based on a setup I had working at some point, and is by no means a complete/working answer. But it should give you an alternative to the other tools that exist and get you halfway there.
I have not used the tools you mention, but since you are using gulp and aemsync, you could do the following:
In your gulp setup, create a WebSocket server and have that server broadcast a message every time aemsync is triggered to push content to AEM.
// start a websocket server
const WebSocket = require('ws'); // requires "npm install ws"
const wss = new WebSocket.Server({ port: 8081 });
const connections = [];
wss.on('connection', function connection(ws) {
  connections.push(ws); // keep track of all clients
  // forward any message that reaches this server to all connected clients
  ws.on('message', (d) => connections.forEach(connection => connection.send(d)));
});

// create a websocket client to send messages to the server above
const ws = new WebSocket('ws://localhost:8081');
ws.on('open', () => {
  // send a 'reload' message to the server every second
  // NOTE: CHANGE this to run when aemsync is triggered in your build
  setInterval(() => ws.send('reload'), 1000);
});
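For example, that trigger could be wired into a gulp watch task. A rough sketch, assuming gulp 4 and that the websocket server above is already running; the task name syncToAem and the file globs are hypothetical placeholders for whatever your build actually uses:

// gulpfile.js (sketch)
const gulp = require('gulp');
const WebSocket = require('ws');

const ws = new WebSocket('ws://localhost:8081');

gulp.task('watch', () => {
  // after each sync to AEM completes, tell the server to broadcast a reload
  gulp.watch(['src/**/*.js', 'src/**/*.css'], gulp.series('syncToAem', (done) => {
    ws.send('reload'); // every connected browser tab will reload
    done();
  }));
});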
Then in your JS code (on AEM), or really in a <script> tag that you make sure will NOT ship beyond your local (or dev) environment, you can set up a websocket listener to refresh the page:
const socket = new WebSocket('ws://localhost:8081');
socket.onopen = () => console.log('reload socket open');         // add your own open handler
socket.onclose = () => console.log('reload socket closed');      // add your own close handler
socket.onerror = (e) => console.error('reload socket error', e); // add your own error handler
// listen to messages and reload!
socket.addEventListener('message', function (event) {
  location.reload();
});
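Since that script should never ship beyond your local environment, you could also guard it with a hostname check. A minimal sketch, assuming your local AEM author runs on localhost:

// only open the reload socket when the page is served from a local instance
if (['localhost', '127.0.0.1'].includes(location.hostname)) {
  const socket = new WebSocket('ws://localhost:8081');
  socket.addEventListener('message', () => location.reload());
}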
Alternatively, you could use the Chrome extension I've developed:
https://github.com/ahmed-musallam/websocket-refresh-chrome-ext
It's not perfect by any means. However, for a basic setup, it should work great! And you don't need to touch your AEM JS.

Related

Install multiple VS Code extensions in CI/CD

My unit test launch looks like this. As you can see, I have exploited CLI options to install a VSIX my CI/CD has already produced, and then also tried to install ms-vscode-remote.remote-ssh because I want to re-run the tests on a remote workspace.
import * as path from 'path';
import * as fs from 'fs';
import { runTests } from '@vscode/test-electron';

async function main() {
  try {
    // The folder containing the Extension Manifest package.json
    // Passed to `--extensionDevelopmentPath`
    const extensionDevelopmentPath = path.resolve(__dirname, '../../');

    // The path to the extension test runner script
    // Passed to --extensionTestsPath
    const extensionTestsPath = path.resolve(__dirname, './suite/index');

    const vsixName = fs.readdirSync(extensionDevelopmentPath)
      .filter(p => path.extname(p) === ".vsix")
      .sort((a, b) => a < b ? 1 : a > b ? -1 : 0)[0];

    const launchArgsLocal = [
      path.resolve(__dirname, '../../src/test/test-docs'),
      "--install-extension",
      vsixName,
      "--install-extension",
      "ms-vscode-remote.remote-ssh"
    ];

    const SSH_HOST = process.argv[2];
    const SSH_WORKSPACE = process.argv[3];

    const launchArgsRemote = [
      "--folder-uri",
      `vscode-remote://ssh-remote+testuser@${SSH_HOST}${SSH_WORKSPACE}`
    ];

    // Download VS Code, unzip it and run the integration test
    await runTests({ extensionDevelopmentPath, extensionTestsPath, launchArgs: launchArgsLocal });
    await runTests({ extensionDevelopmentPath, extensionTestsPath, launchArgs: launchArgsRemote });
  } catch (err) {
    console.error(err);
    console.error('Failed to run tests');
    process.exit(1);
  }
}

main();
runTests downloads and installs VS Code, and passes through the parameters I supply. For the local file system all the tests pass, so the extension from the VSIX is definitely installed.
But ms-vscode-remote.remote-ssh doesn't seem to be installed - I get this error:
Cannot get canonical URI because no extension is installed to resolve ssh-remote
and then the tests fail because there's no open workspace.
This may be related to the fact that CLI installation of multiple extensions repeats the --install-extension switch. I suspect the switch name is used as a hash key.
What to do? Well, I'm not committed to any particular course of action, just to platform independence. If I knew how to do a platform-independent headless CLI installation of VS Code:latest in a GitHub Action, that would certainly do the trick. I could then use the CLI directly to install the extensions before the tests, and pass the installation path. That would also require a unified way to get the path to VS Code.
Update 2022-07-20
Having figured out how to do a platform-independent headless CLI installation of VS Code:latest in a GitHub Action, followed by installation of the required extensions, I face new problems.
The test framework options include a path to an existing installation of VS Code. According to the interface documentation, supplying this should cause the test to use the existing installation instead of installing VS Code; this is why I thought the above installation would solve my problems.
However, the option seems to be ignored.
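For reference, the option in question is the vscodeExecutablePath field on the options passed to runTests. A sketch of how I am supplying it; the path itself is a hypothetical placeholder:

// inside main(), after installing VS Code via the CLI
await runTests({
  vscodeExecutablePath: '/path/to/preinstalled/vscode/code', // hypothetical path
  extensionDevelopmentPath,
  extensionTestsPath,
  launchArgs: launchArgsLocal
});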
My latest iteration uses an extension dependency on remote-ssh to install it. There's a new problem: how to get the correct version of my extension onto the remote host. By default the remote host uses the marketplace version, which obviously won't be the version we're trying to test.
I would first try with only one --install-extension option, just to check whether any extension gets installed at all.
I would also check whether the same set of commands works locally (install VS Code and its Remote SSH extension).
Testing it locally (with only one extension) also lets you check whether that extension has any dependencies (like Remote SSH - Editing). See also the sketch below for an alternative installation route.
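If the launchArgs route keeps failing, another pattern worth trying (a sketch based on the approach shown in the @vscode/test-electron README; it reuses names from the question's script) is to download VS Code first, install each extension through its CLI in a separate invocation, and only then call runTests:

import * as cp from 'child_process';
import {
  downloadAndUnzipVSCode,
  resolveCliArgsFromVSCodeExecutablePath,
  runTests
} from '@vscode/test-electron';

// inside main():
const vscodeExecutablePath = await downloadAndUnzipVSCode('stable');
const [cliPath, ...cliArgs] = resolveCliArgsFromVSCodeExecutablePath(vscodeExecutablePath);

// one CLI call per extension, so the --install-extension switch is never repeated
// note: vsixName may need to be resolved to an absolute path
for (const ext of [vsixName, 'ms-vscode-remote.remote-ssh']) {
  cp.spawnSync(cliPath, [...cliArgs, '--install-extension', ext], {
    encoding: 'utf-8',
    stdio: 'inherit'
  });
}

// reuse the installation the extensions were just installed into
await runTests({
  vscodeExecutablePath,
  extensionDevelopmentPath,
  extensionTestsPath,
  launchArgs: launchArgsLocal
});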

PuppeteerSharp Does Not Work in a Service Fabric Stateless Service

I am developing a web crawler that can render JavaScript websites, so I decided to use PuppeteerSharp, a .NET port of the popular Node.js headless Chrome API, Puppeteer. I am running Service Fabric's local development cluster on a Windows 10 development machine and have one stateless service in my solution.
I've created a Data folder under the service project's PackageRoot folder and put the .local-chromium folder contents there (containing the chrome.exe executable) so it deploys as an independent data package of the service.
I've also placed this XML config line in the ServiceManifest.xml file:
<DataPackage Name="Data" Version="1.0.0" />
So far it looks good, and the headless browser content is copied to the SF cluster's data package directory properly.
Then, in my stateless service code, I try to launch the Puppeteer Chromium executable as follows:
var browser = await Puppeteer.LaunchAsync(new LaunchOptions
{
    Headless = true,
    ExecutablePath = _chromiumPath // @$"{context.CodePackageActivationContext.GetDataPackageObject("Data").Path}\.local-chromium\Win64-706915\chrome-win\chrome.exe"
});

using (var page = await browser.NewPageAsync())
{
    Response renderResponse;
    try
    {
        renderResponse = await page.GoToAsync(webPage.AbsoluteUri, timeout);
        if (renderResponse.Status != System.Net.HttpStatusCode.OK)
        {
            return new RenderResult(RenderStatus.OtherFailure);
        }
        // other code
    }
    catch (TimeoutException)
    {
        return new RenderResult(RenderStatus.Timeouted);
    }
}
On the line using (var page = await browser.NewPageAsync()) my code (thread) simply hangs without returning; in the Debug console I see many thread exits, but no exception occurs. I was previously getting System.IO.FileNotFoundException while fixing some other errors around copying the Chromium folder contents correctly, but those errors are gone now, so it seems the code finds the .exe but somehow cannot start PuppeteerSharp's headless mode.
Does that mean that I cannot simply run an external Chromium .exe with Service Fabric's native application model? Should I use Docker and Linux containers instead?

How can I tell why my node app is crashing on Heroku? (HTML, Stripe, Heroku, NodeJS)

So, I've developed a website (HTML) that has an embedded payment form from Stripe called Checkout. When you visit the website, it prompts you to enter your credit card information, so the checkout form is working correctly.
The issue I'm having is processing the token once it's created.
I'm extremely new to web development and I've never written server code before so please, bear with me.
I've been following guides (Process payments with Node, Vue, Stripe & How to set up Stripe payments with Node.js) and Stripe's documentation on tokenization to create charges using server-side code (Stripe Checkout).
I understand that I have to have Heroku set up to process the charges, so I created an account and set up an app from my terminal. I made a new directory with the required modules (stripe, express, and body-parser), and I have this code in my server.js file:
It deploys to Heroku successfully but crashes. This is what is being returned in the console:
What am I doing wrong? Any assistance would be a great help.
You are missing a vital piece:
// Start the server
app.listen(port, function () {
  console.log('Server listening on port ' + port);
});
You don't seem to start the server in your application. This should be at the bottom of server.js. You also have to remember to set the port:
var port = process.env.PORT || 3000;
It goes above app.listen, of course.
I can't tell for sure if that will fix all your errors, but you have to start with starting the server first.
Also, remember to check for errors in callbacks. In the callback for create you are not doing that. E.g.
if (err) {
  console.error(err);
  res.json({ error: err, charge: false });
} else {
  // send response with charge data
  res.json({ error: false, charge: charge });
}
You are calling res.send() whether or not there is an error. I doubt that this has anything to do with the Heroku error, though.
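For reference, a minimal server.js along those lines might look like this. This is a sketch, not your exact code: it assumes the classic stripe.charges.create token flow from the guides you mention, and the route path, amount, and request field names are hypothetical:

// server.js -- minimal sketch of a Stripe charge endpoint on Heroku
const express = require('express');
const bodyParser = require('body-parser');
const stripe = require('stripe')('sk_test_...'); // your secret key

const app = express();
app.use(bodyParser.json());

app.post('/charge', function (req, res) {
  stripe.charges.create({
    amount: 1999,          // amount in cents
    currency: 'usd',
    source: req.body.token // token created by Stripe Checkout
  }, function (err, charge) {
    if (err) {
      console.error(err);
      res.json({ error: err, charge: false });
    } else {
      res.json({ error: false, charge: charge });
    }
  });
});

// Heroku assigns the port via the PORT environment variable
var port = process.env.PORT || 3000;
app.listen(port, function () {
  console.log('Server listening on port ' + port);
});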

Websocket not connecting in Karma

I am trying to create some Karma tests, and some of the functions I'm testing are supposed to make WebSocket connections.
When running the code normally outside of Karma (using Chrome or Firefox), my console log messages show that ws.onopen() fires as expected.
Not so when running in Karma using PhantomJS or even Chrome. I'm running version 0.12 of Karma. Could this be related to how Karma uses socket.io instead of the browser's WebSocket?
I am not doing anything with regard to manually writing a handshake.
webSocketConnect = function() {
  ws = new WebSocket("ws://192.168.103.83:9000", "ws-xyz");
  ws.onopen = function(evt) {
    console.log("Connection is opened...");
  };
};
When webSocketConnect is invoked, i.e.:
webSocketConnect();
I get no error but I don't see "Connection is opened..." in the console log.
Thanks for any help.
Edit: I think that the code is not getting executed because XSS protections are coming into play. I took the code above and started playing around with it in JSFiddle, and saw that when I ran it I would get: "SecurityError: The operation is insecure." I think it's a shame that Karma is not reporting any error. In fact, Karma is completely silent, and I wasn't even noticing that none of the websocket code was being executed at all.
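One way to make that kind of silent failure visible under Karma is to catch the constructor's exception and wire up onerror yourself. A minimal sketch of the function above with logging added:

webSocketConnect = function() {
  try {
    // the WebSocket constructor itself can throw (e.g. SecurityError)
    ws = new WebSocket("ws://192.168.103.83:9000", "ws-xyz");
  } catch (e) {
    console.error("WebSocket constructor threw:", e);
    return;
  }
  ws.onerror = function(evt) {
    console.error("WebSocket error:", evt);
  };
  ws.onopen = function(evt) {
    console.log("Connection is opened...");
  };
};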

UrlFetchApp unable to access localhost resource

This works fine:
var resp = UrlFetchApp.fetch("someremotehost/SomeFile.csv");
resp.getResponseCode(); //returns 200
resp.getContentText(); //returns the data
However, on my local machine I'm running XAMPP, with SomeFile.csv located in htdocs/dev, but I cannot get it to work on localhost:
var resp = UrlFetchApp.fetch("localhost/dev/SomeFile.csv");
resp.getResponseCode(); //returns 0.0
resp.getContentText() //returns nothing!
I checked with the Postman Chrome extension, and http://localhost/dev/SomeFile.csv works fine, so why does UrlFetchApp.fetch("http://localhost/dev/SomeFile.csv") not work?
That won't work, because Apps Script executes code server-side (on Google's servers), so "localhost" there refers to Google's machine, not yours. The only way to do this is to make an HtmlService app and use AJAX from the frontend.
Use a proxy. I had the same issue, where my app was on an AWS EC2 instance.
For that, I used nginx as a proxy server, and it worked.
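For example, a minimal nginx server block along those lines (a sketch; it assumes the machine serving the file is reachable at a public hostname that Google's servers can resolve, and the hostname and internal port are hypothetical):

# /etc/nginx/conf.d/csv-proxy.conf (sketch)
server {
    listen 80;
    server_name dev.example.com;  # hypothetical public hostname

    location / {
        # forward to the internal service that actually serves the CSV
        proxy_pass http://127.0.0.1:8080;  # hypothetical internal port
    }
}

UrlFetchApp.fetch("http://dev.example.com/dev/SomeFile.csv") would then reach the file through the proxy.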