My unit test launch looks like this. As you can see, I have exploited the CLI options to install a VSIX that my CI/CD pipeline has already produced, and also tried to install ms-vscode-remote.remote-ssh, because I want to re-run the tests on a remote workspace.
import * as path from 'path';
import * as fs from 'fs';
import { runTests } from '@vscode/test-electron';

async function main() {
  try {
    // The folder containing the Extension Manifest package.json
    // Passed to `--extensionDevelopmentPath`
    const extensionDevelopmentPath = path.resolve(__dirname, '../../');

    // The path to the extension test runner script
    // Passed to --extensionTestsPath
    const extensionTestsPath = path.resolve(__dirname, './suite/index');

    // Pick the most recent VSIX produced by CI/CD (descending name sort)
    const vsixName = fs.readdirSync(extensionDevelopmentPath)
      .filter(p => path.extname(p) === ".vsix")
      .sort((a, b) => a < b ? 1 : a > b ? -1 : 0)[0];

    const launchArgsLocal = [
      path.resolve(__dirname, '../../src/test/test-docs'),
      "--install-extension",
      vsixName,
      "--install-extension",
      "ms-vscode-remote.remote-ssh"
    ];

    const SSH_HOST = process.argv[2];
    const SSH_WORKSPACE = process.argv[3];
    const launchArgsRemote = [
      "--folder-uri",
      `vscode-remote://ssh-remote+testuser@${SSH_HOST}${SSH_WORKSPACE}`
    ];

    // Download VS Code, unzip it and run the integration tests
    await runTests({ extensionDevelopmentPath, extensionTestsPath, launchArgs: launchArgsLocal });
    await runTests({ extensionDevelopmentPath, extensionTestsPath, launchArgs: launchArgsRemote });
  } catch (err) {
    console.error(err);
    console.error('Failed to run tests');
    process.exit(1);
  }
}

main();
runTests downloads and installs VS Code, and passes through the parameters I supply. For the local file system all the tests pass, so the extension from the VSIX is definitely installed.
But ms-vscode-remote.remote-ssh doesn't seem to be installed - I get this error:
Cannot get canonical URI because no extension is installed to resolve ssh-remote
and then the tests fail because there's no open workspace.
This may be related to the fact that CLI installation of multiple extensions repeats the --install-extension switch for each extension. I suspect the switch name is used as a hash key somewhere, so that the second occurrence overwrites the first.
What to do? Well, I'm not committed to any particular course of action, just to platform independence. If I knew how to do a platform-independent headless CLI installation of VS Code:latest in a GitHub Action, that would certainly do the trick: I could then use the CLI directly to install the extensions before the tests, and pass the installation path. That would also require a unified way to get the path to VS Code.
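For illustration, the direction I have in mind is something like the sketch below, using the download/CLI helpers that newer versions of @vscode/test-electron expose (resolveCliArgsFromVSCodeExecutablePath is a 2.x API; treat this as an untested sketch):

import * as cp from 'child_process';
import {
  downloadAndUnzipVSCode,
  resolveCliArgsFromVSCodeExecutablePath,
  runTests
} from '@vscode/test-electron';

// inside main(): download VS Code once, install the extensions via its CLI,
// then point runTests at the same download
const vscodeExecutablePath = await downloadAndUnzipVSCode('stable');
const [cliPath, ...cliArgs] = resolveCliArgsFromVSCodeExecutablePath(vscodeExecutablePath);

cp.spawnSync(cliPath, [
  ...cliArgs,
  '--install-extension', vsixName,
  '--install-extension', 'ms-vscode-remote.remote-ssh'
], { encoding: 'utf-8', stdio: 'inherit' });

await runTests({ vscodeExecutablePath, extensionDevelopmentPath, extensionTestsPath, launchArgs: launchArgsLocal });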
Update 2022-07-20
Having figured out how to do a platform-independent headless CLI installation of VS Code:latest in a GitHub Action, followed by installation of the required extensions, I face new problems.
The test framework options include a path to an existing installation of VS Code. According to the interface documentation, supplying this should cause the tests to use the existing installation instead of downloading VS Code; this is why I thought the above installation would solve my problems.
However, the option seems to be ignored.
My latest iteration uses an extension dependency on remote-ssh to install it. There's a new problem: how to get the correct version of my extension onto the remote host. By default the remote host uses the Marketplace version, which obviously won't be the version we're trying to test.
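For reference, the dependency is declared in my extension's package.json manifest like this:

"extensionDependencies": [
  "ms-vscode-remote.remote-ssh"
]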
I would first try with only one --install-extension option, just to check whether any extension gets installed (a quick way to verify is sketched below).
I would also check whether the same set of commands works locally (install VS Code and its Remote - SSH extension).
Testing it locally (with only one extension) also lets you check whether that extension has any dependencies (like Remote SSH - Editing).
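As a quick sanity check, you can run the same installs by hand against a normal VS Code install and list what actually got installed (this assumes the code CLI is on your PATH; my-extension.vsix is a placeholder for your VSIX):

code --install-extension my-extension.vsix
code --install-extension ms-vscode-remote.remote-ssh
code --list-extensions --show-versions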
Related
Problem Summary
A Storybook snapshot test on a static Storybook build returns blank screenshots, even though they look fine on localhost:8080 when I run npx http-server storybook-static.
Tech stack and relevant code
Vue 3
Vite
Storybook
Jest
Storyshots
Puppeteer
I have components and their respective stories. npm run storybook works perfectly fine. My storybook.spec.js is as follows:
import { imageSnapshot } from "@storybook/addon-storyshots-puppeteer"
import initStoryshots from "@storybook/addon-storyshots"

initStoryshots({
  suite: "Image storyshots",
  test: imageSnapshot({
    storybookUrl: 'file://absolute/path/to/my/storybook-static'
  })
})
I ran the following. FYI, I did not modify any file in storybook-static after running npm run build-storybook.
npm run build-storybook
npm run test
npm run test runs jest --config=jest.config.js test
Problem
Unfortunately, the screenshots I get are all blank and fail the snapshot test.
I suspect it might be due to a CORS error, just like other Storybook users hit when they open <project-root>/storybook-static/index.html after running npm run build-storybook. I'd appreciate a solution for that as well, because I want to run the tests remotely on a headless server.
Note
I used an absolute path because a relative path caused a resource-not-found error during the testing process.
The problem is that you're running the tests from file:// instead of http://. The page URI is file://, so after the URL logic is applied, an image URL like path.resolve(window.location, '/your-image.png') ends up as file:///your-image.png.
If this is the case, you could switch to http://. You can start an Express server that serves the storybook-static folder from Jest's globalSetup and then shut it down in globalTeardown. Then you will need to change your storybookUrl to http://localhost:<some-port>.
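A minimal sketch of that approach, assuming Express is installed and an arbitrary port (wire the two files up via the globalSetup and globalTeardown keys in jest.config.js):

// global-setup.js -- serve the static build over http before the test run
const express = require('express');

module.exports = async () => {
  const app = express();
  app.use(express.static('storybook-static'));
  global.__STORYBOOK_SERVER__ = app.listen(6006);
};

// global-teardown.js -- shut the server down after the test run
module.exports = async () => {
  global.__STORYBOOK_SERVER__.close();
};

With that in place, storybookUrl becomes 'http://localhost:6006'.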
None of the images were loading within my pipeline, but they worked fine locally. It ended up being because the components were fetching images using a root-relative path, <img src="/my-image" />, which apparently is not allowed using the file protocol.
I ended up doing two things:
Updating the static dirs to serve from the root, by updating Storybook's main.js file:
module.exports = {
  staticDirs: [{ from: '../static', to: '/' }],
}
Adding a script to Storybook's preview-head.html file that removes the leading slash from image sources:
<script>
  document.addEventListener('DOMContentLoaded', () => {
    Array.from(document.querySelectorAll('img')).forEach((img) => {
      const original = img.getAttribute('src');
      // strip only a leading slash, leaving other slashes intact
      img.setAttribute('src', original.replace(/^\//, ''));
    });
  });
</script>
Another (arguably better) approach would be to run the tests through a server where you can access the images.
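If you already have http-server around, as in the question, that can be as simple as running npx http-server storybook-static and pointing storybookUrl at http://localhost:8080.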
This question already has answers here:
Error message "error:0308010C:digital envelope routines::unsupported"
I'm having an issue with a Webpack build process that suddenly broke, resulting in the following error...
<s> [webpack.Progress] 10% building 0/1 entries 0/0 dependencies 0/0 modules
node:internal/crypto/hash:67
this[kHandle] = new _Hash(algorithm, xofLen);
^
Error: error:0308010C:digital envelope routines::unsupported
at new Hash (node:internal/crypto/hash:67:19)
at Object.createHash (node:crypto:130:10)
at BulkUpdateDecorator.hashFactory (/app/node_modules/webpack/lib/util/createHash.js:155:18)
at BulkUpdateDecorator.update (/app/node_modules/webpack/lib/util/createHash.js:46:50)
at OriginalSource.updateHash (/app/node_modules/webpack-sources/lib/OriginalSource.js:131:8)
at NormalModule._initBuildHash (/app/node_modules/webpack/lib/NormalModule.js:888:17)
at handleParseResult (/app/node_modules/webpack/lib/NormalModule.js:954:10)
at /app/node_modules/webpack/lib/NormalModule.js:1048:4
at processResult (/app/node_modules/webpack/lib/NormalModule.js:763:11)
at /app/node_modules/webpack/lib/NormalModule.js:827:5 {
opensslErrorStack: [ 'error:03000086:digital envelope routines::initialization error' ],
library: 'digital envelope routines',
reason: 'unsupported',
code: 'ERR_OSSL_EVP_UNSUPPORTED'
}
command terminated with exit code 1
I've tried googling ERR_OSSL_EVP_UNSUPPORTED webpack, which yielded almost no useful results, but it did highlight an issue with using MD4, as provided by OpenSSL (which is apparently deprecated?), to generate hashes.
The webpack.config.js code is as follows:
const path = require('path');
const webpack = require('webpack');
/*
 * SplitChunksPlugin is enabled by default and replaces the
 * deprecated CommonsChunkPlugin. It automatically identifies modules which
 * should be split into chunks by heuristics using module duplication count and
 * module category (i.e. node_modules). And splits the chunks…
 *
 * It is safe to remove "splitChunks" from the generated configuration
 * and was added as an educational example.
 *
 * https://webpack.js.org/plugins/split-chunks-plugin/
 *
 */

/*
 * We've enabled TerserPlugin for you! This minifies your app
 * in order to load faster and run less JavaScript.
 *
 * https://github.com/webpack-contrib/terser-webpack-plugin
 *
 */
const TerserPlugin = require('terser-webpack-plugin');
module.exports = {
  mode: 'development',
  entry: './src/js/scripts.js',
  output: {
    path: path.resolve(__dirname, 'js'),
    filename: 'scripts.js'
  },
  devtool: 'source-map',
  plugins: [new webpack.ProgressPlugin()],
  module: {
    rules: []
  },
  optimization: {
    minimizer: [new TerserPlugin()],
    splitChunks: {
      cacheGroups: {
        vendors: {
          priority: -10,
          test: /[\\/]node_modules[\\/]/
        }
      },
      chunks: 'async',
      minChunks: 1,
      minSize: 30000,
      name: 'true'
    }
  }
};
How do I change the hashing algorithm used by Webpack to something else?
I was able to fix it via:
export NODE_OPTIONS=--openssl-legacy-provider
sachaw's comment to Node.js v17.0.0 - Error starting project in development mode #30078
But they say they fixed it: ijjk's comment to Node.js v17.0.0 - Error starting project in development mode #30078:
Hi, this has been updated in v11.1.3-canary.89 of Next.js, please update and give it a try!
For me, it worked only with the export shown above.
I also want to point out that npm run start works with --openssl-legacy-provider, but npm run dev won't.
It seems that there is a patch:
Node.js 17: digital envelope routines::unsupported #14532
I personally downgraded to 16-alpine.
I had this problem too. I'd accidentally been running on the latest Node.js (17.0 at time of writing), not the LTS version (14.18) which I'd meant to install. Downgrading my Node.js install to the LTS version fixed the problem for me.
There is a hashing algorithm that comes with Webpack v5.54.0+ that does not rely on OpenSSL.
To use this hash function, which relies on an npm-provided dependency instead of an operating-system-provided one, modify the output key in webpack.config.cjs to include the hashFunction: "xxhash64" option.
module.exports = {
  output: {
    hashFunction: "xxhash64"
  }
};
Ryan Brownell's answer is the ideal solution if you are using Webpack v5.54.0+.
If you're using an older version of Webpack, you can still solve this by changing the hash function to one that is not deprecated. (It defaults to the ancient MD4, which OpenSSL 3 no longer enables by default; that is the root cause of the error.) The supported algorithms are any supported by crypto.createHash. For example, to use SHA-256:
module.exports = {
  output: {
    hashFunction: "sha256"
  }
};
Finally, if you are unable to change the Webpack configuration (e.g., if it's a transitive dependency which is running Webpack), you can enable OpenSSL's legacy provider to temporarily enable MD4 during the Webpack build. This is a last resort. Create a file openssl.cnf with this content…
openssl_conf = openssl_init
[openssl_init]
providers = provider_sect
[provider_sect]
default = default_sect
legacy = legacy_sect
[default_sect]
activate = 1
[legacy_sect]
activate = 1
…and then set the environment variable OPENSSL_CONF to the path to that file when running Webpack.
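For example, on Linux or macOS something like OPENSSL_CONF=./openssl.cnf npx webpack applies it for a single build; adapt the path and invocation to your setup.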
It is not really my answer, but I found this workaround (hack) that fixed my problem with a code check-in for a GitHub project; see the bug comments here.
I ran into ERR_OSSL_EVP_UNSUPPORTED after updating with npm install.
I added the following to node_modules\react-scripts\config\webpack.config.js
const crypto = require("crypto");
const crypto_orig_createHash = crypto.createHash;
// reroute webpack's md4 hash requests to sha256
crypto.createHash = algorithm => crypto_orig_createHash(algorithm == "md4" ? "sha256" : algorithm);
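(Keep in mind that edits under node_modules are wiped out whenever dependencies are reinstalled, so this patch has to be re-applied after each npm install.)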
I tried Ryan Brownell's solution and ended up with a different error, but this worked...
This error is mentioned in the release notes for Node.js 17.0.0, with a suggested workaround:
If you hit an ERR_OSSL_EVP_UNSUPPORTED error in your application with Node.js 17, it’s likely that your application or a module you’re using is attempting to use an algorithm or key size which is no longer allowed by default with OpenSSL 3.0. A command-line option, --openssl-legacy-provider, has been added to revert to the legacy provider as a temporary workaround for these tightened restrictions.
I ran into this issue using Laravel Mix (Webpack) and was able to fix it in package.json by adding NODE_OPTIONS=--openssl-legacy-provider (referenced in Jan's answer) to the beginning of the script:
package.json:
{
  "private": true,
  "scripts": {
    "production": "cross-env NODE_ENV=production NODE_OPTIONS=--openssl-legacy-provider node_modules/webpack/bin/webpack.js --progress --hide-modules --config=node_modules/laravel-mix/setup/webpack.config.js"
  },
  "dependencies": {
    ...
  }
}
Try upgrading your Webpack version to 5.62.2.
I faced the same challenge, but you just need to downgrade Node.js to version 16.13 and everything works well. Download the LTS version, not Current, from the Downloads page.
I had the same problem with my Vue.js project and I solved it.
macOS and Linux
You need NVM (Node Version Manager) installed. If you have never installed it before, just run this command in your terminal:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
Open your project
Open the terminal in your project
Run the command nvm install 16.13.0 or any older version
After the installation is completed, run nvm use 16.13.0
I faced the same problem in a project I developed with Next.js. I solved it by running the project as follows:
cross-env NODE_OPTIONS='--openssl-legacy-provider' next dev
This means that you have the latest Node.js version. If you are using it with Docker, then you need to change the image from
FROM node
to
FROM node:14
I am developing a web crawler that can render JavaScript websites, so I decided to use PuppeteerSharp, a .NET port of the popular Node.js headless Chrome API Puppeteer. I am running Service Fabric's local development cluster on a Windows 10 development machine and have one stateless service in my solution.
I've created a Data folder under the service project's PackageRoot folder and put the .local-chromium folder contents there (containing the chrome.exe executable), so it deploys as an independent data package of the service.
I've also placed this XML config line in the ServiceManifest.xml file:
<DataPackage Name="Data" Version="1.0.0" />
So far it looks good, and the headless browser content is copied to the SF cluster's data package directory properly.
Then in my stateless service code I try to launch the Puppeteer Chromium executable as follows:
var browser = await Puppeteer.LaunchAsync(new LaunchOptions
{
    Headless = true,
    ExecutablePath = _chromiumPath // @$"{context.CodePackageActivationContext.GetDataPackageObject("Data").Path}\.local-chromium\Win64-706915\chrome-win\chrome.exe"
});

using (var page = (await browser.NewPageAsync()))
{
    Response renderResponse;
    try
    {
        renderResponse = await page.GoToAsync(webPage.AbsoluteUri, timeout);
        if (renderResponse.Status != System.Net.HttpStatusCode.OK)
        {
            return new RenderResult(RenderStatus.OtherFailure);
        }
        // other code
    }
    catch (TimeoutException)
    {
        return new RenderResult(RenderStatus.Timeouted);
    }
}
On the line using (var page = (await browser.NewPageAsync())) my code (thread) simply hangs without returning; in the Debug console I see many thread exits, but no exception occurs. I was previously getting System.IO.FileNotFoundException while fixing some other errors regarding copying the Chromium folder contents correctly, but those errors are gone now, so it seems the code finds the .exe but somehow cannot start PuppeteerSharp's headless mode.
Does that mean that I cannot simply run an external Chromium .exe with Service Fabric's native application model? Should I use Docker and Linux containers instead?
I am using the gulp-aemsync plugin to sync my CSS and JS changes to a clientlib on an AEM instance. I have a gulp task watching the JS and CSS that runs gulp-aemsync fine (changes are on the site when I refresh), but being a bit lazy as I am, it would be nice to get live reload working so that I never have to manually refresh the page while working.
I have tried to follow both of these online guides:
https://adobe-consulting-services.github.io/acs-aem-tools/features/live-reload/index.html
https://www.cognifide.com/our-blogs/cq/up-and-running-with-livereload-in-adobe-aem6
Followed the steps of:
installing Netty package on AEM instance
installing ACS AEM tools package on the AEM instance
installing the RemoteLiveReload chrome extension (the AEM instance is hosted on AWS)
That didn't work, so I got one of our DevOps engineers to open port 35729 (the default for LiveReload) on the AEM instance. That still doesn't work, and when I click the Chrome browser extension to sync it, I get the following message:
Could not connect to LiveReload server. Please make sure that LiveReload 2.3 (or later) or another compatible server is running.
Can anyone help me figure this out, as I'd really like to get it working to streamline my workflow.
Thanks
DISCLAIMER: This answer is based on a setup I had working at some point, and by no means is a complete/working answer. But it should give you an alternative to the other tools that exist and get you half way there.
I have not used the tools you are mentioning, but since you are using gulp and aemsync, you could do the following:
In your gulp setup, create a websocket server and basically make that server publish a message every time aemsync is triggered to push content to AEM.
// start a websocket server
const WebSocket = require('ws'); // requires "npm install ws"
const wss = new WebSocket.Server({ port: 8081 });
const connections = [];

wss.on('connection', function connection(ws) {
  connections.push(ws); // keep track of all clients
  // relay any message that reaches this server to all connected clients
  ws.on('message', (d) => connections.forEach(connection => connection.send(d)));
});

// create a new websocket to send messages to the websocket server above
const ws = new WebSocket('ws://localhost:8081');

// send a reload message to the server every second
// NOTE: CHANGE this to run when aemsync is triggered in your build
setInterval( () => ws.send('reload'), 1000 );
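For example, instead of the one-second interval you could send the message from the watch task itself after each sync completes (a sketch; 'sync-to-aem' is a hypothetical task name standing in for whatever runs aemsync in your build):

// gulp 4 style: notify connected clients after every successful sync
const gulp = require('gulp');

gulp.task('watch', () => {
  gulp.watch('src/**/*.{js,css}', gulp.series('sync-to-aem', (done) => {
    ws.send('reload'); // the client socket created above
    done();
  }));
});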
Then in your JS code on AEM, or really in a <script> tag that you make sure will NOT ship beyond your local (or dev) environment, you can set up a websocket listener to refresh the page:
const socket = new WebSocket('ws://localhost:8081');

socket.onopen = () => {};  // add a function for when the ws is opened
socket.onclose = () => {}; // add a function for when the ws is closed
socket.onerror = () => {}; // add a function for when the ws errors

// listen to messages and reload!
socket.addEventListener('message', function (event) {
  location.reload();
});
Alternatively, you could use the chrome plugin I've developed:
https://github.com/ahmed-musallam/websocket-refresh-chrome-ext
It's not perfect by any means, but for a basic setup it should work great, and you don't need to touch your AEM JS.
I tried to set up Elasticsearch on my Windows 7 PC. I installed Elasticsearch and curl, and it's working, as localhost:9200 responds fine.
Now I am struggling to search in a file located at c:\user\rajesh\raj.txt.
My doubt is: where do I mention that I have to search in this file? elasticsearch.yml? Which parameter do I need to set to point to this text file?
Indexing is working with curl, but mapping gives a NullPointerException. Do I need to install something else?
I tried to install the Sense plugin for Chrome, but it says it has moved to Marvel, and from there I am unable to install Marvel!
From what I can tell, you've installed Elasticsearch and are now expecting to be able to search within files on your local file system. This isn't how ES works: you need to create a mapping for an index and then populate that index with the content you want to search in. If you're looking to index files on your local file system, rather than data you have pulled from a database, you should look into the File System River plugin for Elasticsearch, http://www.pilato.fr/fsriver/. This deals with all of the indexing of file-system-based documents automatically, once you've got it set up correctly; a rough example follows.
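For illustration only, registering an FS river looked roughly like this (field names from memory of the plugin's README of that era, so double-check against the link above; note also that rivers were deprecated in later Elasticsearch releases):

curl -XPUT "localhost:9200/_river/mydocs/_meta" -d "{\"type\":\"fs\",\"fs\":{\"url\":\"c:/user/rajesh\",\"update_rate\":3600000}}"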
EDIT:
I also see you're trying to set up Kibana and Marvel/Sense. To set up Kibana just follow the instructions here: http://www.elasticsearch.org/overview/kibana/installation/
To set up Marvel, open PowerShell, cd to C:\elasticsearch\bin, then run plugin.bat -i elasticsearch/marvel/latest, and then restart your cluster. Once you've done that, if you go to http://localhost:9200/_plugin/marvel/ you'll see your Marvel dashboard. You'll also see a tab for "Sense", which is the other plugin you referred to.
If you are using Elasticsearch for retrieving data from a DB like PostgreSQL, then edit bin/rivers.bat to run:
curl -XPUT localhost:9200/_river/actor_jdbc_river/_meta -d "{\"type\":\"jdbc\",\"jdbc\":{\"strategy\":\"simple\",\"poll\":\"1h\",\"driver\":\"org.postgresql.Driver\",\"url\":\"jdbc:postgresql://10.5.2.132:5432/prodDB\",\"user\":\"UserName\",\"password\":\"Password\",\"sql\":\"select t.id as _id,t.name from topic as t \",\"digesting\" : true},\"index\":{\"index\":\"jdbc\",\"type\":\"actor_jdbc_river1\"}}"
Then create a client on the Java side to access the data in the river.
Here the cluster name is the same as the one mentioned in config/elasticsearch.yml (testDBsearch):
private static Client createClient() {
    // create the client
    Settings settings = ImmutableSettings.settingsBuilder().put("cluster.name", "testDBsearch").build();
    TransportClient transportClient = new TransportClient(settings);
    transportClient = transportClient.addTransportAddress(new InetSocketTransportAddress("10.5.2.132", 9300));
    return (Client) transportClient;
}

public static void main(String[] args) {
    Client client = createClient();
    String queryString = "python";
    search(client, 100, queryString);
}

public static void search(Client client, int size, String queryString) {
    queryString = queryString + "*";
    try {
        SearchResponse responseActor;
        responseActor = client.prepareSearch("jdbc").setTypes("actor_jdbc_river1").setSearchType(SearchType.DEFAULT)
                .setQuery(QueryBuilders.queryString(queryString)
                        .field("designation", new Float(2.0)).field("name", new Float(5.0)).field("email")
                        .defaultOperator(Operator.OR))
                .setFrom(0).setSize(size).setExplain(true)
                .execute().actionGet();
        for (SearchHit hit : responseActor.getHits()) {
            System.out.println(hit.getSourceAsString());
            System.out.println(hit.getScore());
            System.out.println("---------------------------");
        }
    } catch (Exception e) {
        System.out.println("Error in elastic search " + queryString + " Error :" + e);
    }
}
Clean installation of Elasticsearch on Windows:
1) Check whether your system has the latest Java version.
2) Download and extract Elasticsearch from "download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/zip/elasticsearch/2.3.3/elasticsearch-2.3.3.zip".
3) Set the JAVA_HOME environment variable, e.g. "C:\Program Files (x86)\Java\jdk1.8.0_91".
4) Check the JAVA_HOME environment variable using the "service" command in the bin directory of Elasticsearch, to verify whether it is set properly.
5) Install the Windows service using the command service.bat install.
6) Uncomment network.host in the Elasticsearch config file and set it to localhost:
network.host: localhost in elasticsearch.yml (config file)
7) Run Elasticsearch: "C:\elasticsearch-2.3.3\bin\elasticsearch".
If you get an error while running Elasticsearch saying to update the JVM to the latest version, the Java on your system is not the latest version (install and run the latest Java version).
8) Install the elasticsearch-head plugin to visualize things in Elasticsearch:
run the command "plugin install elasticsearch-head"
If that fails to install elasticsearch-head, then use the command:
plugin install "github.com/mobz/elasticsearch-head/archive/master.zip"
9) Open "localhost:9200/_plugin/head/" in a browser to see the elasticsearch-head visual interface.