Can someone give me a step-by-step guide to deploying all the work at https://github.com/Autodesk-Forge/forge-rcdb.nodejs to Heroku or DigitalOcean? I'm fine with either, but I'd like a proper guide here for anyone else who tries to go through this.
Explanation:
Following the guide here (Building Autodesk Forge RCDB on Windows 10 fails with node-gyp errors), I created my own DB on localhost. I had no choice but to change the dynamic clientSecret and clientId in development.config.js to static values, using the ones from my own Forge API app, to get it working.
Issues:
https://devcenter.heroku.com/articles/nodejs-support#customizing-the-build-process
Log in: I get the following error if I click on "Log in" to my Forge account from the website (LINK).
I've moved all of my files to Heroku and hosted my database (though I haven't even gotten to the point of testing that far). When I try to build on Heroku I get the following error.
-----> Node.js app detected
-----> Build failed
Two different lockfiles found: package-lock.json and yarn.lock
Both npm and yarn have created lockfiles for this application,
but only one can be used to install dependencies. Installing
dependencies using the wrong package manager can result in missing
packages or subtle bugs in production.
- To use npm to install your application's dependencies please delete
the yarn.lock file.
$ git rm yarn.lock
- To use yarn to install your application's dependencies please delete
the package-lock.json file.
$ git rm package-lock.json
https://kb.heroku.com/why-is-my-node-js-build-failing-because-of-conflicting-lock-files
Push rejected, failed to compile Node.js app.
Push failed
Log in: I get the following error if I click on "Log in" to my Forge account from the website (LINK).
You need to configure the following environment variables on Heroku (the missing FORGE_CLIENT_ID environment variable was what caused the error):
"NODE_ENV": {
"description": "Environment, defaulted to production",
"value": "production"
},
"NPM_CONFIG_PRODUCTION": {
"description": "This forces Heroku to install devDependencies, needed to build the App. Must be false!",
"value": "false"
},
"FORGE_CLIENT_ID": {
"description": "Your Forge Client ID API Key"
},
"FORGE_CLIENT_SECRET": {
"description": "Your Forge Client Secret API Key"
},
"RCDB_DBHOST": {
"description": "Database host url"
},
"RCDB_PORT": {
"description": "Database port"
},
"RCDB_DBNAME": {
"description": "Database name"
},
"RCDB_USER": {
"description": "Database username"
},
"RCDB_PASS": {
"description": "Database user password"
}
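From the Heroku CLI these can be set with heroku config:set; a minimal sketch, where every value is a placeholder to replace with your own (27017, the MongoDB default, is only an assumption about your database):
$ heroku config:set NODE_ENV=production NPM_CONFIG_PRODUCTION=false
$ heroku config:set FORGE_CLIENT_ID=your-client-id FORGE_CLIENT_SECRET=your-client-secret
$ heroku config:set RCDB_DBHOST=your-db-host RCDB_PORT=27017 RCDB_DBNAME=your-db-name RCDB_USER=your-db-user RCDB_PASS=your-db-password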
This would have been easier with the Deploy to Heroku button in the project README, but unfortunately it's not set up correctly.
I've moved all of my files to Heroku and hosted my database (though I haven't even gotten to the point of testing that far). When I try to build on Heroku I get the following error.
As the error message suggests, only one package manager's lockfile should be present, so delete either yarn.lock or package-lock.json from the root directory of the project and deploy again.
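For example, to keep npm (a sketch; to keep yarn instead, git rm package-lock.json the same way):
$ git rm yarn.lock
$ git commit -m "Remove yarn.lock, use npm lockfile only"
$ git push heroku master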
Related
I am trying to configure Cassandra with Drill. I followed the approach described at this link: https://drill.apache.org/docs/starting-the-web-ui/.
I used the following code for New Storage Plugin:
{
"type": "cassandra",
"hosts": [
"127.0.0.1"
],
"port": 9042,
"username": "<username>",
"password": "<password>",
"enabled": false
}
But I'm getting the following error:
Please retry: Error (invalid JSON mapping)
How can I resolve this?
All the code:
Git: https://github.com/yssharma/drill/tree/cassandra-storage
Patch: https://gist.github.com/yssharma/2581ae8a97c559b2677f
1. Get Drill: let's get the Drill source
$ git clone https://github.com/apache/drill.git
2. Get the Cassandra Storage patch: download the patch file from:
https://reviews.apache.org/r/29816/diff/raw/
3. Apply the patch on top of Drill
$ cd drill
$ git apply --check ~/Downloads/DRILL-92-CassandraStorage.patch
$ git apply ~/Downloads/DRILL-92-CassandraStorage.patch
4. Build Drill with Cassandra Storage & export distribution to /opt/drill
$ mvn clean install -DskipTests
$ mkdir /opt/drill
$ tar xvzf distribution/target/*.tar.gz --strip=1 -C /opt/drill
5. Start Sqlline.
That's it, we have finished the Drill build and installation, and it's time to start using Drill.
$ cd /opt/drill
$ bin/sqlline -u jdbc:drill:zk=local -n admin -p admin
Run 'show schemas' to view the existing schemas.
6. Drill Web interface
You should be able to see the Drill web interface on localhost:8047, or whatever your host/port is.
Use this as your config:
{
"type": "cassandra",
"config": {
"cassandra.hosts": [
"127.0.0.1",
"127.0.0.2"
],
"cassandra.port": 9042
},
"enabled": true
}
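Once the plugin is enabled, you can query Cassandra tables from sqlline; a minimal sketch, where mykeyspace and mytable are hypothetical names standing in for your own keyspace and table (the plugin name registered above acts as the schema prefix):
$ bin/sqlline -u jdbc:drill:zk=local -n admin -p admin
0: jdbc:drill:zk=local> SELECT * FROM cassandra.mykeyspace.mytable LIMIT 10;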
Also, if this doesn't work, know that they are working on a plugin for it now: https://github.com/apache/drill/pull/1960
I'll give an update here as well. We're doing some serious refactoring of how Drill works with storage plugins. Specifically, we're working to incorporate the Calcite adapter for Cassandra. The reason for this is that the hard part of storage plugins isn't the connection, it's the optimizations. Calcite already does query planning for Drill and has already implemented a bunch of these adapters, which means that the work of figuring out all the optimizations (AKA pushdowns) is largely done.
In the case of Cassandra/Scylla, this is particularly important because some filters should be pushed down to Cassandra, and some absolutely should not be. The adapters also include aggregate pushdowns, something which no Drill plugin currently does. Again, the point is that once we commit this, the connector should work VERY well with Cassandra/Scylla. We have one for Elasticsearch that is very near completion, and once that's done the Cassandra plugin is next. If you have any suggestions, comments, or other feedback, please post on the pull request linked above.
** UPDATE 11 April 2021: Cassandra/Scylla Plugin Now Merged in Drill 1.19.0-SNAPSHOT **
I'm trying to learn Hyperledger Composer, but it seems to be a relatively new technology: there are few tutorials and few solutions to a lot of questions. The tutorial does not mention possible error cases when following the commands, which means there is also no solution for those errors.
I have joined the Composer channel in their community chat (it looks like it runs on Discord or something) and asked the same question without a response; I have had a better experience here on SO.
This is the problem: I have deployed my business network, installed it, started it, created my network admin card and imported it. Then, to test that everything is OK, I ran composer network ping --card NAME-OF-MY-ADMIN-CARD
And this error comes:
juan@JuanDeDios:~/proyectos/inovacion/a3-poliza-microservice$ composer network ping --card admin@a3-policy-microservice
Error: transaction returned with failure: AccessException: Participant 'org.hyperledger.composer.system.NetworkAdmin#admin' does not have 'READ' access to resource 'org.hyperledger.composer.system.Network#a3-policy-microservice#0.0.1'
Command failed
I think it has something to do with the permissions.acl file, so I gave everyone permission to everything so there would be no restrictions on anyone, and tried again, but it failed.
So I thought I had to uninstall my business network and create it again. I also deleted my .bna and my network card files so everything would be created again, but the same error resulted.
My other attempt was to update the business network, but that didn't work; the same error happened, and I'm sure I didn't miss any step from the tutorial. I also followed the Playground tutorial. What I have not done is create another app with Yeoman, but I will if I can't find a solution to this problem that doesn't require me to create another app.
These were my steps:
1-. Created my app with Yeoman
yo hyperledger-composer:businessnetwork
2-. Selected Apache-2.0 for my license
3-. Created a3-policy-microservice as the name of the business network
4-. Created org.microservice.policy (yeah, I switched the names, but I'm totally aware of that)
5-. Generated my app with a template selecting the NO option
6-. Created my assets, participants and transactions
7-. Changed my permission rules to mine
8-. I generated the .bna file
composer archive create -t dir -n .
9-. Then installed my bna file
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-microservice@0.0.1.bna
10-. Then started my network and created my networkadmin card
composer network start --networkName a3-policy-network --networkVersion 0.0.1 --networkAdmin admin --networkAdminEnrollSecret adminpw --card PeerAdmin@hlfv1 --file networkadmin.card
11-. Imported my card
composer card import --file networkadmin.card
12-. Tried to ping my network
composer network ping --card admin@a3-poliza-microservice
And the error happens
Later I tried to create everything again, shutting down my Fabric, starting it again, and creating the network from the first step.
My other attempt was to change the permissions and upgrade my .bna network, but that failed too. I'm running out of options.
I hope this description is not too long to be ignored. Thanks in advance.
Thanks for the question!
The first possibility is that your network name is a3-policy-network but you're pinging a network called a3-poliza-microservice - that mismatch will still trip you up once you do get the correct ACLs in place (currently, the ACL error is the one you're trying to resolve).
The procedure for an upgrade would normally be as follows:
After your step 12 (where you can't ping the business network due to restrictive ACL conditions, assuming you are using the right network name) you would:
Make the changes to permissions.acl to include your system ACLs this time, e.g.:
/**
* Sample access control list.
*/
rule SystemACL {
description: "System ACL to permit all access"
participant: "org.hyperledger.composer.system.Participant"
operation: ALL
resource: "org.hyperledger.composer.system.**"
action: ALLOW
}
rule NetworkAdminUser {
description: "Grant business network administrators full access to user resources"
participant: "org.hyperledger.composer.system.NetworkAdmin"
operation: ALL
resource: "**"
action: ALLOW
}
rule NetworkAdminSystem {
description: "Grant business network administrators full access to system resources"
participant: "org.hyperledger.composer.system.NetworkAdmin"
operation: ALL
resource: "org.hyperledger.composer.system.**"
action: ALLOW
}
Update the "version" field in your existing package.json in your Business Network project directory (ie need to change it next increment - eg. update the version property from 0.0.1 to 0.0.2.)
From the same directory, run the following command:
composer archive create --sourceType dir --sourceName . -a a3-policy-network@0.0.2.bna
Now install the new business network code firstly:
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-network@0.0.2.bna
Then perform the requisite upgrade step (single '-' for short form of the parameter):
composer network upgrade -c PeerAdmin@hlfv1 -n a3-policy-network -V 0.0.2
After a few seconds, ping the network again to see that the ACL changes are now in effect (note that -c takes a card name, not a network name):
composer network ping -c admin@a3-policy-network
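If the ping still fails, it's worth listing the cards you actually have imported; composer card list is part of the standard Composer CLI and shows the exact card names (and hence network names) available to ping:
$ composer card list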
I'm trying to make a little app which updates Redmine issues. To start with, I wanted to test the API calls to make sure I knew what I was doing, and I've already hit a wall.
I fired up Postman with a PUT.
URL
http://address:port/issues/1.json
headers:
Content-Type:application/json
X-Redmine-API-Key:MYKEY
X-Redmine-Switch-User:MYUSERNAME
body:
{"issue": { "id":"5729", "subject": "This change happens", "status": { "id": "1", "name": "This change is ignored" } } }
However, when I hit send and look in Redmine, only the subject has been updated; the status doesn't change. I can also see that the "last updated" field updates to the current time/date.
I've seen several answers to questions like this already, but the solution always seems to be adding the content type to the header... and I've already got that.
Am I missing something obvious?
Here is my Redmine environment, if relevant:
Environment:
Redmine version 2.5.1.stable
Ruby version 1.9.3-p0 (2011-10-30) [i386-mingw32]
Rails version 3.2.17
Environment production
Database adapter PostgreSQL
SCM:
Subversion 1.8.13
Mercurial 3.4
Git 1.9.5
Filesystem
Redmine plugins:
clipboard_image_paste 1.8
redmine_backlogs v1.0.6
redmine_ckeditor 1.0.16
redmine_dashboard 3.0.0.dev0
redmine_issue_checklist 2.0.5
redmine_questions 0.0.5
redmine_release_notes 1.3.1
redmine_repobrowser 1.3.0
redmine_user_specific_theme 0.0.1
redmine_wiki_extensions 0.6.3
redmine_wiki_lists 0.0.3
According to http://www.redmine.org/projects/redmine/wiki/Rest_Issues#Updating-an-issue, it looks like you should only pass in the status id, as "status_id". Could you try something like this?
{"issue":
{
"id":"5729",
"subject": "This change happens",
"status_id": "1"
}
}
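To rule out Postman itself, here is a minimal curl sketch of the same request (address, port, MYKEY, and the issue id are placeholders carried over from the question; note that the issue id belongs in the URL, not the body):
$ curl -X PUT \
  -H "Content-Type: application/json" \
  -H "X-Redmine-API-Key: MYKEY" \
  -d '{"issue": {"subject": "This change happens", "status_id": "1"}}' \
  http://address:port/issues/5729.json
One more gotcha worth checking: Redmine silently ignores status_id when the workflow for the acting user's role doesn't allow that status transition, which produces exactly this "subject updates, status doesn't" behavior (especially when impersonating via X-Redmine-Switch-User).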
I am trying to use the OpenStack builder in Packer to clone an instance. So far I have developed this template:
{
"variables": {
},
"description": "This will create the baked vm images for any environment from dev to prod.",
"builders": [
{
"type": "openstack",
"identity_endpoint": "http://192.168.10.10:5000/v3",
"tenant_name": "admin",
"domain_name": "Default",
"username": "admin",
"password": "****************",
"region": "RegionOne",
"image_name": "cirros",
"flavor": "m1.tiny",
"insecure": "true",
"source_image": "0f9b69ee-4e9f-4807-a7c4-6a58355c37b1",
"communicator": "ssh",
"ssh_keypair_name": "******************",
"ssh_private_key_file": "~/.ssh/id_rsa",
"ssh_username": "root"
}
],
"provisioners": [
{
"type": "shell",
"inline": [
"sleep 60"
]
}
]
}
But upon running the script using packer build script.json I get the following error:
User:packer User$ packer build script.json
openstack output will be in this color.
1 error(s) occurred:
* ssh_private_key_file is invalid: stat ~/.ssh/id_rsa: no such file or directory
My id_rsa is a file starting and ending with:
-----BEGIN RSA PRIVATE KEY-----
key
-----END RSA PRIVATE KEY-----
I thought that meant it was a PEM-related file, so this seemed weird to me, and I made a pastebin of my PACKER_LOG: http://pastebin.com/sgUPRkGs
Initial analysis tells me that the only error is a missing Packer config file. Upon googling this, the top results say that if Packer doesn't find one, it falls back to defaults. Is this why it is not working?
Any help would be of great assistance. Apparently there are similar problems on the GitHub issues page (https://github.com/mitchellh/packer/issues), but I don't understand some of the solutions posted, or whether they apply to me.
I've tried to be as informative as I can. Happy to provide any information where I can!
Thank you.
* ssh_private_key_file is invalid: stat ~/.ssh/id_rsa: no such file or directory
The "~" character isn't special to the operating system. It's only special to shells and certain other programs which choose to interpret it as referring to your home directory.
It appears that Packer's OpenStack builder doesn't treat "~" as special either, so it's looking for a key file with the literal pathname "~/.ssh/id_rsa". It fails because it can't find a key file with that literal pathname.
Update the ssh_private_key_file entry to list the actual pathname to the key file:
"ssh_private_key_file": "/home/someuser/.ssh/id_rsa",
Of course, you should also make sure that the key file actually exists at the location that you specify.
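A quick way to see the pathname Packer needs is to let your shell do the expansion and then confirm the key is really there (a sketch; someuser is a placeholder for your username):
$ echo ~/.ssh/id_rsa
/home/someuser/.ssh/id_rsa
$ ls -l /home/someuser/.ssh/id_rsa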
I have to leave a post here, as this just bit me… I was using a variable with ~/.ssh/id_rsa and then changed it to use the full path; when I did, there was a trailing space at the end of the variable value being passed in from the command line via a Makefile, which was causing this error. Hope this saves someone some time.
Kenster's answer got you past your initial question, but it sounds like from your comment that you were still stuck.
Per my reply to your comment, Packer doesn't seem to support supplying a passphrase, but you CAN tell it to ask the running SSH agent for a decrypted key if the correct passphrase was supplied when the key was loaded. This should allow you to use Packer to build with a passphrase-protected SSH key, as long as you've loaded it into the SSH agent before attempting the build.
https://www.packer.io/docs/templates/communicator.html#ssh_agent_auth
The SSH communicator connects to the host via SSH. If you have an SSH
agent configured on the host running Packer, and SSH agent
authentication is enabled in the communicator config, Packer will
automatically forward the SSH agent to the remote host.
The SSH communicator has the following options:
ssh_agent_auth (boolean) - If true, the local SSH agent will be used
to authenticate connections to the remote host. Defaults to false.
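Putting that together, a sketch assuming your key lives at ~/.ssh/id_rsa: load the key into the agent first (the passphrase prompt comes from ssh-add, not Packer), then enable agent auth in the builder instead of pointing at a key file:
$ eval "$(ssh-agent -s)"   # start an agent if one isn't already running
$ ssh-add ~/.ssh/id_rsa    # enter the key's passphrase once here
Then, in the builder section of the template, drop the ssh_private_key_file line and set:
"ssh_agent_auth": true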
I'm trying to use the new Socket API for Chrome extensions, and I'm encountering a confusing error. The manifest for my sample app looks like this:
{
"name":"Yet Another Socket App",
"version":"0.0.1",
"manifest_version":2,
"permissions":[
"experimental", "socket"
],
"app":{
"launch":{
"local_path":"index.html"
}
}
}
The app is loading (i.e., no error alerts), but a warning appears beneath its entry in chrome://extensions: 'socket' is not allowed for specified package type (theme, app, etc.).
Notes: index.html exists and is a simple HTML document (and chrome.socket is indeed undefined within it). I have enabled experimental APIs via chrome://flags. I am running the Dev channel of Chrome (v22.0.1229.6 dev) on Ubuntu.
Is this a momentary hiccup in socket support (this is the Dev channel, after all), or am I setting up my app incorrectly somehow? Also, I had to uninstall Chrome Stable to install Dev; is it possible that apt-get purge google-chrome-stable and rm -rf ~/.config/google-chrome was insufficient to clear out every piece of the Stable channel?
I had the same problem trying to recreate this example from Google:
http://developer.chrome.com/trunk/apps/app_network.html
Chrome always said
Invalid value for 'permissions[1]'
The correct manifest file is available in their sample App:
https://github.com/GoogleChrome/chrome-app-samples/tree/master/udp
After changing permissions in manifest.json to
"permissions": [
"experimental",
{"socket": [
"udp-send-to"
]},
"app.window"
]
I can now access the socket object in Chrome version 23 or higher:
var socket = chrome.socket || chrome.experimental.socket;
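From there, a minimal sketch of sending a UDP datagram with the legacy chrome.socket API (the peer address 127.0.0.1:1337 is an arbitrary placeholder, and the "udp-send-to" permission above is what authorizes sendTo):
var socket = chrome.socket || chrome.experimental.socket;
socket.create('udp', {}, function (createInfo) {
  // Send a one-byte test datagram to the placeholder peer.
  var data = new ArrayBuffer(1);
  socket.sendTo(createInfo.socketId, data, '127.0.0.1', 1337, function (writeInfo) {
    console.log('bytes written: ' + writeInfo.bytesWritten);
    socket.destroy(createInfo.socketId); // release the socket when done
  });
});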