Redmine svn repository - can't auth from remote - mysql

Here's my setup:
Ubuntu Server 11.04
Apache 2.2.17
MySQL 5.1.54
RAILS_ENV=production /usr/share/redmine/script/about
About your application's environment
Ruby version 1.8.7 (i686-linux)
RubyGems version 1.3.7
Rack version 1.1
Rails version 2.3.11
Active Record version 2.3.11
Active Resource version 2.3.11
Action Mailer version 2.3.11
Active Support version 2.3.11
Edge Rails revision unknown
Application root /usr/share/redmine
Environment production
Database adapter mysql
Database schema version 20110511000000
/etc/apache2/sites-available/default
<VirtualHost *:80>
DocumentRoot /mnt/data/vortex/Dev/Web
ErrorLog ${APACHE_LOG_DIR}/apache_error.log
CustomLog ${APACHE_LOG_DIR}/apache_access.log combined
</VirtualHost>
/etc/apache2/sites-available/redmine
DocumentRoot /mnt/data/vortex/Dev/Web/redmine
PassengerDefaultUser www-data
RailsEnv production
RailsBaseURI /redmine
ErrorLog ${APACHE_LOG_DIR}/redmine_error.log
CustomLog ${APACHE_LOG_DIR}/redmine_access.log combined
/etc/apache2/conf.d/redmine-svn.conf
PerlLoadModule Apache::Authn::Redmine
<Location /svn>
DAV svn
SVNParentPath "/mnt/data/svn"
AuthType Basic
AuthName Redmine
Require valid-user
PerlAccessHandler Apache::Authn::Redmine::access_handler
PerlAuthenHandler Apache::Authn::Redmine::authen_handler
RedmineDSN "DBI:mysql:database=redmine_default;host=localhost"
RedmineDbUser "redmine"
RedmineDbPass "***"
</Location>
/etc/cron.d/redmine
*/10 * * * * root ruby /usr/share/redmine/extra/svn/reposman.rb --redmine localhost/redmine --svn-dir /mnt/data/svn --owner www-data --url file:///mnt/data/svn --key=***
Everything in Redmine is working fine, the repositories get created by reposman and can be browsed from their project page.
The problem arises when I try to access an SVN repo from a remote PC.
If I type svn ls http://server-ip/svn/prj it shows me the repo content without asking for a login.
With svn mkdir http://server-ip/svn/prj/dir it does ask for a password, but as soon as I enter it, I get prompted for the login again. After the third try I get the following error:
svn: MKACTIVITY di '/svn/test1/!svn/act/25265483-dc10-4e3b-a7a5-a2e5bb84486f': authorization failed: Could not authenticate to server: rejected Basic challenge (http://192.168.1.201)
I can't figure out why authentication doesn't work.
I was expecting a login prompt for the svn ls command as well.
I also checked the sessions on the MySQL server and can't see any for the user 'redmine' when I try to access the repository, so it seems Apache/Redmine doesn't even try to connect to MySQL for authentication.
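One way to check whether the DSN and credentials from redmine-svn.conf are valid at all is a DBI one-liner (a sketch; substitute the real password for ***):
perl -MDBI -e 'DBI->connect("DBI:mysql:database=redmine_default;host=localhost", "redmine", "***") or die $DBI::errstr; print "connected ok\n"'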
I followed this guide to set up svn access.
Does someone know how to fix my problem?
Thank you

I had the same issue. The problem is with the accounts that use LDAP authentication. If you create an internal account and add it to the project as a developer you will be able to commit.
To get Redmine, LDAP, and SVN to work you need to add "PerlLoadModule Authen::Simple::LDAP" to your Apache configuration, as mentioned here:
http://www.redmine.org/projects/redmine/wiki/Repositories_access_control_with_apache_mod_dav_svn_and_mod_perl#optional-LDAP-Authentication
You should have better luck on Ubuntu, but to get Authen::Simple::LDAP installed on OpenSUSE 11.3 inside our corporate firewall I had to:
get CPAN to FTP in passive mode (http://www.netadmintools.com/art273.html; see the sketch after this list)
configure CPAN and install in the following order:
cpan> install Module::Implementation
cpan> install Attribute::Handlers
cpan> install Params::Validate
cpan> install Authen::Simple
cpan> install Authen::Simple::LDAP
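For the passive-FTP step, in my case it came down to an environment variable exported before starting cpan (an assumption based on Net::FTP honoring FTP_PASSIVE; your proxy setup may differ):
export FTP_PASSIVE=1
cpan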
After this it still was not working, so I started debugging. I installed tcpdump and figured out that it was not using the configured port for authentication. I modified Redmine.pm to pass a full URL instead of a bare hostname, and that fixed it:
# open (LEELOG, ">>/tmp/leelog");
# print LEELOG "-----------\n";
# print LEELOG "$rowldap[0]\n";
# print LEELOG "$rowldap[1]\n";
# print LEELOG "$rowldap[2]\n";
# print LEELOG "$rowldap[3]\n";
# print LEELOG "$rowldap[4]\n";
# print LEELOG "$rowldap[5]\n";
# print LEELOG "$rowldap[6]\n";
my $ldap = Authen::Simple::LDAP->new(
    # build a full ldap:// or ldaps:// URL so the configured port is actually used
    host   => ($rowldap[2] eq "1" || $rowldap[2] eq "t") ? "ldaps://$rowldap[0]:$rowldap[1]" : "ldap://$rowldap[0]:$rowldap[1]",
    port   => $rowldap[1],
    basedn => $rowldap[5],
    binddn => $rowldap[3] ? $rowldap[3] : "",
    bindpw => $rowldap[4] ? $rowldap[4] : "",
    filter => "(".$rowldap[6]."=%s)"
);
This page is helpful:
http://www.rhonabwy.com/wp/2009/12/24/debugging-active-directory-ldap-authentication-in-redmine/

In my situation, I just unchecked Settings > Information > Public and it works! Public projects don't require auth for read access (which is why no login prompt appears), and I could not commit while the project was public.
I didn't use LDAP.

Related

Environment variable not found: DATABASE_URL. Prisma and mysql

I first developed an API locally with Node.js, Express, Prisma, and MySQL. Once it worked, I deployed my API to Heroku and added the ClearDB add-on to get a MySQL DB on Heroku.
The deployment itself is okay: when I go to my root URI I get the "Cannot GET /" message, and when I connect to my ClearDB database with MySQL Workbench I can see my tables, columns, etc.
The main problem comes from Prisma.
When I go to the "Run console" of my Heroku project, the command npx prisma init works perfectly, BUT when I type npx prisma migrate deploy (or dev), or if I try npx prisma db push, I get this error:
Error: Get Config: Schema parsing - Error while interacting with query-engine-node-api library
Error code: P1012
error: Environment variable not found: DATABASE_URL.
--> schema.prisma:10
|
9 | provider = "mysql"
10 | url = env("DATABASE_URL")
|
All my code is in a GitHub repo, and I've configured my .env (which is in the root folder of my server) like this:
DATABASE_URL="mysql://<username>:<my-password>@eu-cdbr-west-30.cleardb.net/heroku_36d295ebb6686a2"
NODE_ENV="development"
APP_SECRET="jwtsecret12"
NODE_PATH="./src"
ACCESS_TOKEN_SECRET="651651651848754cdfce9fz8ef4ef54se8f4sef48s69ef84e"
I hope you have all the information you need to help me :)
PS: locally my project works perfectly.
Waiting for your answers, thank you very much!
Your .env file is irrelevant. It should not be used on Heroku (and should not be tracked in your repository).
ClearDB provides an environment variable called CLEARDB_DATABASE_URL, not DATABASE_URL. You can either change your code to use this variable instead of DATABASE_URL, or you can set DATABASE_URL to the same value:
Retrieve your database URL by issuing the following command:
heroku config | grep CLEARDB_DATABASE_URL
CLEARDB_DATABASE_URL => mysql://adffdadf2341:adf4234@us-cdbr-east.cleardb.com/heroku_db?reconnect=true
Copy the value of the CLEARDB_DATABASE_URL config variable.
If you’re using Ruby on Rails and the mysql2 gem, you will need to change the mysql:// scheme in the CLEARDB_DATABASE_URL to mysql2://
heroku config:set DATABASE_URL='mysql://adffdadf2341:adf4234@us-cdbr-east.cleardb.com/heroku_db?reconnect=true'
Adding config vars:
DATABASE_URL => mysql2://adffd...b?reconnect=true
Restarting app... done, v61.
The connection information for Heroku Postgres can change at any time, but since the ClearDB documentation provides the preceding guidance I would hope that it does not do so.
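Alternatively, if you would rather change the code than duplicate the config var, pointing the Prisma datasource at ClearDB's variable is a one-line edit (a sketch, assuming the default prisma/schema.prisma location):
sed -i 's/env("DATABASE_URL")/env("CLEARDB_DATABASE_URL")/' prisma/schema.prisma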

go-ethereum - geth - puppeth - ethstat remote server : docker: command not found

I'm trying to set up a private Ethereum test network using Puppeth (as Péter Szilágyi demoed at Ethereum Devcon3 in 2017). I'm running it on a MacBook Pro (macOS Sierra).
When I try to set up the ethstats network component I get a "docker configured incorrectly: bash: docker: command not found" error. I have Docker running and I can use it fine in the terminal, e.g. docker ps.
Here are the steps I took:
What would you like to do? (default = stats)
1. Show network stats
2. Manage existing genesis
3. Track new remote server
4. Deploy network components
> 4
What would you like to deploy? (recommended order)
1. Ethstats - Network monitoring tool
2. Bootnode - Entry point of the network
3. Sealer - Full node minting new blocks
4. Wallet - Browser wallet for quick sends (todo)
5. Faucet - Crypto faucet to give away funds
6. Dashboard - Website listing above web-services
> 1
Which server do you want to interact with?
1. Connect another server
> 1
Please enter remote server's address:
> localhost
DEBUG[11-15|22:46:49] Attempting to establish SSH connection server=localhost
WARN [11-15|22:46:49] Bad SSH key, falling back to passwords path=/Users/xxx/.ssh/id_rsa err="ssh: cannot decode encrypted private keys"
The authenticity of host 'localhost:22 ([::1]:22)' can't be established.
SSH key fingerprint is xxx [MD5]
Are you sure you want to continue connecting (yes/no)? yes
What's the login password for xxx at localhost:22? (won't be echoed)
>
DEBUG[11-15|22:47:11] Verifying if docker is available server=localhost
ERROR[11-15|22:47:11] Server not ready for puppeth err="docker configured incorrectly: bash: docker: command not found\n"
Here are my questions:
Is there any documentation / tutorial describing how to set up this remote server properly, or about puppeth in general?
Can I not use localhost as the "remote server address"?
Any idea why the docker command is not found (it is installed and running, and I can use it fine in the terminal)?
Here is what I did.
For docker, you also have to install the docker-compose binary. You can find it here.
Furthermore, you have to make sure that an SSH server is running on your localhost and that keys have been generated.
I didn't find any documentation for puppeth whatsoever.
I think I found the root cause of this problem. The SSH daemon is compiled with a default PATH. If you ssh to a machine with a specific command (rather than an interactive shell), you get that default PATH, which does not include, for example, /usr/local/bin, where docker lives in my case.
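You can see the difference yourself, since the non-interactive form is what puppeth uses (a sketch):
ssh localhost 'echo $PATH'   # the compiled-in default, without /usr/local/bin
ssh localhost docker ps      # bash: docker: command not found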
I found the solution here: https://serverfault.com/a/585075:
edit /etc/ssh/sshd_config and make sure it contains PermitUserEnvironment yes (you need to edit this with sudo)
create a file ~/.ssh/environment with the path that you want, in my case:
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
When you now run ssh localhost env you should see a PATH that matches whatever you put in ~/.ssh/environment.
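One caveat: sshd only honors PermitUserEnvironment after re-reading its configuration. On a systemd-based Linux that would be the following (on macOS, launchd starts sshd on demand, so a fresh connection is usually enough):
sudo systemctl restart sshd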

Specify JFROG_ACCESS home instead of ~/.jfrog_access (Artifactory 5.5.2)

I managed to set up Artifactory using our existing Tomcat. I have set ARTIFACTORY_HOME=/opt/artifactory, and that part works well. There is, however, also the JFrog access.war file, which needs to be running as well. I couldn't figure out which variable to use to specify its home, so it defaults to ~/.jfrog_access, which is not at all what I'd like.
I moved the content over to my $ARTIFACTORY_HOME/access and symlinked it, but that's surely not the way to go. Any help appreciated.
In case someone stumbles over this thread and struggles with the same problem:
The solution for me was to also extract the context files (access.xml and artifactory.xml, available in the zip file under <zip extract>/misc/tomcat) to the Tomcat configuration folder, e.g. $CATALINA_HOME/conf/Catalina/localhost/. After that, the $ARTIFACTORY_HOME env will be recognized on Access startup.
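As a concrete sketch of that step (assuming the default localhost host directory; adjust the target for your hostname):
cp "<zip extract>/misc/tomcat/access.xml" "<zip extract>/misc/tomcat/artifactory.xml" $CATALINA_HOME/conf/Catalina/localhost/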
A previous answer finally put me on the right track for solving this problem on Amazon Linux.
In addition to copying access.xml and artifactory.xml to ${catalina.home}/conf/Catalina/MY_HOSTNAME, I found that some other changes were needed.
I modified the docBase attributes in the XML context files because my server has multiple hostnames:
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/access.xml
<Context path="/access" docBase="${catalina.home}/host/repo.mydomain.org/access.war">
<Parameter name="jfrog.access.bundled" value="true" override="true"/>
<!-- enable annotations scanning of access jar files -->
<JarScanner scanClassPath="false">
<JarScanFilter defaultPluggabilityScan="false" pluggabilityScan="access*" defaultTldScan="false"/>
</JarScanner>
</Context>
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/artifactory.xml
<Context crossContext="true" path="/artifactory" docBase="${catalina.home}/host/repo.mydomain.org/artifactory.war">
</Context>
Important Note: In order to prevent the above two XML files from being deleted by Tomcat Manager during upgrades via Undeploy/Deploy WAR, make sure they are owned by root and not writable by the tomcat user:
chown root.root access.xml artifactory.xml
chmod 644 access.xml artifactory.xml
If you forget to do the above, you will likely end up missing these files, which will break the communication between the access and artifactory web applications, resulting in login failures ("Username or Password Are Incorrect"). In this case, these errors result from the lack of communication between the web applications, not a problem with the credentials themselves.
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/manager.xml
This gives me the ability to upload new versions of access.war and artifactory.war via https://repo.mydomain.org:8443/manager/html:
<Context docBase="${catalina.home}/webapps/manager" privileged="true" antiResourceLocking="false">
</Context>
Additionally, I created the following folder to serve as the artifactory.home:
sudo mkdir /usr/share/artifactory
sudo chown tomcat.tomcat /usr/share/artifactory
tomcat8.conf
Add (or modify) the following line:
JAVA_OPTS="-Dartifactory.home=/usr/share/artifactory -Djfrog.access.home=/usr/share/artifactory/access -Dartifactory.access.client.serverUrl.override=http://localhost:8080/access"
Note: The Access Client URL specified above must use localhost in order to avoid the Server HTTP header being overwritten by Apache and its modules. For instance, if I use:
https://repo.mydomain.org/access/api/v1/system/ping
The Server HTTP header value in the response is:
Server: Apache/2.4.33 (Amazon) OpenSSL/1.0.2k-fips mod_jk/1.2.43
And the Access Client produces the following exception:
[ERROR] (o.j.a.c.AccessClientImpl:154) - Access client/server version mismatch. Client version: 4.1.5, Server version: 2.4.33 (Amazon) OpenSSL
This means the Access Client depends on the first string matching #.#.# in the Server header. That seems like a really fragile part of the Access Client; they should have used X-JFrog-Access-Server or something instead of parsing a value that is set by the web server. So, to reiterate, use http://localhost:8080/access to connect directly to the Tomcat server.
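To see what the client will actually parse, you can compare the Server header returned through Apache with the one returned by Tomcat directly (a sketch):
curl -sI https://repo.mydomain.org/access/api/v1/system/ping | grep -i '^Server'
curl -sI http://localhost:8080/access/api/v1/system/ping | grep -i '^Server'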
Artifactory 6.2.0 depends on Apache Derby (the specific version can be found in jfrog-artifactory-oss-6.2.0.zip\artifactory-oss-6.2.0\tomcat\lib). This should be added as a shared library to Tomcat:
mkdir /usr/share/tomcat8/shared
cd /usr/share/tomcat8/shared
wget http://central.maven.org/maven2/org/apache/derby/derby/10.11.1.1/derby-10.11.1.1.jar
Add or modify the following line in catalina.properties:
shared.loader=${catalina.home}/shared/*.jar
Since we want https://repo.mydomain.org to go to the Artifactory webapp:
mkdir /usr/share/tomcat8/host/repo.mydomain.org/ROOT
echo '<html><head><meta http-equiv="refresh" content="0;URL=/artifactory"></meta></head><body></body></html>' > /usr/share/tomcat8/host/repo.mydomain.org/ROOT/index.html
And make sure the services automatically start on reboot:
sudo chkconfig httpd on
sudo chkconfig tomcat8 on
Artifactory will then be available at the url:
https://repo.mydomain.org/artifactory/webapp/

CakePHP 3 - Enable SSL on development server [duplicate]

OS: Ubuntu 12.04 64-bit
PHP version: 5.4.6-2~precise+1
When I test an https page I am writing through the built-in webserver (php5 -S localhost:8000), Firefox (16.0.1) says "Problem loading: The connection was interrupted", while the terminal tells me "::1:37026 Invalid request (Unsupported SSL request)".
phpinfo() tells me:
Registered Stream Socket Transports: tcp, udp, unix, udg, ssl, sslv3, tls
[curl] SSL: Yes
SSL Version: OpenSSL/1.0.1
openssl:
OpenSSL support: enabled
OpenSSL Library Version OpenSSL 1.0.1 14 Mar 2012
OpenSSL Header Version OpenSSL 1.0.1 14 Mar 2012
Yes, http pages work just fine.
Any ideas?
See the manual section on the built-in webserver shim:
http://php.net/manual/en/features.commandline.webserver.php
It doesn't support SSL encryption; it's for plain HTTP requests only. The openssl extension and function support is unrelated: the built-in server does not accept requests or send responses over the stream wrappers.
If you want SSL to run over it, try a stunnel wrapper:
php -S localhost:8000 &
stunnel3 -d 443 -r 8000
It's just for toying around anyway.
It's been three years since the last update; here's how I got it working in 2021 on macOS (as an extension to mario's answer):
# Install stunnel
brew install stunnel
# Find the configuration directory
cd /usr/local/etc/stunnel
# Copy the sample conf file to actual conf file
cp stunnel.conf-sample stunnel.conf
# Edit conf
vim stunnel.conf
Modify stunnel.conf so it looks like this:
(all other options can be deleted)
; **************************************************************************
; * Global options *
; **************************************************************************
; Debugging stuff (may be useful for troubleshooting)
; Enable foreground = yes to make stunnel work with Homebrew services
foreground = yes
debug = info
output = /usr/local/var/log/stunnel.log
; **************************************************************************
; * Service definitions (remove all services for inetd mode) *
; **************************************************************************
; ***************************************** Example TLS server mode services
; TLS front-end to a web server
[https]
accept = 443
connect = 8000
cert = /usr/local/etc/stunnel/stunnel.pem
; "TIMEOUTclose = 0" is a workaround for a design flaw in Microsoft SChannel
; Microsoft implementations do not use TLS close-notify alert and thus they
; are vulnerable to truncation attacks
;TIMEOUTclose = 0
This accepts HTTPS / SSL at port 443 and connects to a local webserver running at port 8000, using stunnel's default bogus cert at /usr/local/etc/stunnel/stunnel.pem. Log level is info and log outputs are written to /usr/local/var/log/stunnel.log.
Start stunnel:
brew services start stunnel # Different for Linux
Start the webserver:
php -S localhost:8000
Now you can visit https://localhost (port 443) to reach your webserver.
There will be a cert error and you'll have to click through a browser warning, but that gets you to the point where you can hit your localhost with HTTPS requests during development.
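You can sanity-check it from the terminal as well; curl's -k flag skips validation of the bogus default cert (a sketch):
curl -k https://localhost/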
I've been learning nginx and Laravel recently, and this error has come up many times. It's hard to diagnose because you need to align nginx with Laravel and also the SSL settings in your operating system at the same time (assuming you are making a self-signed cert).
If you are on Windows, it is even more difficult because you have to fight Unix carriage returns when dealing with SSL certs. Sometimes you go through the steps correctly but still get tripped up by cert validation issues. I find the trick is to make the certs on Ubuntu or a Mac and email them to yourself, or to use the Linux subsystem.
In my case, I kept running into an issue where I had declared HTTPS somewhere, even though php artisan serve only works over HTTP.
I just caused this Invalid request (Unsupported SSL request) error again after SSL was hooked up fine. It turned out that I was using Axios to make a POST request to an https:// URL; changing it to http:// fixed it.
My recommendation to anyone would be to look at where and how HTTP/HTTPS is being used.
The textbook description is probably something like: php artisan serve speaks plain HTTP only, so any request that tries to start an SSL handshake against it fails with this error.
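You can reproduce the distinction with curl against the dev server (a sketch, assuming it listens on port 8000):
curl http://localhost:8000/    # works
curl https://localhost:8000/   # the server logs "Invalid request (Unsupported SSL request)"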
Use ngrok.
Expose your server's port like so:
ngrok http <server port>
Browse with ngrok's secure public address (the one with https).
Note: though it works like a charm, it seems like overkill since it requires an internet connection, so I would appreciate better recommendations.

TC7 (20939) : upgrade : mercurial : http auth : Test Connection Succeeds... but build checks fail (http auth)

I have been using EAP 7 for a couple of months; this is the 2nd upgrade.
I upgraded to build 20939 today and now get errors when builds try to check Mercurial for changes (VCS problem: FOO Edit this VCS root>>). If I edit the VCS root and click Test Connection, it succeeds. How do I go about debugging this issue?
I have tried re-saving the VCS root. I also deleted and recreated the VCS root on one project and got the same result.
The recent entries in the teamcity-vcs log don't have domain\user:password; should they?
I now have both the TeamCity and build agent services running under my AD account. I don't remember which account the TeamCity service was using before the upgrade (is that logged somewhere?).
If the VCS root is configured with an 'https://' URL and has a user/password, why don't I see the credentials in the log message (see above post)?
My user directory contains mercurial.ini / an SSL cert (and was working pre-upgrade).
TeamCity is hosted on Windows Server 2008, with a Mercurial repo, using Active Directory credentials for authentication.
The teamcity service is running as Local System.
The build agent is running as an AD account (for builds that deploy to other machines).
newest errors:
[2012-01-11 17:12:39,578] WARN [cutor 4 {id=29}] - jetbrains.buildServer.VCS - Error while loading changes for root mercurial: https://mycompany.com/myproject {instance id=29, parent id=8}, cause: 'cmd /c hg pull https://mycompany.com/MyProject' command failed.
stderr: abort: http authorization required
older errors:
[2012-01-10 16:38:02,791] INFO [TeamCity Agent ] - jetbrains.buildServer.VCS - Patch applied for agent=computer {id=1, host=127.0.0.1:9090}, buildType=Project :: MVC3 {id=bt12}, root=mercurial: https://mycompany/myproject {instance id=12, parent id=1}, version=3775:7fc0ae5029e6
[2012-01-11 10:30:36,277] INFO [_Server_StartUp] - jetbrains.buildServer.VCS - Server-wide hg path is not set, will use path from the VCS root settings
The problem persisted after a complete uninstall/re-install.
In the VCS root definition I left the user/password fields blank and encoded the user:password into the 'Pull changes from' string (just like you'd do on the command line):
https://domain\user:password@hg.mycompany.com/Repo
To sort of clean up the plaintext password, I created a project-level property 'MyPassword' (type password) and used it in the connection string like this:
https://domain\user:%MyPassword%@hg.mycompany.com/Repo
Still not great, but I'm up and running and the password is not viewable by casual users.
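To confirm that the credentials-in-URL form works outside TeamCity, you can run the same pull the server runs by hand (a sketch, using the hypothetical URL from above):
hg pull https://domain\user:password@hg.mycompany.com/Repo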