I have been trying to fetch the Chromium source code. However, I have been stuck on gclient sync for 2 days.
gclient sync fails every time due to an SSL certificate verification error.
The log is as follows:
rna@rna-P580:~/workspace/project$ gclient sync
Syncing projects: 98% (83/84), done.
________ running 'download_from_google_storage --no_resume --platform=linux* --no_auth --bucket chromium-gn -s src/buildtools/linux32/gn.sha1' in '/home/rna/workspace/project'
/home/rna/workspace/project/depot_tools/third_party/boto/pyami/config.py:75: UserWarning: Unable to load AWS_CREDENTIAL_FILE ()
warnings.warn('Unable to load AWS_CREDENTIAL_FILE (%s)' % full_path)
Failure: [Errno 1] _ssl.c:509: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed.
Error: Command download_from_google_storage --no_resume --platform=linux* --no_auth --bucket chromium-gn -s src/buildtools/linux32/gn.sha1 returned non-zero exit status 1 in /home/rna/workspace/project
I am guessing this happens because I am behind a company firewall.
So I requested that HTTP & HTTPS be opened. But still no luck.
Can someone help me out, please? I'm on Ubuntu 13.10.
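If the firewall is a TLS-intercepting proxy, one possible workaround is to trust the proxy's CA certificate. A sketch, assuming your IT team can hand you that certificate (the file paths here are hypothetical): point boto at it through a custom config, which download_from_google_storage reads from the NO_AUTH_BOTO_CONFIG environment variable when invoked with --no_auth:

# contents of /home/rna/.boto-corp (hypothetical path)
[Boto]
https_validate_certificates = True
ca_certificates_file = /usr/local/share/ca-certificates/corp-proxy.pem

export NO_AUTH_BOTO_CONFIG=/home/rna/.boto-corp
gclient sync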
I ran into this problem as well; what fixed it for me was running sudo apt-get update and sudo apt-get upgrade.
I modified the DEPS file under the /trunk directory, commenting out some code like this:
#{
#   # Download test resources, i.e. video and audio files from Google Storage.
#   "pattern": "\\.sha1",
#   "action": ["download_from_google_storage",
#              "--directory",
#              "--recursive",
#              "--num_threads=10",
#              "--no_auth",
#              "--bucket", "chromium-webrtc-resources",
#              Var("root_dir") + "/resources"],
# },
Then I reran gclient runhooks and got a correct result.
FROM:
https://code.google.com/p/webrtc/issues/detail?id=3314
(I have CentOS 7 with samba-client.x86_64 4.6.2-8.el7 against a Windows Server 2008 machine that is in an AD domain controlled by a separate Windows Server 2008 AD domain controller.)
Started with this:
smbclient -W my.domain -U myuser //svr.my.domain/fred mypassword -c list
... which worked great. I then decided to move the domain, user, and password into a file and use -A as described in the smbclient man page. File windows-credentials, contents:
username=myuser
domain=my.domain
password=mypassword
... with command line:
smbclient -A windows-credentials //svr.my.domain/fred -c list
... did not work, and gave this error:
SPNEGO(gse_krb5) NEG_TOKEN_INIT failed: NT_STATUS_NO_MEMORY
session setup failed: NT_STATUS_NO_MEMORY
... An hour on the internet suggested lots of people have had this trouble, and just about every thread had a different accepted answer, none of which worked for me. I tried various combinations of their answers, in particular https://askubuntu.com/questions/1008992/ubuntu-17-10-to-access-windows-files-shares-within-workplace-it, and ended up with the following.
I created a separate my.smb.conf with just:
[global]
# seems to get rid of
# SPNEGO(gse_krb5) NEG_TOKEN_INIT failed: NT_STATUS_NO_MEMORY
client use spnego = no
# seems to get rid of
# session setup failed: NT_STATUS_NO_MEMORY
client ntlmv2 auth = no
... and used:
smbclient -s my.smb.conf -A windows-credentials //svr.my.domain/fred -c list
... and it looks like it works, but I'm not really sure, as there seems to be credential caching and a complete lack of information on how this stuff works or is supposed to work.
Can anyone actually explain any of this? Even if not, perhaps yet another answer to this problem will help someone somewhere.
This appears to be specific to Windows 2008. Attaching to Windows Server 2016 works without the modified smb.conf file. I have been unable to locate any real details.
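If the caching you are seeing is Kerberos ticket caching, you can at least inspect and clear it with the standard Kerberos client tools (klist and kdestroy, from the krb5-workstation package on CentOS):

klist
kdestroy

After kdestroy, the next smbclient run has to authenticate from scratch, which makes it easier to tell whether the smb.conf change is really what fixed things.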
In case of problems with smbclient, you can mount the SMB share and use it like a local folder:
mount -t cifs //<ip>/<share folder>$ /mnt -o user=<user>,pass=<password>,domain=<workdomain>
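To keep the password out of your shell history, mount.cifs also accepts a credentials file using the same username=/password=/domain= lines as the smbclient file above (the path here is hypothetical; chmod 600 it so only root can read it):

mount -t cifs //<ip>/<share folder>$ /mnt -o credentials=/root/.smbcredentials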
I am using supervisor (3.2.0-2ubuntu0.1) to manage gunicorn with this very common configuration:
[program:app]
command = sudo gunicorn -w 1 -b 0.0.0.0:8000 application:app --error-logfile /var/log/gunicorn/error.log --access-logfile /var/log/gunicorn/access.log
directory = /home/ubuntu/app
user = ubuntu
Supervisor correctly captures the logs from gunicorn, and gunicorn correctly generates its own logs.
However, as soon as there is a 500 in the underlying API served by gunicorn, supervisor stops capturing the logs (while gunicorn correctly records the issue in its error.log).
How do I fix this?
It turns out the issue was with the Python worker itself. If you try to log something that the logger cannot interpret, the logger becomes foobared and any further attempt to log is doomed.
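For illustration, a minimal sketch of the kind of call that can do this (all names here are made up): logging formats the message lazily inside the handler, so an argument whose repr raises blows up inside the logging machinery rather than at the call site.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("app")

class Poison:
    # e.g. a lazy ORM object whose repr hits a dead DB connection
    def __repr__(self):
        raise ValueError("cannot render this object")

# The ValueError surfaces inside the handler's emit(), not on this line.
log.info("request payload: %r", Poison())

By default the logging module prints such errors to stderr and carries on, so defensively pre-formatting risky values (str() them yourself before logging) is the safer pattern.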
I am trying to set up a BrowserQuest server that runs on OpenShift.
I've been following this readme. Everything seems to go fine, I get to the end and run rhc app show bq and get the following output:
bq # http://bq-plantagenet.rhcloud.com/ (uuid: 55e4311189f5cf028d0000fc)
------------------------------------------------------------------------
Domain: plantagenet
Created: 8:18 AM
Gears: 1 (defaults to small)
Git URL: ssh://55e4311189f5cf028d0000fc@bq-plantagenet.rhcloud.com/~/git/bq.git/
SSH: 55e4311189f5cf028d0000fc@bq-plantagenet.rhcloud.com
Deployment: auto (on git push)
nodejs-0.10 (Node.js 0.10)
--------------------------
Gears: Located with smarterclayton-redis-2.6
smarterclayton-redis-2.6 (Redis)
--------------------------------
From: http://cartreflect-claytondev.rhcloud.com/reflect?github=smarterclayton/openshift-redis-cart
Website: https://github.com/smarterclayton/openshift-redis-cart
Gears: Located with nodejs-0.10
But when I try to access http://bq-plantagenet.rhcloud.com:8080/ in a browser, I get:
The connection has timed out
The server at bq-plantagenet.rhcloud.com is taking too long to respond
My questions are: what is going wrong, and how can I fix it? Many thanks for reading through this, and for any suggestions you might have.
You need to access http://bq-plantagenet.rhcloud.com; leave off the port 8080, which is the port you listen on internally. You should also check your log files (https://developers.openshift.com/en/managing-log-files.html) to see what errors your application is producing.
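For example, the rhc client used in the question can stream the gear's logs directly (bq being the app name from above):

rhc tail bq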
To whom it may concern:
I am running CentOS 6.5 on my server.
I keep on receiving the following error when I type in yum update as the root user:
[root@dbtest /]# yum update
Loaded plugins: fastestmirror, security
Setting up Update Process
Loading mirror speeds from cached hostfile
epel/metalink | 14 kB 00:00
* epel: mirror.steadfast.net
* passenger: mirror.hmdc.harvard.edu
http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/repodata/repomd.xml: [Errno 14]
PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: Updates-ambari-1.x. Please
verify its path and try again
I am using these two links to assist me:
https://www.webmaster.net/fix-pycurl-error-22-the-requested-url-returned-error-404-not-found/
To the best of my knowledge, I am getting this error because something is wrong with the ambari.repo repository file under the /etc/yum.repos.d directory.
My question is: what can I do to fix the ambari.repo file, if anything, and what can I do to run yum update successfully without any errors?
This is what is inside the ambari.repo file. Any help would be greatly appreciated.
One more thing I would like to mention: I made changes to the CentOS-Base.repo file.
That URL is incorrect. A quick search online for ambari repo led me to this page, which seems to suggest that the correct path is now http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.2.3.7/ and that you can get a new repo file from http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.2.3.7/ambari.repo.
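A sketch of the repair, based on those URLs (back up the old file first):

cd /etc/yum.repos.d
mv ambari.repo ambari.repo.bak
wget http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.2.3.7/ambari.repo
yum clean all
yum update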
I have a cPanel server.
It sends emails correctly except from one domain hosted on the server, so when I try to send email from that domain using Roundcube or Horde I get the error:
SMTP Error (451): Failed to add recipient "recipient@exmple.com" (Temporary local problem - please try later).
Does anyone know why, and how to fix this?
I found the problem:
After reviewing the file /var/log/exim_mainlog using
tail -f /var/log/exim_mainlog
I noticed that the error was:
2013-05-29 20:04:28 SMTP connection from [127.0.0.1]:36797 (TCP/IP connection count = 1)
2013-05-29 20:04:28 lowest numbered MX record points to local host: domain.com (while verifying <user@domain.com> from host localhost.localdomain (domain.com) [127.0.0.1]:36797)
2013-05-29 20:04:28 H=localhost.localdomain (domain.com) [127.0.0.1]:36797 sender verify defer for <user@domain.com>: lowest numbered MX record points to local host
2013-05-29 20:04:28 H=localhost.localdomain (domain.com) [127.0.0.1]:36797 F=<user@domain.com> A=dovecot_login:narena temporarily rejected RCPT <recipient@exmple.com>: Could not complete sender verify
2013-05-29 20:04:28 SMTP connection from localhost.localdomain (domain.com) [127.0.0.1]:36797 closed by QUIT
So the main problem was:
lowest numbered MX record points to local host
After a couple of searches I found the solution at http://forums.cpanel.net/f5/lowest-numbered-mx-record-points-local-host-73563.html,
which was to:
Log in to WHM and go to Main >> DNS Functions >> Edit MX Entry for the domain.
Set the MX priority to 0 for the related domain and save.
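To confirm the DNS change is live, you can query the MX record directly (dig ships with the bind-utils package on CentOS):

dig +short MX domain.com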
I had the same problem after running a script to fix directory permissions on a cPanel-powered server (CentOS 6.5). I checked the logfile (tail -f /var/log/exim_mainlog) and found this error:
require_files: error for /home/user_name/etc/domain.com: Permission denied
I just ran the following command and the issue was fixed:
chown -R user_name:mail /home/user_name/etc/
Hope this helps someone.
Check the file /var/log/exim_mainlog to see more information about the error while trying to send email:
tail -f /var/log/exim_mainlog
Check your MX entry in cPanel; if the existing domain priority is less than or equal to 0, set it to 1. Mine is fixed; hope it will help you.
Wow, after about an hour of searching and meddling with different files, I'd caution any novice not to venture out editing anything before you have a backup or image of your server, as you can cause irrevocable damage. So many people talk garbage about what you should do or test without offering any real solution.
Anyways, here's what worked for me:
Real problem: Exim was updated to the latest version, which has loads of bugs, including this issue.
How I fixed my server:
Authenticate to Linux via SSH and run the following commands, which download and install the old version of Exim.
Command Line 1: wget https://ca1.dynanode.net/exim-4.93-3.el7.x86_64.rpm
Command Line 2: rpm -Uvh --oldpackage exim-4.93-3.el7.x86_64.rpm
Command Line 3: systemctl restart exim
Command Line 4: systemctl restart clamd
Command Line 5: systemctl restart spamassassin
Optional: just type "reboot" to restart your server
The command lines above do the following:
Download the old package (I'm sure you can google other sources for this file)
Install the old package without prompting
Restart the Exim service
Restart the clamd service (AV)
Restart the spamassassin service (spam filter)
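To confirm the downgrade took effect before testing mail, you can print the running Exim version and build info:

exim -bV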
Restart Outlook or whatever you use as a mail client and send an email. Mine works; hope yours does too.