Samba does not create homedirs automatically

All shares on the samba server are accessible except the homes share.
The operating system is Ubuntu 18.04.
I have enabled pam_mkhomedir in /etc/pam.d/samba
Connecting to the homes share of user "administrator" makes smbd log this error:
canonicalize_connect_path failed for service administrator,
Is there anything I've forgotten to configure?

The following parameter has to be set in /etc/samba/smb.conf:
obey pam restrictions = yes
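Putting the pieces together, a minimal sketch of the two files involved (the skel path and umask are assumptions; the pam_mkhomedir line is the one already enabled in the question):

# /etc/samba/smb.conf
[global]
    obey pam restrictions = yes

# /etc/pam.d/samba
session    required    pam_mkhomedir.so skel=/etc/skel umask=0022

# apply the change
sudo systemctl restart smbd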

SASL Error connecting to remote libvirt over SSH: No worthy mechs found

I have a server running Ovirt Node that I'm trying to manage remotely using libvirt. I have an SSH keypair installed and can ssh user@server -i ssh-privkey successfully. When I try to connect to qemu+ssh://user@host/system?keyfile=ssh-privkey, I get this error:
authentication failed: Failed to start SASL negotiation: -4 (SASL(-4): no mechanism available: No worthy mechs found)
That led me down the path of getting TLS keys and certificates installed on the client and the server, mostly according to these instructions (the configuration is slightly different because I have only one host and am using Terraform to manage the certificates*). However, I still get the same error. When I look at the output of libvirtd --listen --verbose on the server when a connection fails, the only useful output is this:
error : virNetSocketReadWire:1792 : End of file while reading data: Input/output error
I have checked every firewall between the client and the server and they should all be wide open. What else could be the cause of this error?
* The goal is ultimately to use Terraform to provision libvirt resources; however, I get the same errors trying to connect with virsh and virt-manager.
UPDATE: It's easier to connect just via SSH; this question exists because I couldn't figure out how to turn off SASL. It turns out SASL is enabled for SSH connections due to vdsm setting auth_unix_rw="sasl" in /etc/libvirt/libvirtd.conf. Removing that config means I can just use my SSH private key as I intended. The TLS configuration was a wild goose chase that was further hindered by vdsm changing the configured location of all the PKI files.
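For reference, a minimal sketch of the change described in the update (this assumes the stock file location; vdsm may have moved or templated it on your system):

# /etc/libvirt/libvirtd.conf: comment out or remove the line vdsm added,
#auth_unix_rw = "sasl"
# then restart the daemon so qemu+ssh:// connections authenticate with the SSH key alone
sudo systemctl restart libvirtd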
You're likely missing an RPM package on your client host. First, on the virtualization host, check /etc/sasl2/libvirt.conf and see which 'mech_list' setting is uncommented.
Back on your client, you'll need to install a 'cyrus-sasl-XXXX' RPM that provides the same mechanism the server is set to use. For a modern libvirt install it will probably be using 'cyrus-sasl-scram' for plain username/password auth, but older installs might still be using 'cyrus-sasl-md5'.
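A sketch of that check and of installing the matching plugin (the package manager and the exact mechanism are assumptions; match them to your distro and to what mech_list actually says):

# on the virtualization host: which mechanism does libvirt's SASL config expect?
grep -E '^[^#]*mech_list' /etc/sasl2/libvirt.conf

# on the client: install the Cyrus SASL plugin for that mechanism, e.g.
sudo dnf install cyrus-sasl-scram    # scram mechanisms on modern installs
sudo dnf install cyrus-sasl-md5      # digest-md5 on older installs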

Connect to MySQL with Microsoft Power BI Desktop over SSL

I have MySQL running on a CentOS server with SSL enabled, and it requires SSL in order to connect to the databases. I created the certificates and keys using OpenSSL, getting these files:
ca.pem
ca-key.pem
client-cert.pem
client-key.pem
server-cert.pem
server-key.pem
I set up MySQL with this:
ssl-ca=/etc/certs/ca.pem
ssl-cert=/etc/certs/server-cert.pem
ssl-key=/etc/certs/server-key.pem
bind-address=*
require_secure_transport=ON
I created a user that requires X509 on the MySQL server by using:
CREATE USER 'user'@'%' IDENTIFIED BY '<password>' REQUIRE X509;
Testing with the MySQL client console and MySQL Workbench, providing the client certs, works fine. It also works in a Java app that writes/reads the databases, by importing the certificates into its keystores/truststores.
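For comparison, a command-line check along these lines works (the hostname is a placeholder; the cert paths are the files created above):

mysql -h mysql.example.com -u user -p \
    --ssl-ca=ca.pem \
    --ssl-cert=client-cert.pem \
    --ssl-key=client-key.pem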
However, I cannot get Power BI Desktop to connect to the MySQL server. I imported the certificates into the Trusted Root Certification Authorities store and into a PKCS12 keystore and truststore (also used by the Java app). This image shows the certificate. It is in Spanish, but it says it also contains the key and that it is verified by the ca.pem.
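A PKCS12 bundle like that can be produced with something along these lines (the friendly name and output filename are assumptions):

openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \
    -certfile ca.pem -name "mysql-client" -out client-keystore.p12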
This is according to the documentation, but the documentation about this is very old and very limited, and some of the processes and/or tools are out of date.
These are the sources I could find:
https://github.com/Microsoft/PowerBI-visuals/blob/master/tools/CreateCertificate.md#generate-certificate-manually
https://github.com/Microsoft/PowerBI-visuals/blob/master/tools/CertificateAddWindows.md
https://powerbi.microsoft.com/es-es/blog/ssl-security-error-with-data-source/
However there is not much more info about how to properly connect (or I cannot find it).
The message I get on Power Bi is "We were unable to authenticate you with the credentials provided. Try again."
I must add that disabling SSL allows me to connect to the databases using Power BI without any issue; it is the SSL that doesn't work, as I don't know how to properly provide the certificates and I cannot find anything that describes the process.

Not able to login from admin to fiware-idm after docker installation

I am integrating WireCloud and fiware-idm. I installed both through Docker successfully. However, after installing fiware-idm, I am not able to log in as admin (username: admin@test.com, password: 1234).
Every time, it redirects to "ip:3000/auth/login". Do I have to make any other configuration in WireCloud or fiware-idm?
Also, even after entering wrong credentials, it redirects me to /auth/login and does not display any error message.
My WireCloud, fiware-idm and MySQL database are in different containers. Can this be the issue?
The IdM is expected to be deployed in production when used by WireCloud. That is, you should configure the IdM service using public domain names, HTTPS, and so on... Since you are creating a local installation, you will need some workarounds. That said, some of those requirements are not enforced by WireCloud, so it should be enough to ensure you use a domain name for accessing the IdM.
You can simulate having the IdM server configured with public domain names by adding the proper value to /etc/hosts (see this link if you are running Windows); the correct value depends on how you configured the IdM service. The idea is to ensure the domain used for accessing the IdM resolves to the correct IP address both inside the WireCloud container and from your local computer. We can provide more detailed steps if you give us more details about how you are launching the different containers.
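A minimal sketch of such an /etc/hosts entry (the IP address and domain name here are assumptions; use the ones you configured for the IdM):

# /etc/hosts, on your local machine and inside the WireCloud container
192.168.1.10    idm.example.org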

Google cloud user accounts & linux groups don't get added to new bitnami lamp instances?

So I went into Permissions -> User accounts, added a Linux group, then added a couple of accounts and gave each of them an SSH key. I then created a Bitnami LAMP instance; shouldn't that new instance have that group and those users, and shouldn't the users be able to access it with their SSH keys?
Its description reads:
Create Linux user accounts to give yourself and others access to your
VM instances. A user account lets you log in to all Linux instances in
your project and has its own username and home directory on each
instance
In order for your Bitnami LAMP instance to recognize user accounts, you need to enable them through a startup script, as mentioned in this link:
User accounts are recognized only on instances that have been enabled to use user accounts. To create an instance that recognizes user accounts, run the following startup script when you start an instance. The startup script is located at:
gs://gcua-beta/startup.sh
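A sketch of how that script might be attached when creating the instance with gcloud (the instance name and zone are assumptions; add the image flags for the Bitnami LAMP image you deploy):

gcloud compute instances create my-lamp-instance \
    --zone us-central1-a \
    --metadata startup-script-url=gs://gcua-beta/startup.sh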
Moreover, the user accounts feature is in beta and is only supported on the following operating systems, so you need to make sure your operating system is one of them:
Backported Debian 7
Debian 8
Ubuntu 14.04
Ubuntu 14.10
Ubuntu 15.04

SQL Server NETWORK SERVICE account permissions

My SQL Server Windows service is set to use the NETWORK SERVICE account.
The server is installed to C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL.
However, looking at the permissions on that folder, NETWORK SERVICE does not have any. The groups which are allowed access to that folder are...
CREATOR OWNER - who is this?
SYSTEM - sounds fine - so that Windows can access the folder I presume?
SQLServerMSSQLUser$Computer_Name$MSSQLSERVER - this is the interesting one - what is this?
Administrators
Users
If NETWORK SERVICE is a user with minimal permissions on the system and looks to the O/S as someone connecting from a network, how does it have permissions to access any files in the SQL Server install folder?
Thanks.
See Setting Up Windows Service Accounts in the SQL Server documentation:
SQL Server uses a security group to set resource ACLs rather than using the service account directly, so changing the service account can be done without having to repeat the resource ACL process. The security group can be a local security group, a domain security group or a service SID.
During SQL Server installation, SQL Server Setup creates a service group for each SQL Server component. These groups simplify granting the permissions that are required to run SQL Server services and other executables, and help secure SQL Server files.
Depending on the service configuration, the service account for a service or service SID is added as a member of the service group during install or upgrade.
That's what SQLServerMSSQLUser$Computer_Name$MSSQLSERVER is.
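If you want to see this on your own machine, a quick sketch (run from an elevated command prompt; MSSQLSERVER is the default instance's service name):

:: List who actually holds rights on the install folder
icacls "C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL"

:: Show the per-service SID that the ACLs and the setup group are built around
sc showsid MSSQLSERVER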
About NetworkService Account:
The NetworkService account is a predefined local account used by the service control manager.
...
A service that runs in the context of the NetworkService account presents the computer's credentials to remote servers.
NOT, as you put it:
looks to the O/S as someone connecting from a network