How to pass the authentication stage of SMB - Samba

I need to prepare a Samba server and a capture of SMB traffic between the Samba server and smbclient. Then I want to replay the traffic to communicate with the server, so I need to get past the auth stage.
First, I started Samba with the following config to disable authentication:
[global]
Map to guest = Bad User
ntlm auth = disabled
[sharedir]
path = /mount/
browsable = yes
read only = no
guest ok = yes
write list = all
Then, I used smbclient to connect to the server via smbclient //172.17.0.1/sharedir -N and captured the traffic. However, as the captured traffic shows, the auth stage was not passed. What is the reason, and how can I achieve my goal?
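For reference, such a capture can be made along these lines (a sketch; the docker0 interface and the output file name are assumptions based on the 172.17.0.1 address):
testparm -s                                  # sanity-check the smb.conf
tcpdump -i docker0 -w smb.pcap port 445 &    # record the SMB session
smbclient //172.17.0.1/sharedir -N           # anonymous connection, as above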

Related

How to configure interactive web terminal on docker+machine GitLab runners?

I've added the following configuration to the runner toml config:
[session_server]
listen_address = "[::]:8093" # listen on all available interfaces on port 8093
advertise_address = "runner-host-name.tld:8093"
session_timeout = 1800
The manager instance and workers are running in the same VPC on AWS. I have put the manager's private IP address in the advertise_address option, and this address and port are reachable from the worker machines. But when I click the Debug button in a job, the debug page opens with a black rectangle and nothing more; no shell appears in it. There are no errors or warnings related to session server connectivity in the job log. What am I doing wrong?
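For what it's worth, raw reachability of the session server can be double-checked from a worker with something like the following (the hostname is the placeholder from the config above):
nc -zv runner-host-name.tld 8093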

How to connect MySQLdb code to Google Cloud SQL?

I have a flask server which I want to deploy on Google Cloud Platform. The code uses MySQLdb library to connect with local MySQL instance in the following manner:
import MySQLdb
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/show_table', methods=['POST'])
def login():
    # connect to the local MySQL instance
    db = MySQLdb.connect("localhost", "root", "", "db_name")
    cursor = db.cursor()
    query = "select * from table_name;"
    cursor.execute(query)
    res = cursor.fetchall()
    db.close()
    # a bare tuple of rows is not a valid Flask response body
    return jsonify(res), 200
But instead of the local MySQL instance, I want to connect this code to Cloud SQL so that it reads data from the cloud. What changes should I make to this code? I have currently created a project in Google Cloud Platform and a Cloud SQL instance inside this project. I have also created the required tables inside this instance by following this tutorial.
You shouldn't have to change your code much; it just depends on how you're going to connect to the database. The Google documentation has step-by-step information on how to connect to Cloud SQL from an external application.
Since you're not using Java or Go, there are two options:
Use the Cloud SQL proxy
Whitelist the public IP address of your server on the Cloud SQL instance page
All the steps are in the documentation, but in short: if you use the proxy, you'll need to enable the Cloud SQL Admin API, install the proxy client on your machine and authenticate it. There are a few authentication options, but the recommended way is to create a credentials file for a service account in the console and pass that file as a parameter when you first start the proxy. Once the proxy is running, the documentation has examples of connecting over either TCP or UNIX sockets. With TCP the proxy appears as localhost, so you won't have to change your code; with UNIX sockets you use the instance connection name, which you'll find in your instance details on the GCP console. MySQLdb supports both.
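A minimal sketch of both proxy connection styles with MySQLdb (the instance connection name PROJECT:REGION:INSTANCE and the credentials are placeholders):
import MySQLdb

# proxy started with: ./cloud_sql_proxy -instances=PROJECT:REGION:INSTANCE=tcp:3306
db = MySQLdb.connect(host="127.0.0.1", port=3306,
                     user="root", passwd="PASSWORD", db="db_name")

# proxy started with: ./cloud_sql_proxy -dir=/cloudsql -instances=PROJECT:REGION:INSTANCE
db = MySQLdb.connect(unix_socket="/cloudsql/PROJECT:REGION:INSTANCE",
                     user="root", passwd="PASSWORD", db="db_name")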
With the second option you need to allow access to your Cloud SQL instance from a specific IP address range. Go to the connections tab on your Cloud SQL instance details page and add the address range (in CIDR notation) you want to connect from. Once it's whitelisted, you can use the public IP of your Cloud SQL instance, also shown in the instance details, in place of localhost.
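With that option, only the host changes (the public IP below is a documentation placeholder):
db = MySQLdb.connect(host="203.0.113.10", user="root",
                     passwd="PASSWORD", db="db_name")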

Multiple IPFS peers on the same machine

I am trying to set up multiple IPFS peers on the same Windows machine, in order to test file sharing and the pubsub service.
I have created a different .ipfs folder for each peer, that is .ipfs1, .ipfs2.
In each config file I have replaced the ports 4001, 5001 and 8080 so they don't overlap (see the sketch after these commands).
So when I want to run all the daemons at the same time, I open 2 console windows and input in each one:
set IPFS_PATH=C:\Users\MyName\.ipfsX (X = the peer number)
ipfs daemon --enable-pubsub-experiment
When I want to execute commands on a specific peer, I open a new console window and type:
set IPFS_PATH=C:\Users\MyName\.ipfsX (X = the peer number)
cmd
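For reference, the port changes can also be made with the ipfs config command instead of editing the files by hand (peer 2 shown; the exact port values are assumptions following the numbering scheme above):
set IPFS_PATH=C:\Users\MyName\.ipfs2
ipfs config Addresses.API /ip4/127.0.0.1/tcp/5002
ipfs config Addresses.Gateway /ip4/127.0.0.1/tcp/8081
ipfs config --json Addresses.Swarm "[\"/ip4/0.0.0.0/tcp/4002\"]"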
So let's get to the problem. I want to run 2 peers, subscribe both to the same pubsub channel and exchange messages.
I have 6 open console windows, 3 for each peer:
1 for the running daemon
1 for running ipfs pubsub sub and listening for messages
1 for inputting commands
The issue is that when I send a pubsub message, only the sending peer receives it.
Only Peer1 receives messages published by Peer1, and so on.
Is there something wrong with my multi-peer setup? Any help would be appreciated.
A better approach is to use Docker or VMs; the setup you described is very likely to cause issues. Try running ipfs swarm peers to see whether your nodes are connected to any peers.
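If the two local nodes don't show up in each other's swarm peers output, you can try connecting them explicitly. A sketch, assuming Peer2's swarm port was moved to 4002 as described above:
set IPFS_PATH=C:\Users\MyName\.ipfs2
ipfs id (note Peer2's peer ID)
set IPFS_PATH=C:\Users\MyName\.ipfs1
ipfs swarm connect /ip4/127.0.0.1/tcp/4002/ipfs/<Peer2ID>
ipfs swarm peers (Peer2's address should now be listed)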

Issue with Roundcube on Postfix, Dovecot, MySQL

I am seasoned with Ubuntu, Apache and MySQL but new to the email server world, and am looking for some troubleshooting tips for my server configuration.
I am running Ubuntu 14.04 with Postfix, Dovecot and MySQL as instructed in this tutorial: https://www.digitalocean.com/community/tutorials/how-to-configure-a-mail-server-using-postfix-dovecot-mysql-and-spamassassin with the exception of spamassassin.
I then installed postfixadmin to provide a graphical means of configuring my virtual postfix users/domains.
Thereafter I installed Roundcube as instructed in this tutorial: http://www.unixmen.com/install-configure-roundcube-webmail-ubuntu/ with the exception of the version (I am using 1.1.4). Everything checks out; I can telnet into my mail server with accounts created using postfixadmin and can verify the mailbox(es) exist. The server receives email from external domains and can send as well. However, when I attempt to log in to a verified user account via Roundcube, it fails. I have tried and tried again to find what is missing and have hit a wall.
Any suggestions would be greatly appreciated.
Best Regards,
-Joe
To debug, I would double-check that RC is configured correctly to communicate with Dovecot: that is the piece of software that handles the authentication. To verify the settings, you can switch on debugging in both RC and Dovecot.
Check the RC configuration files to make sure it is set up to connect to the right server and port. These settings live either in 'config.inc.php' or in 'defaults.inc.php' under the 'config' directory of RC. Look for the IMAP section and the following strings:
$config['default_host'] = 'tls://localhost';
$config['default_port'] = 143;
$config['imap_auth_type'] = null;
Pay special attention to the 'tls://' ('ssl://') prefixes -- these control the use of encryption during negotiation with the IMAP server: tls issues a STARTTLS command while connecting on the standard port, whereas ssl expects the connection to be encrypted from the very start and is therefore generally used with a dedicated 'encrypted' port. For debugging purposes you might want to disable encryption altogether; I would use the same hostname and port as were used for telnetting.
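For example, a debugging-only setup that drops encryption and mirrors the telnet test might look like this (the hostname is an assumption; adjust to your own):
// debugging only: plain IMAP, no STARTTLS, same target as the telnet test
$config['default_host'] = 'localhost';
$config['default_port'] = 143;
$config['imap_auth_type'] = null;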
If these settings seem to be right, one can proceed to debugging the IMAP connection from RC to Dovecot. To enable debugging, edit defaults.inc.php once again:
$config['debug_level'] = 1;
$config['log_driver'] = 'syslog';
$config['syslog_id'] = 'roundcube';
$config['syslog_facility'] = LOG_MAIL;
$config['log_logins'] = true;
$config['imap_debug'] = true;
This will direct the debug information of RC's IMAP negotiation with Dovecot to /var/log/mail.log, where you will most likely be able to identify the problem.
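With those settings in place, you can follow the IMAP dialogue live while attempting a login (log path and syslog_id as configured above):
tail -f /var/log/mail.log | grep -i roundcube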

Using Google Compute API automated on a server

I'm using the Google API client library for Python. My code is running on an Ubuntu 14.04 LTS server.
I have a working Google Compute project, where I created and downloaded an OAuth 2.0 token to my server.
I'm trying to write a script that does the following:
Automatically (with no user interaction) authenticate to Google Compute Engine.
Create a new VM and then perform more actions...
My basic problem is the OAuth 2.0 authentication. It requires user approval in a JavaScript-capable browser, and I want to do it automatically, on my server.
Using my code on my desktop works. A browser page pops up requiring my approval. On my server, I get the following message:
we have detected that your javascript is disabled in your browser
The code segment I use for authentication is:
import os
from oauth2client import client, file, tools

# authenticate using the OAuth token
client_secret = os.path.join(
    os.path.dirname(__file__),
    self._oauth_token_path)
# set up a Flow object for the authentication
flow = client.flow_from_clientsecrets(
    client_secret,
    scope=scope,
    message=tools.message_if_missing(client_secret))
# open credential storage path
credential_storage = file.Storage(self._credential_storage_path)
credentials = credential_storage.get()
# get credentials if necessary
if credentials is None or credentials.invalid:
    credentials = tools.run_flow(flow, credential_storage, flags)
I read about service account access as a replacement for the regular OAuth 2.0 authentication. Does anyone know if that's the best way to go? Any thoughts on how to do it better?
OAuth 2.0 requires user approval and is not the method to go for if you want to run your code/scripts automatically.
Service accounts are more suitable for this and are supported by the API (https://cloud.google.com/compute/docs/authentication#tools)
You create a service account + key in the developer console and use both to authenticate your application.
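A sketch of that flow with the Python client libraries (the key-file name, project and zone are placeholders):
from oauth2client.service_account import ServiceAccountCredentials
from googleapiclient import discovery

# JSON key downloaded when creating the service account in the console
credentials = ServiceAccountCredentials.from_json_keyfile_name(
    'service-account-key.json',
    scopes=['https://www.googleapis.com/auth/compute'])

compute = discovery.build('compute', 'v1', credentials=credentials)

# runs unattended -- no browser or user approval involved
result = compute.instances().list(
    project='my-project', zone='us-central1-a').execute()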