Asterisk sip.conf in MySQL database

I'm able to store a phone in a database for realtime usage, so this configuration (from /etc/asterisk/sip.conf):
[phone]
type=friend
username=phone
secret=12345
host=dynamic
disallow=all
allow=g729
allow=alaw
context=somecontext
nat=no
insecure=port,invite
is now inside a MySQL database.
Now, I want to include a SIP trunk using the register directive, but I don't know how to do that.
How can I include register => <username>:<password>@<provider> inside the database as well?

You have two options.
1) Static realtime: put the whole sip.conf into MySQL line by line.
https://www.voip-info.org/asterisk-realtime-static
In this mode, when you issue an Asterisk reload, it simply reads the rows from the database line by line and interprets them as if they were the text file.
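For the static realtime option, the register line just becomes one more row in the config table. Here is a minimal sketch, assuming the ast_config table layout described on the voip-info page and an extconfig.conf mapping such as sip.conf => mysql,general,ast_config (the connection name, table name and trunk values are placeholders for your own setup):
-- one row per sip.conf line; the register directive belongs to the [general] category
INSERT INTO ast_config (cat_metric, var_metric, commented, filename, category, var_name, var_val)
VALUES (0, 1, 0, 'sip.conf', 'general', 'register', 'username:password@provider');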
2) Dynamic realtime.
In this mode, Asterisk checks the database only when it receives a request that needs authentication, and only for matching peers.
https://www.voip-info.org/asterisk-realtime-sip/
Use the regserver parameter to store your registration server.
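With dynamic realtime, the sippeers family is mapped to your table in extconfig.conf. A minimal sketch follows; the connection name "general" and the table name "sippeers" are assumptions that depend on how your res_config driver (e.g. res_config_mysql) is configured:
; in /etc/asterisk/extconfig.conf
[settings]
sippeers => mysql,general,sippeers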

The register directive should be a static entry in the sip.conf [general] section, so while you could do this with static realtime, you may then have problems loading dynamic realtime users.
Your best option may be to use the #exec directive in sip.conf. This allows you to run a script that reads the register line from the database and prints it out for Asterisk to include.
To do this, you will need to set execincludes = yes in asterisk.conf and then add a line to the sip.conf [general] section that runs your script, like:
#exec /etc/asterisk/scripts/your_script_file
Here is a nice example from Leif Madsen that uses #exec to set the externip= parameter via a PHP script:
https://leifmadsen.wordpress.com/2011/02/27/using-exec-to-set-externaddr-in-sip-conf/
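Along the same lines, here is a minimal sketch of a script (saved as /etc/asterisk/scripts/your_script_file and made executable) that builds the register line from MySQL. The sip_trunks table, its columns and the database credentials are placeholders; whatever the script prints to stdout is included into sip.conf:
#!/bin/sh
# Print one "register =>" line per trunk stored in MySQL.
mysql -N -B -u asterisk -p'db-password' asterisk \
  -e "SELECT CONCAT('register => ', username, ':', secret, '@', provider) FROM sip_trunks"
Keep the database credentials out of world-readable files, since the script will contain them in clear text.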


How to access an internal database in Bolt?

I have a custom contenttype called posts, which has about ten records. Each of them is stored in the bolt_posts table, alongside the other Bolt-specific tables. I'd like to access the post with id = 1 in one of my custom PHP files. The problem is that the database for my application is separate from the database which holds the internal Bolt tables. Is there a native Bolt API that I could use to query those tables? I've found this https://docs.bolt.cm/3.1/extensions/storage/queries, but I am not sure what directives I need to put in my PHP code to be able to run such queries. Thanks in advance!
Assuming you're using Composer to load Bolt into an existing application, this can be achieved by constructing an instance of a Bolt app.
All you need is a configuration object that takes the root folder of the Bolt site, and then to initialize the app. For instance, assuming Bolt is accessible via the autoloader:
$config = new Bolt\Configuration\Composer('/path/to/bolt/root/');
$app = new Bolt\Application(['resources' => $config]);
$app->initialize();
That gets you an instance of a Bolt app, and then you can follow the instructions in that documentation to query the Bolt database, e.g.:
$record = $app['query']->getContent('pages/1');
If you want to access the content from within an external application, you need to connect to the Bolt database manually and fetch the data. Bolt's own storage layer can only be used when you work directly in Bolt (e.g. within an extension).
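If you go that route, here is a minimal sketch using plain PDO; the connection details are placeholders, and bolt_posts assumes the default table prefix:
// connect to the database that holds Bolt's tables and fetch the post with id = 1
$pdo = new PDO('mysql:host=localhost;dbname=bolt', 'bolt_user', 'secret');
$stmt = $pdo->prepare('SELECT * FROM bolt_posts WHERE id = :id');
$stmt->execute([':id' => 1]);
$post = $stmt->fetch(PDO::FETCH_ASSOC);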

How to pass directives to snappy_ec2 created clusters

We have a need to set some directives in the snappy config files for the various components (servers, locators, etc.).
The snappy_ec2 scripts do a good job of creating all of the configs and keeping them in sync across the cluster, but I need to find a serviceable method to add directives to the auto-generated scripts.
What is the preferred method using this script?
Example: Add the following to the 'servers' file:
-gemfirexd.disable-getall-local-index=true
Or perhaps I should add these strings to an environments file such as
snappy-env.sh
TIA
-doug
Have you tried adding the directives directly in the servers (or locators or leads) file and placing this file under (SNAPPY_DIR)/ec2/deploy/home/ec2-user/snappydata/? The script reads the conf files under this directory when launching the cluster.
You'll need to specify it for each server you want to launch, with the name of the server, as shown below. See the 'Specifying properties' section in the README, if you have not already done so. e.g.
{{SERVER_0}} -heap-size=4096m -locators={{LOCATOR_0}}:9999,{{LOCATOR_1}}:9888 -J-Dgemfirexd.disable-getall-local-index=true
{{SERVER_1}} -heap-size=4096m -locators={{LOCATOR_0}}:9999,{{LOCATOR_1}}:9888 -J-Dgemfirexd.disable-getall-local-index=true
If you want it to be applied to all the servers, simply put it in snappy-env.sh as you mentioned (as SERVER_STARTUP_OPTIONS) and place the file under the directory mentioned above.
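A minimal sketch of what that snappy-env.sh could look like (the quoting and the exact option string are assumptions; adjust them to the properties you need):
# placed under (SNAPPY_DIR)/ec2/deploy/home/ec2-user/snappydata/
SERVER_STARTUP_OPTIONS="-J-Dgemfirexd.disable-getall-local-index=true"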
We could have read the conf files directly from (SNAPPY_DIR)/conf/ instead of making users copy them to the above location, but we may release the ec2 scripts as a separate package in the future, so that users do not have to download the entire distribution.

Image synchronisation in a single Region

I would like to know something about the fiware-glancesync component. I would like to synchronise only one image; that is, I want to synchronise a single image in a region without modifying the current configuration file. How can I define new configuration parameters (if that is possible) to do this with GlanceSync?

The algorithm used to select the images can be defined by the user. The easiest
and best way to synchronise only one image, or a set of images, is to modify the glancesync.conf configuration file inside the ./conf directory. I recommend creating a new section [test] so that you do not modify the current [master] section. Just write the following lines:
[test]
metadata_condition = image.name == 'GIS_GE'
credential= admin,<your secret>,http://130.206.112.3:5000/v2.0,admin
Keep in mind that '130.206.112.3' is the IP of the Keystone service inside the FIWARE Lab, and that the first and second 'admin' are the OS_USERNAME and OS_TENANT_NAME. Last but not least, 'your secret' is the password in base64 format.
Then, just execute the command:
./sync.py test:<name of the node, e.g. Lannion2>
See the documentation in GlanceSync - Glance Synchronization Component for more details about image synchronisation.
If you want more information about the configuration of GlanceSync, take a look at GlanceSync Configuration.

Software error while executing CGI script

I have a CGI script for uploads, which is as follows:
#!/usr/bin/perl
use CGI;
use CGI::Carp qw(fatalsToBrowser);
my $cgi = new CGI;
my $file = $cgi->param('file');
$file=~m/^.*(\\|\/)(.*)/; # strip the remote path and keep the filename
my $name = $2;
open(LOCAL, ">/home/Desktop/$name") or die $!;
while(<$file>) {
$data .= $_;
}
print $cgi->header();
print "$file has been successfully uploaded... thank you.\n";
print $data;
The HTML file is as follows
<html>
<head>
<title>Test</title>
</head>
<body>
<form enctype="multipart/form-data" action="upload.cgi" method="post">
<input type="hidden" name="MAX_FILE_SIZE" value="30000" />
Send this file: <input name="userfile" type="file" />
<input type="submit" value="Send File" />
</form>
</body>
</html>
I am getting a weird error now:
Software error:
Is a directory at htdocs/upload.cgi line 9.
For help, please send mail to this site's webmaster, giving this error message and the time and date of the error.
TL;DR
Stop. Shelve your script right now. It is a gaping security hole just waiting to be exploited. Read the following resources:
perlsec
the CERT Perl Secure Coding Standard
(particularly the section on Input Validation and Data Sanitization)
the OWASP page on Unrestricted File Upload
the InfoSec page on Complete File Upload Vulnerabilities
the CWE page on Unrestricted Upload of File with Dangerous Type
the SANS recommendations for 8 Basic Rules to Implement Secure File Uploads.
When you have read--and understood--all of them, stop and think if you really need to let users upload files onto your server. Think long and hard. Can you really account for all of the listed vulnerabilities? If you still feel like you need to do this, consider enlisting the help of a security expert. Follow the guidelines laid out in the above resources carefully and understand that a mistake in your design could compromise your entire site.
I understand that this is just a test script, not a production application (at least, I really hope that's the case), but even so, what you are doing (and particularly how you are doing it) is a very, very bad idea. Here are a select few of
the reasons why, from OWASP's page on Unrestricted File
Upload:
The website can be defaced.
The web server can be compromised by uploading and executing a web shell, which can run commands, browse system files, browse local resources, attack other servers, exploit local vulnerabilities, and so on.
This vulnerability can make the website vulnerable to some other types of attacks such as XSS.
Local file inclusion vulnerabilities can be exploited by uploading a malicious file into the server.
More from OWASP:
Uploaded files represent a significant risk to applications. The first step in
many attacks is to get some code to the system to be attacked. Then the attack
only needs to find a way to get the code executed. Using a file upload helps
the attacker accomplish the first step.
The consequences of unrestricted file upload can vary, including complete
system takeover, an overloaded file system, forwarding attacks to backend
systems, and simple defacement.
Pretty scary stuff, huh?
The problems
Your code
Let's start by looking at some of the problems with the code you posted.
No strict, no warnings
Start putting use strict; use warnings; at the top of every Perl script you
ever write. I recently had the pleasure of fixing a CGI script that contained
a snippet something like this:
my ($match) = grep { /$usrname/ } @users;
This code was used to check that the username entered in an HTML form matched a
list of valid users. One problem: the variable $usrname was
misspelled (it should have been $username with an 'e'). Since strict
checking was off, Perl happily inserted the value of the (undeclared) global
variable $usrname, or undef. That turned the innocent-looking snippet into this monstrosity:
my ($match) = grep { // } @users;
which matches everything in the valid users list and returns the first
match. You could enter anything you wanted into the username field in the form
and the script would think you were a valid user. Since warnings were also off,
this was never caught during the development process. When you turn warnings on,
the script will still run and return a user, but you also get something like
this:
Name "main::usrname" used only once: possible typo at -e line 1.
Use of uninitialized value $usrname in regexp compilation at -e line 1.
When you also turn on strict, the script fails to compile and won't even run at
all. There are other problems with this snippet (for example, the string 'a' will match the username 'janedoe'), but strict and warnings at least alerted us to one major issue. I cannot stress this enough: always, always use strict; use
warnings;
No taint mode
The first rule of web development is,
"Always sanitize user input." Repeat after me: Always sanitize user input. One more time: Always sanitize user input.
In other words, never
blindly trust user input without validating it first. Users (even those that are not malicious) are very good at entering creative values into form
fields that can break your application (or worse). If you don't restrict their creativity,
there is no limit to the damage a malicious user can do to your site (refer to the perennial #1
vulnerability on the OWASP Top 10,
injection).
Perl's taint mode can help with this. Taint mode forces you
to check all user input before using it in certain potentially dangerous operations like the
system() function. Taint mode is like the safety on a gun: it can prevent a lot of painful
accidents (although if you really want to shoot yourself in the foot, you can
always turn off the safety, like when you untaint a variable without actually removing dangerous characters).
Turn on taint mode in every CGI script you ever write. You can enable it by passing the -T flag, like this:
#!/usr/bin/perl -T
Once taint mode is enabled, your script will throw a fatal error if you try to
use tainted data in dangerous situations. Here's an example of such a dangerous situation that I found in a random script on the internet:
open(LOCAL, ">/home/Desktop/$name") or die $!;
Ok, I lied, that snippet isn't from a random script, it's from your code. In isolation, this snippet is just begging to be hit with a directory traversal attack, where a malicious user enters a relative path in order to access a file that they shouldn't have access to.
Fortunately, you've done something right here: you ensured that $name will contain no directory separators by using a regex*. This is exactly what taint mode would require you to do. The benefit of taint mode is that if you forget to sanitize your input, you will be alerted immediately with an error like this:
Insecure dependency in open while running with -T switch at foo.cgi line 5
Like strict, taint mode forces you to address problems in your code immediately by causing the program to fail, instead of allowing it to quietly limp along.
* You did something right, but you also did some things wrong (a sketch addressing some of these follows this list):
Your program will die if the user passes in only a filename with no directory separators, e.g. foo
You don't remove special characters that could be interpreted by a shell, like |
You never sanitize the variable $file and yet you try to use it to read a file later in your code
You don't check if the file you're writing to already exists (see "No check for file existence" below)
You allow the user to choose the name of the file that will be stored on your server, which gives them far more control than you should be comfortable with (see "Allowing the user to set the file name" below)
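To address some of the items above, here is a minimal sketch of a stricter, allowlist-based check; the variable $upload_name and the allowed character set are illustrative, so adjust them to your needs:
# Keep only the last path component, then accept a conservative allowlist of
# characters; reject anything else instead of silently "fixing" it.
my ($basename) = $upload_name =~ m{([^/\\]+)\z};
die "No file name supplied\n" unless defined $basename;
die "File name contains disallowed characters\n"
    unless $basename =~ /\A([A-Za-z0-9._-]{1,64})\z/;
my $safe_name = $1;    # the captured group is untainted under -T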
CGI::Carp fatalsToBrowser
I'll give you the benefit of the doubt on this one since you're still testing your script, but just in case you weren't aware and since I'm already talking about CGI security issues, never enable CGI::Carp's fatalsToBrowser option in a production environment. It can reveal intimate details about the inner workings of your script to attackers.
Two-argument open() and global filehandles
Two-argument open(), e.g.
open FH, ">$file"
has a host of security risks associated with it when users are allowed to specify the file path. Your script mitigates many of these by using a hard-coded directory prefix, but that in no way diminishes the fact that using two-argument open can be very dangerous. In general, you should use the three-argument form:
open my $fh, ">", $file
(which is still plenty dangerous if you allow the user to specify the file name; see "Allowing the user to set the file name" below).
Also note that instead of the global filehandle FH I switched to a lexical filehandle $fh. See CERT's page Do not use bareword filehandles for some reasons why.
No check for file existence
You don't check whether a file already exists at /home/Desktop/$name when you open it for writing. If the file already exists, you will truncate it (erase its contents) as soon as the open() call succeeds, even if you never write anything to the file. Users (malicious and otherwise) are likely to clobber each other's files, which doesn't make for a very happy user base.
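One way to avoid clobbering is to ask the operating system to fail if the target already exists, for example with sysopen and O_EXCL. A minimal sketch, where the directory and $safe_name are illustrative:
use Fcntl qw(O_WRONLY O_CREAT O_EXCL);
# O_EXCL makes the call fail with EEXIST instead of truncating an existing file.
sysopen(my $out, "/home/Desktop/$safe_name", O_WRONLY | O_CREAT | O_EXCL, 0600)
    or die "Refusing to overwrite an existing file: $!";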
No limit on file size
"But wait," you say, "I set MAX_FILE_SIZE in my HTML form!" Understand that this is merely a suggestion to the browser; attackers can easily edit HTTP requests to remove this condition. Never rely on hidden HTML fields for security. Hidden fields are plainly visible in the HTML source of your page and in the raw HTTP requests. You must limit the maximum request size on the server side to prevent users from loading massive files to your server and to help alleviate one type of denial of service attack. Set the $CGI::POST_MAX variable at the beginning of your CGI script like this:
$CGI::POST_MAX=1024 * 30; # 30KB
Or even better, find CGI.pm on your system and change the value of $POST_MAX to set it globally for all scripts that use the CGI module. That way you don't have to remember to set the variable at the beginning of every CGI script you write.
CGI doesn't match the HTML form
The POST variable you use for the file path in your HTML form, userfile, does not match the variable you look for in your CGI script, file. This is why your script is failing with the error
Is a directory
The value of
$cgi->param('file')
is undef so your script tries to open the path
/home/Desktop/
as a regular file.
Obsolete method for handling upload
You are using the old (and obsolete) method of handling uploads with CGI.pm where param() is used to get both the file name and a lightweight filehandle. This will not work with strict and is insecure. The upload() method was added in v2.47 (all the way back in 1999!) as a preferred replacement. Use it like this (straight out of the documentation for CGI.pm):
$lightweight_fh = $q->upload('field_name');
# undef may be returned if it's not a valid file handle
if (defined $lightweight_fh) {
# Upgrade the handle to one compatible with IO::Handle:
my $io_handle = $lightweight_fh->handle;
open (OUTFILE,'>>','/usr/local/web/users/feedback');
while ($bytesread = $io_handle->read($buffer,1024)) {
print OUTFILE $buffer;
}
}
where field_name is the name of the POST variable that holds the file name (in your case, userfile). Notice that the sample code does not set the output filename based on user input, which leads to my next point.
Allowing the user to set the file name
Never allow users to choose the file name that will be used on your server. If an attacker can upload a malicious file to a known location, it becomes significantly easier for them to exploit. Instead, generate a new, unique (to prevent clobbering), difficult-to-guess file name, preferably in a path outside your web root so users cannot access them directly with a URL.
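For example, here is a minimal sketch using File::Temp to let the server pick a unique name in a directory outside the web root; the directory and suffix are illustrative:
use File::Temp qw(tempfile);
# The server chooses the name; the user's file name is never used on disk.
my ($out_fh, $stored_path) = tempfile(
    'upload_XXXXXXXX',
    DIR    => '/var/uploads',    # a directory outside the web root
    SUFFIX => '.dat',
);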
Other issues
You haven't even begun to address the following issues.
Authentication
Who is allowed to upload files using your web app? How will you ensure that only authorized users are uploading files?
Access control
Are users allowed to see the files uploaded by other users? Depending on the file content, there could be major privacy issues at stake.
Number and rate of uploads
How many files is one user allowed to upload? How many files is a user allowed to upload in a fixed period of time? If you don't restrict these, one user could easily eat up all of your server resources very quickly, even if you enforce a maximum file size.
Dangerous file types
How will you check that users are not uploading dangerous content (for example, executable PHP code) to your server? Simply checking the file extension or content type header is not enough; attackers have found some very creative methods for circumventing such checks.
"But, but, I'm only running this on my corporate intranet..."
You may be tempted to disregard these security issues if your script is not accessible from the internet. However, you still need to consider
In-office pranksters
Disgruntled coworkers
Collaborators and outside contractors who either need access to your app or who shouldn't have access
Managers who love your app so much that they decide to open it up to users on the internet without your knowledge, possibly after you've transferred to another group or left the company
"What should I do?"
Scrap your existing code. Read the resources I listed in the first paragraph, carefully. Here they are again:
perlsec
the CERT Perl Secure Coding Standard (particularly the section on Input Validation and Data
Sanitization)
OWASP's Unrestricted File Upload
InfoSec's Complete File Upload Vulnerabilities
CWE's Unrestricted Upload of File with Dangerous Type
SANS recommendations for 8 Basic Rules to Implement Secure File Uploads
Consider carefully if you really need to do this. If you just need to give users a place to store files, consider using (S)FTP instead. This would certainly not eliminate all of the security risks, but it would eliminate a big one: your custom CGI code.
If after careful consideration you still think this is necessary, work through some recent Perl tutorials to make sure you can use and understand modern Perl programming conventions. Instead of CGI.pm, use a framework like Catalyst, Dancer, or Mojolicious, all of which have plugins that can handle tricky areas like user authentication and sessions so you don't have to re-invent the wheel (poorly).
Follow all of the security guidelines listed in the above resources and consider enlisting the help of an expert in web security. Tread carefully: a single mistake in your code could allow an attacker to compromise your entire site and possibly even other machines on your network. Depending on what country your company and your users are in, this could even have legal ramifications.
</soapbox>
A few suggestions that might get you closer to a solution.
use strict and use warnings (always use these).
CGI->new() instead of new CGI (not essential, but a good habit to get into).
Your file upload form input is called "userfile", but in your CGI code you call it "file". That inconsistency needs to be fixed.
You get the filename with $cgi->param('file'); that gets you the filename correctly (well, once you've fixed the error in my previous point), but you later try to treat that filename as a file handle (while (<$file>)). That's not going to work.
You should probably read the documentation about how to process a file upload field using CGI.pm.
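Putting those suggestions together, here is a minimal, test-only sketch; the output path is illustrative, and none of the security concerns raised in the previous answer are addressed here:
#!/usr/bin/perl -T
use strict;
use warnings;
use CGI;

$CGI::POST_MAX = 1024 * 30;            # reject requests larger than ~30 KB
my $cgi = CGI->new();

my $fh = $cgi->upload('userfile');     # must match the form field name
die "No valid upload received\n" unless defined $fh;

my $io = $fh->handle;                  # IO::Handle-compatible handle
open my $out, '>', '/tmp/uploaded.dat' # server-chosen name, not the user's
    or die "Cannot open output file: $!";
binmode $out;
while (my $bytes = $io->read(my $buffer, 1024)) {
    print {$out} $buffer;
}
close $out or die "Cannot close output file: $!";

print $cgi->header('text/plain');
print "Upload stored.\n";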

MS Access - Open database form from URL

I'm trying to open a form from a URL. This MS Access database will be hosted on a shared folder on a network, and the customer has asked me if it's possible to open a database form (I'll have to pass an ID).
If this were in a web environment I would do this without any problem, but honestly in MS Access I have no idea how to do this.
Can someone help me?
Have a look at Register protocol and Registering an Application to a URL Protocol. They have an example registry file showing how to register a protocol:
REGEDIT4
[HKEY_CLASSES_ROOT\foo]
#="URL:foo Protocol"
"URL Protocol"=""
[HKEY_CLASSES_ROOT\foo\shell]
[HKEY_CLASSES_ROOT\foo\shell\open]
[HKEY_CLASSES_ROOT\foo\shell\open\command]
#="\"C:\\Program Files\\Application\\program.exe\" \"%1\""
You can change the last line to something like:
#="\"C:\\Program Files\\Office\\access.exe\" \"C:\\path\\to\\your\\db.mdb\" /cmd \"%1\""
If your URL is foo:241245, the following command is called:
"C:\Program Files\Office\access.exe" "C:\path\to\your\db.mdb" /cmd "241245"
In Access, the command-line arguments are returned by the Command function.
In the Immediate window:
?Command
241245
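To act on that value, you could read Command in a routine that runs at startup (for example from an AutoExec macro) and open the form filtered on the ID. A minimal sketch, where the form name frmCustomer and the ID field name are assumptions:
' Called from an AutoExec macro when the database is opened via the protocol handler
Public Function OpenFormFromUrl()
    Dim strId As String
    strId = Trim(Command)    ' the value passed after /cmd, e.g. "241245"
    If Len(strId) > 0 And IsNumeric(strId) Then
        DoCmd.OpenForm "frmCustomer", acNormal, , "ID = " & CLng(strId)
    End If
End Function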
The database can be opened from a URL like any other file:
file://server/share/path/database.mdb
This won't work if the database has user-level security on it, though. I've only ever done that by using a Windows shortcut.
If you're not using user-level security and the URL works, you can set the desired form to open automatically on load by going to the Access Options screen, choosing the Current Database tab, and then selecting the desired form from the Display Form drop-down list.
Oops - I just noticed that you said you'd need to pass an ID. I don't know if that's possible using a URL.
Open your Access database from the network location (i.e., with a UNC path, not from a drive letter, or locally).
Navigate so you can see the form listed in your database.
Drag the form to your desktop. A shortcut directly to the form will be created there.
I don't think this is a good idea, though; it's a poor substitute for a proper user interface in your Access application. Additionally, your description of the problem sounds like you intend to have multiple people opening the same database file. This is a really bad practice -- best practice is for the database to be split (back end with data tables only on the server, and an individual copy of the front end with forms/reports/etc. on each user's workstation), and more than one user should never open the same front end at the same time.