I have an app that writes a set of GPS strings to a text file like this:
[{"date":"02/13/2017 19:26:00","time":1486974360428,"longitude":151.209900,"latitude":-33.865143}{"date":"02/13/2017 19:26:13","time":1486974373496,"longitude":151.209900,"latitude":-33.865143}{"date":"02/13/2017 19:26:23","time":1486974383539,"longitude":151.209900,"latitude":-33.865143}{"date":"02/13/2017 19:26:33","time":1486974393449,"longitude":151.209900,"latitude":-33.865143}{"date":"02/13/2017 19:26:43","time":1486974403423,"longitude":151.209900,"latitude":-33.865143}{"date":"02/13/2017 19:26:53","time":1486974413483,"longitude":151.209900,"latitude":-33.865143}]
The file always starts with [ and ends with ].
This file gets uploaded to an Ubuntu server at
'filepath'/uploads/gps/'device ID'/'year-month-day'/'UTC download time'.txt
for example
/uploads/gps/12/2017-02-12/1486940878.txt
The text files get created when the file gets uploaded to the server, so there are multiple files written per day.
I would like a method to write the values to a MySQL database with the headings DEVICE (obtained from the filepath), DATE, TIME, LONGITUDE, LATITUDE.
Initially, just a command I can run on the server would be preferable, which I can eventually run from a PHP command on an admin panel.
Where do I start?
Instead of uploading a file, you could easily submit the text to a PHP program on the server. It would use json_decode() to convert the text to an array and then save each record to a table. The device ID would be one of the parameters to the script.
Using this type of approach would eliminate a lot of issues such as not importing a file twice, renaming/moving the files after import, finding the file(s), etc.
It would also mean your data is up to date every time the data is sent.
A script like that would be pretty trivial to write, but it should have some type of security built in to prevent data from being sent by an unauthorized entity.
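To make that concrete, here is a minimal sketch of what such a receiving script could look like. Everything in it is an assumption for illustration: the gps_receive.php name, the X-Auth-Token shared-secret check, and the gps_log table layout; adapt all of it to your setup.

<?php
// gps_receive.php -- hypothetical endpoint the device would POST to.
// Expects ?device=12 in the URL plus the raw JSON array in the request body.

$secret = 'change-me'; // shared secret; reject anything that doesn't send it
if (!isset($_SERVER['HTTP_X_AUTH_TOKEN']) || $_SERVER['HTTP_X_AUTH_TOKEN'] !== $secret) {
    http_response_code(403);
    exit('Forbidden');
}

$device = isset($_GET['device']) ? (int)$_GET['device'] : 0;
$json   = file_get_contents('php://input');
$json   = str_replace('}{', '},{', $json); // same missing-comma fix the files need
$rows   = json_decode($json);
if (!is_array($rows)) {
    http_response_code(400);
    exit('Bad JSON');
}

$db = mysqli_connect('localhost', 'user', 'pass', 'dbname');
$stmt = mysqli_prepare($db,
    'INSERT INTO gps_log (device, date, time, longitude, latitude) VALUES (?,?,?,?,?)');
foreach ($rows as $r) {
    $date = $r->date; $time = $r->time; $lon = $r->longitude; $lat = $r->latitude;
    mysqli_stmt_bind_param($stmt, 'issdd', $device, $date, $time, $lon, $lat);
    mysqli_stmt_execute($stmt);
}
mysqli_stmt_close($stmt);
mysqli_close($db);
echo 'OK';

The device would then POST each batch of readings to gps_receive.php?device=12 instead of uploading a file.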
Here's some sample code that will process the files and store them to the DB. I've removed certain info (user ID, password, database name) that you will need to edit. It's a little longer than I guessed, but still pretty short. If you need more info, PM me.
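(For reference, the script assumes a gps_log table roughly along these lines. This schema is my sketch rather than part of the original answer, so adjust the column types to taste.)

CREATE TABLE gps_log (
    id        INT UNSIGNED NOT NULL AUTO_INCREMENT,
    device    VARCHAR(16)   NOT NULL,  -- device ID taken from the folder name
    date      VARCHAR(32)   NOT NULL,  -- "02/13/2017 19:26:00" as sent by the app
    time      BIGINT        NOT NULL,  -- millisecond timestamp
    longitude DECIMAL(10,6) NOT NULL,
    latitude  DECIMAL(10,6) NOT NULL,
    PRIMARY KEY (id)
);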
<?php
/* ===============================================================
   Locate and parse GPS files, then store them in a MySQL DB.
   Presumes a folder structure of gps/device_id/YYYY-MM-DD.
   After a file is processed and stored in the DB table, the
   file is renamed with a leading "_" so it will be ignored later.
   =============================================================== */

$DS = '/'; // Directory separator character. Use '/' for Linux, '\' for Windows.

// Path to the folder containing the device folders.
$base_folder = "./gps";

// Today's date, formatted like the folders under the devices. If the "date"
// parameter has a value, use it instead of today's date. The parameter MUST
// be formatted as YYYY-MM-DD.
$today = isset($_REQUEST['date']) && $_REQUEST['date'] != '' ? $_REQUEST['date'] : date('Y-m-d');

// Get a list of device folders.
$device_folders = get_folders($base_folder);

// Loop through all of the device folders.
$num_files_processed = 0;
$num_saved = 0;
foreach ($device_folders as $dev_folder) {
    // Build the path to today's folder for this device.
    $folder_path = $base_folder.$DS.$dev_folder.$DS.$today;
    // Check if the device/date folder exists.
    if (file_exists($folder_path) && is_dir($folder_path)) {
        // Folder exists; get a list of files that haven't been processed.
        $file_list = get_files($folder_path);
        // Process the files (if any).
        foreach ($file_list as $filename) {
            $f_path = $folder_path.$DS.$filename;
            $json = file_get_contents($f_path);
            // Fix the JSON -- insert the missing "," between records.
            $json = str_replace("}{", "},{", $json);
            $data = json_decode($json);
            // Process each row of data and save it to the DB.
            $file_saved = 0;
            foreach ($data as $rec_data) {
                if (save_GPS($dev_folder, $rec_data->date, $rec_data->time, $rec_data->longitude, $rec_data->latitude)) {
                    $file_saved++;
                }
            }
            $num_saved += $file_saved;
            // Rename the file so we can ignore it if processing runs again.
            if ($file_saved > 0) {
                rename($f_path, $folder_path.$DS."_".$filename);
                $num_files_processed++;
            }
        }
    } else {
        echo "<p>" . $folder_path . " not found.</p>\n";
    }
}
echo "Processing complete. ".$num_files_processed." files processed. ".$num_saved." records saved to DB.\n";

function save_GPS($dev_id, $rec_date, $rec_time, $long, $lat) {
    $server  = "localhost";
    $uid     = "your_db_user_id";
    $pwd     = "your_db_password";
    $db_name = "your_database_name";

    $db = mysqli_connect($server, $uid, $pwd, $db_name);
    if (mysqli_connect_errno()) {
        echo "Failed to connect to MySQL server: " . mysqli_connect_errno() . " " . mysqli_connect_error() . "\n";
        return false;
    }
    // Connected to the DB, so save the record. A prepared statement keeps
    // the values read from the uploaded file safely out of the SQL itself.
    $stmt = mysqli_prepare($db, "INSERT INTO `gps_log` (`device`,`date`,`time`,`longitude`,`latitude`) VALUES (?,?,?,?,?)");
    if (!$stmt) {
        mysqli_close($db);
        return false;
    }
    mysqli_stmt_bind_param($stmt, "sssdd", $dev_id, $rec_date, $rec_time, $long, $lat);
    $ok = mysqli_stmt_execute($stmt);
    mysqli_stmt_close($stmt);
    mysqli_close($db);
    return $ok;
}

function get_folders($base_folder) {
    $rslts = array();
    foreach (scandir($base_folder) as $folder) {
        // Ignore plain files, and anything starting with "." (i.e. the
        // current- and parent-folder references).
        if (is_dir($base_folder."/".$folder) && substr($folder, 0, 1) != '.') {
            $rslts[] = $folder;
        }
    }
    return $rslts;
}

function get_files($base_folder) {
    $rslts = array();
    foreach (scandir($base_folder) as $file) {
        // Ignore folders, anything starting with "." (current/parent folder
        // references), and anything starting with "_" (already processed).
        if (!is_dir($base_folder."/".$file) && substr($file, 0, 1) != '.' && substr($file, 0, 1) != '_') {
            $rslts[] = $file;
        }
    }
    return $rslts;
}
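Until it's wired into the admin panel, the script can be run from the shell or from cron. Assuming it's saved as process_gps.php (the name and paths here are hypothetical):

php process_gps.php
# or schedule it, e.g. five minutes before midnight every day:
55 23 * * * php /var/www/process_gps.php >> /var/log/gps_import.log 2>&1

When run from the command line, $_REQUEST['date'] won't be set, so the script falls back to today's date as intended.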
Ok, so here's the web page: https://www.faa.gov/air_traffic/flight_info/aeronav/digital_products/vfr/
What I want to do is download the source of that web page (the equivalent of right-clicking in a browser and selecting View Source), but I need to do it in a batch file without the use of outside tools like wget. I know how to download files using bitsadmin in a batch file, but I'm running into trouble because I don't know the actual URL of the web page. I've tried adding index.html and index.htm and all sorts of page names to the end, and none of them are valid. So how can I find the ACTUAL page name to download?
More info for those who care: the purpose is to parse the code to determine the ever-changing filenames of the GEO-TIFF files on the page, then download those files automatically (rather than needing to manually right-click on each file and save-as about 55 times).
You can use curl. When you run curl followed by an HTTP address, the output is the source code of the page:
curl http://yourAddress.com > tmp.txt
The result will be stored in a tmp.txt file.
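A side note on the "no outside tools" requirement: on Windows 10 version 1803 and later, curl.exe ships with the OS, so a batch file can call it directly. For an HTTPS page like the FAA one, something like:

curl -L "https://www.faa.gov/air_traffic/flight_info/aeronav/digital_products/vfr/" -o page.html

Here -L follows any redirects and -o writes the page source to page.html.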
You could use the Microsoft.XMLHTTP COM object in Windows Scripting Host (VBScript or JScript). Here's a hybrid Batch + JScript example (should be saved with a .bat extension):
@if (@CodeSection == @Batch) @then
@echo off & setlocal
set "url=https://www.faa.gov/air_traffic/flight_info/aeronav/digital_products/vfr/"
cscript /nologo /e:JScript "%~f0" "%url%"
goto :EOF
@end // end Batch / begin JScript

var xhr = WSH.CreateObject('Microsoft.XMLHTTP');
xhr.open('GET', WSH.Arguments(0), true);
xhr.setRequestHeader('User-Agent', 'XMLHTTP/1.0');
xhr.send('');
while (xhr.readyState != 4) WSH.Sleep(50);
WSH.Echo(xhr.responseText);
Example usage would be something like scriptname.bat > saved.html. Or since you're going this far, you might as well let JScript turn that raw HTML data into something useful. Here's an example that scrapes all the tables on that page using DOM methods, builds an object of the table data, then serializes it into JSON for easier parsing or deserialization by other tools:
@if (@CodeSection == @Batch) @then
@echo off & setlocal
set "url=https://www.faa.gov/air_traffic/flight_info/aeronav/digital_products/vfr/"
cscript /nologo /e:JScript "%~f0" "%url%"
goto :EOF
@end // end Batch / begin JScript

var xhr = WSH.CreateObject('Microsoft.XMLHTTP'),
    DOM = WSH.CreateObject('htmlfile'),
    JSON, obj = {};

// Fetch the page.
xhr.open('GET', WSH.Arguments(0), true);
xhr.setRequestHeader('User-Agent', 'XMLHTTP/1.0');
xhr.send('');
while (xhr.readyState != 4) WSH.Sleep(50);

// Parse the HTML in IE9 mode so DOM methods and JSON are available.
DOM.write('<meta http-equiv="x-ua-compatible" content="IE=9" />' + xhr.responseText);
JSON = DOM.parentWindow.JSON;

var tables = DOM.getElementsByTagName('table');
for (var i = 0; i < tables.length; i++) {
    var cols = [],
        rows = tables[i].rows,
        caption = tables[i].caption ? tables[i].caption.innerText : i;
    for (var j = 0; j < rows.length; j++) {
        if (!cols.length) {
            // First row: record the column headings.
            for (var k = 0; k < rows[j].cells.length; k++) {
                cols.push(rows[j].cells[k].innerText);
            }
            obj[caption] = {};
        } else {
            // Subsequent rows: key each row by its first cell.
            var row = rows[j].cells[0].innerText;
            obj[caption][row] = {};
            for (var k = 1; k < rows[j].cells.length; k++) {
                var a = rows[j].cells[k].getElementsByTagName('a'),
                    links = new DOM.parentWindow.Array();
                if (a && a.length) {
                    // Cell contains links: store their URLs.
                    for (var l = 0; l < a.length; l++) links.push(a[l].href);
                    obj[caption][row][cols[k]] = links;
                } else {
                    obj[caption][row][cols[k]] = rows[j].cells[k].innerText;
                }
            }
        }
    }
}
WSH.Echo(JSON.stringify(obj, null, ' '));
DOM.close();
That lets you do neat stuff like query the data in a hierarchical structure, like this PowerShell script (saved with a .ps1 extension):
add-type -as System.Web.Extensions
$JSON = New-Object Web.Script.Serialization.JavaScriptSerializer
$data = cmd /c test.bat
$obj = $JSON.DeserializeObject($data)
$obj['Helicopter Route Charts']['Boston']['Current Edition No. and Date']
This all works with functionality built into Windows without requiring any 3rd party applications or downloads beyond the web request to faa.gov.
When will the database connection be closed in PrestaShop 1.6.1.3 after a DB instance is created by $db = Db::getInstance()?
Do I need to close the database connection manually by calling some DB close function, or will the Db class in PrestaShop handle this itself?
See the code below, a simple PHP file in the root directory of my PrestaShop install that updates one of my tables. This page is called every minute by a cron job, and I am not closing the connection anywhere. Do I need to close it?
$CheckStatusSql = "SELECT * FROM ticket_status WHERE item_id='$ItemID' AND ticket_series='$TicketSeries' AND status='BOOKED'";
$db = Db::getInstance();
$result = $db->executeS($CheckStatusSql, false);
$ChangeStatus = '';
while ($row = $db->nextRow($result)) {
    $status = $row['status'];
    $booked_on = $row['booked_on'];
    $ticket_no = $row['ticket_no'];
    $to_time = strtotime(date("Y-m-d H:i:s")); // Time now
    $from_time = strtotime($booked_on);        // Booked time
    $time_diff_minutes = round(abs($to_time - $from_time) / 60, 2);
    if ($time_diff_minutes > $checkMinutes) {
        $ChangeStatus = $ChangeStatus."Booked ticket no: '".$ticket_no."' exceeds 30 minutes (it is now about ".$time_diff_minutes." minutes); status changed to AVAILABLE<br><br>";
        $updateSql = "UPDATE ticket_status SET status = 'AVAILABLE', booked_on = NULL WHERE item_id='$ItemID' AND ticket_series='$TicketSeries' AND status='BOOKED' AND ticket_no='$ticket_no'";
        $bookResult = $db->executeS($updateSql, false);
    }
}
That is, I am just including the config file (require 'config/config.inc.php';), creating a DB object, and then executing my query, as shown below:
require 'config/config.inc.php';

$checkMinutes = 30; // in minutes
$checkTimeInSeconds = $checkMinutes * 60;

$sql = 'SELECT * FROM ps_ticket WHERE status=5';
$db = Db::getInstance();
$result = $db->executeS($sql, false);
$i = 1;
while ($row = $db->nextRow($result)) {
    $time = strtotime($row['hold_on']);
    $curtime = time();
    if (($curtime - $time) > $checkTimeInSeconds) { // 1800 seconds
        $sql = 'UPDATE `'._DB_PREFIX_.'lopp_ticket`
                SET `id_customer` = 0,
                    `hold_on` = 0,
                    `status` = 1
                WHERE `ticket_id` = '.$row['ticket_id'];
        if (Db::getInstance()->execute($sql)) {
            echo $row['ticket_id'].' Updated'.'<br>';
        }
    } else {
        echo $row['ticket_no'].'No'.'<br>';
    }
    $i++;
}
So, in the code above, do I need to close the DB connection anywhere, or will PrestaShop handle it itself?
I ask because the server admin says too many database sessions are being opened by our code.
Also, is there any way to check where so many DB sessions are being opened or kept active?
As far as I know, I have never closed a DB connection in PrestaShop.
Their documentation also does not explicitly say to close each DB request.
Looking into their source code, they never run a close command after a DB connection either.
Looking into the classes\db\DbMySQLi.php class, we can find the function below.
/**
 * Destroys the database connection link.
 *
 * @see DbCore::disconnect()
 */
public function disconnect()
{
    @$this->link->close();
}
Then, looking into classes\db\Db.php, we find where $this->disconnect() is called. So it's safe to say PrestaShop closes all of its DB connections automatically:
/**
 * Closes connection to database.
 */
public function __destruct()
{
    if ($this->link) {
        $this->disconnect();
    }
}
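As for the admin's complaint about too many sessions: you can inspect the open connections directly in MySQL to see who is holding them and what they are running. For example:

SHOW FULL PROCESSLIST;                    -- one row per open connection: user, host, current query
SHOW STATUS LIKE 'Threads_connected';     -- number of connections open right now
SHOW STATUS LIKE 'Max_used_connections';  -- high-water mark since the server started

If the count climbs every minute, the cron script is the likely culprit; if it stays flat, the sessions are coming from somewhere else.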
I have multiple folders (six or so) with multiple .CSV files in them. The CSV files are all of the same format:
Heading1,Heading2,Heading3
1,Monday,2.45
2,Monday,3.765...
Each .CSV has the same heading names [same data source for different months]. What is the best way to import these CSVs into SQL Server 2008? The server does not have xp_cmdshell enabled [for security reasons, which I cannot change], so any method which uses it (which I originally tried) will not work.
EDIT
The CSV files are a maximum of 2mb in size and do not contain any commas (other than those required for delimiters).
Any ideas?
For example, say you have a CSV file named sample.csv on the D:\ drive, with this inside:
Heading1,Heading2,Heading3
1,Monday,2.45
2,Monday,3.765
Then you can use this query:
DECLARE @str nvarchar(max),
        @x xml,
        @head xml,
        @sql nvarchar(max),
        @params nvarchar(max) = '@x xml'

SELECT @str = BulkColumn
FROM OPENROWSET (BULK N'D:\sample.csv', SINGLE_CLOB) AS a

SELECT @head = CAST('<row><s>'+REPLACE(SUBSTRING(@str,1,CHARINDEX(CHAR(13)+CHAR(10),@str)-1),',','</s><s>')+'</s></row>' as xml)

SELECT @x = CAST('<row><s>'+REPLACE(REPLACE(SUBSTRING(@str,CHARINDEX(CHAR(10),@str)+1,LEN(@str)),CHAR(13)+CHAR(10),'</s></row><row><s>'),',','</s><s>')+'</s></row>' as xml)

SELECT @sql = N'
SELECT t.c.value(''s[1]'',''int'') '+QUOTENAME(t.c.value('s[1]','nvarchar(max)'))+',
       t.c.value(''s[2]'',''nvarchar(max)'') '+QUOTENAME(t.c.value('s[2]','nvarchar(max)'))+',
       t.c.value(''s[3]'',''decimal(15,7)'') '+QUOTENAME(t.c.value('s[3]','nvarchar(max)'))+'
FROM @x.nodes(''/row'') as t(c)'
FROM @head.nodes('/row') as t(c)

EXEC sp_executesql @sql, @params, @x = @x
To get output like:
Heading1  Heading2  Heading3
1         Monday    2.4500000
2         Monday    3.7650000
At first we take the data in as SINGLE_CLOB with the help of OPENROWSET.
Then we put it all in the @str variable. The part from the beginning to the first \r\n goes into @head, and the rest into @x, both converted to XML. Structure:
<row>
  <s>Heading1</s>
  <s>Heading2</s>
  <s>Heading3</s>
</row>
<row>
  <s>1</s>
  <s>Monday</s>
  <s>2.45</s>
</row>
<row>
  <s>2</s>
  <s>Monday</s>
  <s>3.765</s>
</row>
After that we build a dynamic query like:
SELECT t.c.value('s[1]','int') [Heading1],
       t.c.value('s[2]','nvarchar(max)') [Heading2],
       t.c.value('s[3]','decimal(15,7)') [Heading3]
FROM @x.nodes('/row') as t(c)
And execute it, passing the variable @x as a parameter.
Hope this helps you.
I ended up solving my problem using a non-SQL answer. Thank you to everyone who contributed. I apologise for going with a completely off-field answer using PHP. Here is what I created to solve this problem:
<?php
//////////////////////////////////////////////////////////////////////////////////
//
//  Date:        21/10/2016.
//  Description: Insert CSV rows into a pre-created SQL table with the same
//               column structure.
//  Notes:       - PHP script needs a server to execute.
//               - Can run line by line ('INSERT') or bulk ('BULK INSERT').
//               - 'BULK INSERT' needs bulk-insert user permissions.
//
//  Currently only works under the following file structure:
//    | ROOT FOLDER
//    | FOLDER 1
//      | CSV 1
//      | CSV 2...
//    | FOLDER 2
//      | CSV 1
//      | CSV 2...
//    | FOLDER 3...
//      | CSV 1
//      | CSV 2...
//
//////////////////////////////////////////////////////////////////////////////////

// Error log - the folder must be pre-created for this to work.
ini_set("error_log", "phplog/bulkinsertCSV.php.log");

// Set the name of the root directory here (where the folders of CSVs are).
$rootPath = '\\\networkserver\folder\rootfolderwithCSVs';

// Get an array with the folder names located at the root directory location.
// '0' sorts alphabetically ascending, '1' descending.
$rootArray = scandir($rootPath, 0);

// Set database connection details.
$myServer = "SERVER";
$myUser   = "USER";
$myPass   = "PASSWORD";
$myDB     = "DATABASE";

// Create the connection to the database.
$connection = odbc_connect("Driver={SQL Server};Server=$myServer;Database=$myDB;", $myUser, $myPass)
    or die("Couldn't connect to SQL Server on $myServer");

// Extend the script execution timeout.
set_time_limit(10000);

// Set to true for bulk insert, false for line-by-line insert.
// [If set to TRUE] - MUST HAVE BULK INSERT PERMISSIONS TO WORK.
$bulkinsert = true;

// Loop through the folders and find the CSV files.
loopThroughAllCSVs($rootArray, $rootPath);

// Once the procedure finishes, close the connection.
odbc_close($connection);

function loopThroughAllCSVs($folderArray, $root) {
    $fileFormat = '.csv';
    // Start at index 2 to skip the "." and ".." entries returned by scandir().
    for ($x = 2; $x < sizeof($folderArray); $x++) {
        $eachFileinFolder = scandir($root."\\".$folderArray[$x]);
        for ($y = 0; $y < sizeof($eachFileinFolder); $y++) {
            $fullCSV_path = $root."\\".$folderArray[$x]."\\".$eachFileinFolder[$y];
            // Only process files whose names end in ".csv".
            if (substr_compare($fullCSV_path, $fileFormat, strlen($fullCSV_path) - strlen($fileFormat), strlen($fileFormat)) === 0) {
                parseCSV($fullCSV_path);
            }
        }
    }
}

function parseCSV($path) {
    print_r($path);
    print("<br>");
    if ($GLOBALS['bulkinsert'] === false) {
        $csv = array_map('str_getcsv', file($path));
        array_shift($csv); // Remove headers.
        foreach ($csv as $line) {
            writeLinetoDB($line);
        }
    } else {
        bulkInserttoDB($path);
    }
}

function writeLinetoDB($line) {
    $tablename = "[DATABASE].[dbo].[TABLE]";
    $insert = "INSERT INTO ".$tablename." (Column1,Column2,Column3,Column4,Column5,Column6,Column7)
               VALUES ('".$line[0]."','".$line[1]."','".$line[2]."','".$line[3]."','".$line[4]."','".$line[5]."','".$line[6]."')";
    $result = odbc_prepare($GLOBALS['connection'], $insert);
    odbc_execute($result) or die(odbc_error($GLOBALS['connection']));
}

function bulkInserttoDB($csvPath) {
    $tablename = "[DATABASE].[dbo].[TABLE]";
    $insert = "BULK INSERT ".$tablename."
               FROM '".$csvPath."'
               WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n')";
    print_r($insert);
    print_r("<br>");
    $result = odbc_prepare($GLOBALS['connection'], $insert);
    odbc_execute($result) or die(odbc_error($GLOBALS['connection']));
}
?>
I ended up using the script above to write to the database line by line... this was going to take hours, so I modified the script to use BULK INSERT, which unfortunately we didn't have 'permissions' to use. Once I 'obtained' permissions, the BULK INSERT method worked a charm.
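For reference, the statement the PHP script ends up issuing is plain T-SQL, so once the permissions are in place it can also be tested directly in Management Studio. The table name and path below are placeholders; FIRSTROW = 2 skips the heading row, which the PHP version above doesn't do:

BULK INSERT [DATABASE].[dbo].[TABLE]
FROM '\\networkserver\folder\rootfolderwithCSVs\FOLDER1\file1.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);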
I am trying to pull data from the justin.tv API and store the output echoed by the code below in a database or a file to be included in the sidebar of a website, and I am not sure how to do this. An example of what I am trying to achieve is the live-streamers list in the sidebar of teamliquid.net. I have done that, but the way I have done it slows the site way down because it makes about 50 JSON requests every time the page loads. I just need to get this into a cached file that updates every 60 seconds or so. Any ideas?
<?php
$json_file = file_get_contents("http://api.justin.tv/api/stream/list.json?channel=colcatz");
$json_array = json_decode($json_file, true);
if ($json_array[0]['name'] == 'live_user_colcatz') echo 'coL.CatZ Live<br>';
$json_file = file_get_contents("http://api.justin.tv/api/stream/list.json?channel=coldrewbie");
$json_array = json_decode($json_file, true);
if ($json_array[0]['name'] == 'live_user_coldrewbie') echo 'coL.drewbie Live<br>';
?>
I'm not entirely sure how you would imagine this being cached, but the code below is an adaptation of a block of code I've used in the past for some Twitter work. There are a few things that could probably be done better from a security perspective. Anyway, this gives you a generic way of grabbing the feed, parsing through it, and then sending it to the database.
Warning: This assumes that there is a database connection already established within your own system.
/**
* Class SM
*
* Define a generic wrapper class with some system
* wide functionality. In this case we'll give it
* the ability to fetch a social media feed from
* another server for parsing and possibly caching.
*
*/
class SM {
    private $api, $init, $url;

    public function fetch_page_contents ($url) {
        $init = curl_init();
        try {
            curl_setopt($init, CURLOPT_URL, $url);
            curl_setopt($init, CURLOPT_HEADER, 0);
            curl_setopt($init, CURLOPT_RETURNTRANSFER, 1);
        } catch (Exception $e) {
            error_log($e->getMessage());
        }
        $output = curl_exec($init);
        curl_close($init);
        return $output;
    }
}
/**
* Class JustinTV
*
* Define a specific site wrapper for getting the
* timeline for a specific user from the JustinTV
* website. Optionally you can return the code as
* a JSON string or as a decoded PHP array with the
* $api_decode argument in the get_timeline function.
*
*/
class JustinTV extends SM {
    private $timeline_document,
            $api_user,
            $api_format,
            $api_url;

    public function get_timeline ($api_user, $api_decode = 1, $api_format = 'json', $api_url = 'http://api.justin.tv/api/stream/list') {
        $timeline_document = $api_url . '.' . $api_format . '?channel=' . $api_user;
        $SM_init = new SM();
        $decoded_json = json_decode($SM_init->fetch_page_contents($timeline_document));
        // Make sure that our JSON is really JSON.
        if ($decoded_json === null && json_last_error() !== JSON_ERROR_NONE) {
            error_log('Badly formed, dangerous, or altered JSON string detected. Exiting program.');
        }
        if ($api_decode == 1) {
            return $decoded_json;
        }
        return $SM_init->fetch_page_contents($timeline_document);
    }
}
/**
 * Instantiation of the class
 *
 * Instantiate our JustinTV class and fetch a user timeline
 * from JustinTV for the user colcatz. Then loop through
 * the results and enter each of the individual results
 * into a database table called cache_sm_justintv.
 *
 */
$SM_JustinTV = new JustinTV();
$user_timeline = $SM_JustinTV->get_timeline('colcatz');
foreach ($user_timeline as $entry) {
    // Here you could check whether the entry already exists in the system
    // before you cache it, thus reducing duplicate IDs.
    $date = date('U');
    $query = sprintf("INSERT INTO `cache_sm_justintv` (`id`, `cache_content`, `date`) VALUES (%d, '%s', %d)",
        $entry->id, mysql_real_escape_string(json_encode($entry)), $date);
    $result = mysql_query($query);
    // Do some other stuff, and then close the MySQL connection when you're done.
}
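To get the 60-second refresh the question asks about, the database isn't strictly required either; a small file cache in front of the fetch does the job. Here is a rough sketch building on the class above (the cache file location and TTL are arbitrary choices of mine, not part of the original answer):

function cached_timeline($user, $ttl = 60) {
    $cache = sys_get_temp_dir() . '/justintv_' . $user . '.json';
    // Serve the cached copy while it is fresher than $ttl seconds.
    if (file_exists($cache) && (time() - filemtime($cache)) < $ttl) {
        return json_decode(file_get_contents($cache));
    }
    // Cache is missing or stale: fetch the raw JSON and store it.
    $SM_JustinTV = new JustinTV();
    $raw = $SM_JustinTV->get_timeline($user, 0); // 0 = return the raw JSON string
    file_put_contents($cache, $raw, LOCK_EX);
    return json_decode($raw);
}

The sidebar then calls cached_timeline('colcatz') on every page load, and at most one HTTP request per minute actually goes out to justin.tv.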
I have a bunch of html files in a site that were created in the year 2000 and have been maintained to this day. We've recently began an effort to replace illegal characters with their html entities. Going page to page looking for copyright symbols and trademark tags seems like quite a chore. Do any of you know of an app that will take a bunch of html files and tell me where I need to replace illegal characters with html entities?
You could write a PHP script to do this (if you can't, I'd be happy to help), but I assume you have already converted some of the special characters, which makes the task a little harder (although I still think it's possible)...
Any good text editor will do a file contents search for you and return a list of matches.
I do this with EditPlus. There are several editors like Notepad++, TextPad, etc that will easily help you do this.
You do not even have to open the files. You just specify a path where the files are stored, a file mask (*.html), and the text to search for ("©"), and the editor comes back with a list of matches; double-clicking one opens the file at the matching line.
I also have a website that needs to regularly convert large numbers of file names back and forth between character sets. While a text editor can do this, a portable solution using two steps in PHP was preferable: first add the filenames to an array, then do the search and replace. An extra piece of code in the function excludes certain file types from the array.
function listdir($start_dir = '.') {
    $nonFilesArray = array('index.php', 'index.html', 'help.html'); // disallowed files & subfolders
    $filesArray = array(); // holds the relative path of each matching file
    if (is_dir($start_dir)) {
        $fh = opendir($start_dir);
        while (($tmpFile = readdir($fh)) !== false) { // get each filename without its path
            if (strcmp($tmpFile, '.') == 0 || strcmp($tmpFile, '..') == 0) continue; // skip . & ..
            $filepath = $start_dir . '/' . $tmpFile; // the relative path/to/file
            if (is_dir($filepath)) {
                // If path/to/file is a folder, recurse into it.
                $filesArray = array_merge($filesArray, listdir($filepath));
            } else {
                // Otherwise add $filepath to the array, unless it is excluded.
                $test = 1;
                foreach ($nonFilesArray as $nonfile) {
                    if ($tmpFile == $nonfile) { $test = 0; break; }
                }
                if ($test == 1 && pathinfo($tmpFile, PATHINFO_EXTENSION) == 'html') {
                    $filepath = substr_replace($filepath, '', 0, 17); // strip the initial part of $filepath
                    $filesArray[] = $filepath;
                }
            }
        }
        closedir($fh);
    } else {
        $filesArray = false; // no such folder
    }
    return $filesArray;
}

$filesArray = listdir($targetdir);      // call the function for this directory
$numNewFiles = count($filesArray);      // get the number of records
for ($i = 0; $i < $numNewFiles; $i++) { // read the filenames and replace unwanted characters
    $tmplnk = $linkpath . $filesArray[$i];
    $outname = basename($filesArray[$i], ".html");
    $outname = str_replace('-', ' ', $outname);
}
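With the file list in hand, the replacement step itself can be a straightforward map from each offending character to its entity. A rough sketch (my addition, not part of the original function; it assumes the stripped path prefix maps back under $targetdir, and the character map should be extended to whatever your pages actually contain):

$map = array('©' => '&copy;', '™' => '&trade;', '®' => '&reg;');
for ($i = 0; $i < $numNewFiles; $i++) {
    $file = $targetdir . '/' . $filesArray[$i];
    $html = file_get_contents($file);
    $fixed = strtr($html, $map); // replace every mapped character in one pass
    if ($fixed !== $html) {
        file_put_contents($file, $fixed); // only rewrite files that actually changed
    }
}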