DB schema for custom file navigation with Laravel and MySQL

The problem
Hi, I want to build a web app in which the user can navigate through a system of directories and subdirectories and download / view the files (similar to a file manager, but with some important limitations specific to this project).
I want to create a panel in which I can navigate (through POST requests) into the next folder, download a file, or upload a file.
There are two categories of user :
user A (can view both user A and user B files)
user B (can only view user B files)
Possible solution
What if I create a table like this:
-- --- --
path -> string -> primary key (this path equals the path in the file system)
isDirectory -> boolean
canViewByUserB -> boolean
-- --- --
The path field is unique, so I cannot have two files with the same name in the same folder.
So, when the panel is created, I search with a POST request for paths that don't contain "/" (maybe with a LIKE statement?), i.e. the contents of the root folder.
If this query retrieves:
a directory: when the user clicks on it, I do a POST request and search for paths that start with "nameOfThisDirectory/someString" but contain no further "/" (in this way I retrieve the directory's children)
a file: a GET request downloads the file at the specified path
EDIT
To traverse the file system through the DB, here is the query:
select * from file where path REGEXP '^([^/]*[/]){2}[^/]*$';
the '2' indicates the number of '/' characters in the path (here, two, i.e. the second directory level)
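For illustration, the two navigation queries could look like this; 'docs/reports' is a hypothetical directory name, and the LIKE pair is an alternative way to write the one-level-deeper match:
-- root folder: paths containing no '/' at all
SELECT * FROM file WHERE path REGEXP '^[^/]*$';
-- children of 'docs/reports': exactly one more path segment, nothing deeper
SELECT * FROM file
WHERE path LIKE 'docs/reports/%'
AND path NOT LIKE 'docs/reports/%/%';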
What do you think of this solution?
With this setup I can easily do some things, such as:
keep a total download count per file (see the sketch below)
count the number of views of a file
easily navigate through the file system
apply policies to files (user A and user B)
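For example, the counters could be plain columns on the same table, and the user B policy is a simple filter; the counter column names below are assumptions, not part of the schema above:
-- hypothetical counter columns on the same table
ALTER TABLE file
ADD downloads INT NOT NULL DEFAULT 0,
ADD views INT NOT NULL DEFAULT 0;
-- bump the counter on each download
UPDATE file SET downloads = downloads + 1 WHERE path = 'docs/report.pdf';
-- what user B is allowed to see
SELECT * FROM file WHERE canViewByUserB = 1;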
Thanks, I'm waiting for advice.

Related

DLT: commas treated as part of column name

I am trying to create a STREAMING LIVE TABLE object in my DataBricks environment, using an S3 bucket with a bunch of CSV files as a source.
The syntax I am using is:
CREATE OR REFRESH STREAMING LIVE TABLE t1
COMMENT "test table"
TBLPROPERTIES
(
"myCompanyPipeline.quality" = "bronze"
, 'delta.columnMapping.mode' = 'name'
, 'delta.minReaderVersion' = '2'
, 'delta.minWriterVersion' = '5'
)
AS
SELECT * FROM cloud_files
(
"/input/t1/"
,"csv"
,map
(
"cloudFiles.inferColumnTypes", "true"
, "delimiter", ","
, "header", "true"
)
)
A sample source file content:
ROW_TS,ROW_KEY,CLASS_ID,EVENT_ID,CREATED_BY,CREATED_ON,UPDATED_BY,UPDATED_ON
31/07/2018 02:29,4c1a985c-0f98-46a6-9703-dd5873febbbb,HFK,XP017,test-user,02/01/2017 23:03,,
17/01/2021 21:40,3be8187e-90de-4d6b-ac32-1001c184d363,HTE,XP083,test-user,02/09/2017 12:01,,
08/11/2019 17:21,05fa881e-6c8d-4242-9db4-9ba486c96fa0,JG8,XP083,test-user,18/05/2018 22:40,,
When I run the associated pipeline, I am getting the following error:
org.apache.spark.sql.AnalysisException: Cannot create a table having a column whose name contains commas in Hive metastore.
For some reason, the loader is not recognizing commas as column separators and is trying to load the whole thing into a single column.
I spent a good few hours already trying to find a solution. Replacing commas with semicolons (both in the source file and in the "delimiter" option) does not help.
Trying to manually upload the same file to a regular (i.e. non-streaming) Databricks table works just fine. The issue is solely with a streaming table.
Ideas?
Not exactly the type of solution I would have expected here, but it seems to work, so...
Rather than using SQL to create a DLT, using Python scripting helps:
import dlt

@dlt.table
def t1():
    # the CSV options from the SQL version (header, delimiter,
    # cloudFiles.inferColumnTypes) can be passed as further .option() calls
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "csv")
        .load("/input/t1/")
    )
Note that the above script needs to be executed via a DLT pipeline (running it directly from a notebook will throw a ModuleNotFoundError exception).

Verify the existence of files in a Microsoft Access database

I have an Access database that has a column full of file names. Some of these files have been moved or deleted. I basically need to verify the existence of each file (10,000+ files). Basically I need to:
Loop through the table column (MyFilesNames) and check whether each file exists.
If the file exists, move the file to the new location (Z:\MyFiles\myfilename.pdf) and update the stored file name to that new location.
NOTE: The file names have many different extensions (.pdf, .jpg, .gif, .docx, .xlsx, etc.)
THIS IS MY CURRENT TABLE & FILES
-FILE NAMES STORED IN THE DATABASE:
MyFilesNames:
Z:\Temp\1.pdf
Z:\Temp\2.jpg
Z:\Temp\3.gif
Z:\Temp\4.pdf
Z:\Temp\6.pdf
-ACTUAL FILES STORED ON THE COMPUTER:
Z:\Temp\1.pdf
Z:\Temp\2.jpg
Z:\Temp\3.gif
Z:\Temp\4.pdf
Z:\Temp\5.pdf
THIS IS WHAT I AM TRYING TO ACHIEVE
-FILE NAMES STORED IN THE DATABASE:
MyFilesNames:
Z:\MyFiles\1.pdf
Z:\MyFiles\2.jpg
Z:\MyFiles\3.gif
Z:\MyFiles\4.pdf
Z:\Temp\6.pdf
-ACTUAL FILES STORED ON THE COMPUTER:
Z:\MyFiles\1.pdf
Z:\MyFiles\2.jpg
Z:\MyFiles\3.gif
Z:\MyFiles\4.pdf
Z:\Temp\5.pdf
Is anyone able to help me achieve this using Access 2007?

Proftpd specific user configuration from MySQL

I have already set up a proftpd server with a MySQL connection.
Everything works fine.
I would like to set specific permissions for each user from the database (PathAllowFilter, PathDenyFilter, ...).
The server runs on Ubuntu 12.04 LTS.
It is not so easy; there is no single module that does this. But I found a solution.
It's not optimal, because you have to restart the ProFTPd server each time you change the MySQL configuration, but it works.
As you already have a ProFTPd server running with MySQL, I will explain only the user-specific configuration part.
For this solution you need ProFTPd to be compiled with these modules:
mod_ifsession (with this module you will be able to configure <IfUser> conditions)
mod_conf_sql (with this module you will be able to load configuration from MySQL)
To help you with the ProFTPd recompilation, you can run proftpd -V to see how your version is configured. You can find some documentation here.
Once you have compiled your ProFTPd server and it is running, log on to your MySQL server.
If you read the mod_conf_sql documentation, it says to create 3 tables: ftpctxt, ftpconf, ftpmap. We will not create these tables unless you want global configuration from MySQL.
We will fake the MySQL configuration with "views".
1. First, add each specific configuration option as a column on the user table (make sure to set a default value):
ALTER TABLE ftpuser
ADD PathDenyFilter VARCHAR( 255 ) NOT NULL DEFAULT '(\.ftp)|(\.hta)[a-z]+$';
ALTER TABLE ftpuser
ADD PathAllowFilter VARCHAR( 255 ) NOT NULL DEFAULT '.*$';
...
2. Create the conf view:
The user's id and the configuration column name are concatenated to make a unique id
The configuration column name is used as type
The configuration value is used as info
The view is a union of selects (one select per configuration column)
CREATE VIEW ftpuser_conf AS
SELECT concat(ftpuser.id,'-PathDenyFilter') AS id,
       'PathDenyFilter' AS type, ftpuser.PathDenyFilter AS info
FROM ftpuser
UNION
SELECT concat(ftpuser.id,'-PathAllowFilter') AS id,
       'PathAllowFilter' AS type, ftpuser.PathAllowFilter AS info
FROM ftpuser;
3. Create the ctxt view:
This view is a union of a "default" row and the users' rows (the "default" row has 1 as its id, and each user's row has the user's id + 1 as its id).
Concatenate "userconf-" and the user's userid as name
"IfUser" as type
The user's username as info
CREATE VIEW ftpuser_ctxt AS
SELECT 1 AS id, NULL AS parent_id, 'default' AS name, 'default' AS type, NULL AS info
UNION
SELECT (ftpuser.id + 1) AS id, 1 AS parent_id,
       concat('userconf-',ftpuser.userid) AS name,
       'IfUser' AS type, ftpuser.userid AS info
FROM ftpuser;
4. Create the map view:
The user's id and the configuration column name are concatenated for conf_id
The user's id + 1 is used for ctxt_id
The view is a union of selects (one select per configuration column)
CREATE VIEW ftpuser_map AS
SELECT concat(ftpuser.id,'-PathDenyFilter') AS conf_id, (ftpuser.id + 1) AS ctxt_id
FROM ftpuser
UNION
SELECT concat(ftpuser.id,'-PathAllowFilter') AS conf_id, (ftpuser.id + 1) AS ctxt_id
FROM ftpuser;
5. Add these lines to your ProFTPd configuration
<IfModule mod_conf_sql.c>
Include sql://user:password@host/db:database/ctxt:ftpuser_ctxt:id,parent_id,type,info/conf:ftpuser_conf:id,type,info/map:ftpuser_map:conf_id,ctxt_id/base_id=1
</IfModule>
Where:
user => your MySQL username
password => your MySQL password
host => your MySQL host
database => your MySQL database
6. Restart your ProFTPd server
I hope this will help you. Good luck

Codeigniter upgradable module logic for database process

I am trying to build my own CMS using CodeIgniter.
I have already written some modules, but over time I have made changes to them.
Right now, if I want to upgrade a module, I send the files over FTP and change the database fields with phpMyAdmin.
It takes a lot of time, it is easy to miss a change, and for every project that uses this module I have to repeat the same changes.
Now, I am planning to make an installation system.
My modules directory structure is like below:
/modules
/modules/survey/
/modules/survey/config
/modules/survey/config/autoload.php
/modules/survey/config/config.php
/modules/survey/config/routes.php
/modules/survey/config/install.php
/modules/survey/controllers
/modules/survey/controllers/entry.php...
/modules/survey/models
/modules/survey/models/survey.php...
/modules/survey/views
/modules/survey/views/index.php...
I thought all modules should have an install.php file in the config directory that keeps the related module's settings, like below:
$config['version'] = 1.1; //or 1.2, 1.3 etc.
$config['module'] = 'Survey Module';
$config['module_slug'] = 'survey';
$config['module_db_table'] = 'module_survey';
I have an installed_modules table already:
id, module, module_slug, version
Now I am trying to make an installation script, like below.
Before starting, I zip the module's files.
1- Upload the zip file through an installation page to a temp directory
2- Unzip the module in this temp directory
3- Find install.php
4- Get the module's information from install.php
5- Check whether this module is already in the installed_modules table (see the query sketch after this list)
6a) If it's not: create a new module_survey table and copy this temp directory into the real modules directory.
6b) If it's already there: change the structure of this table without losing the previously added data, delete all module files, and copy the new ones from temp into the modules directory.
7- When everything is done, delete the temp directory.
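Step 5 could then be a simple lookup against installed_modules, using the columns listed above; a sketch:
-- is the module installed, and at which version?
SELECT version FROM installed_modules WHERE module_slug = 'survey';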
I'm stuck on 6a and 6b.
For 6a: how should I create, for example, the 'module_survey' table?
Should I add a $config['db_query'] to install.php like this:
$config['db_query'] = "CREATE TABLE IF NOT EXISTS `module_survey` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(50) DEFAULT NULL,
`lang_id` int(11) NOT NULL DEFAULT '1',
`usort` int(3) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ;
";
and then run this query? Or what is your advice here? There may not be just one table; some modules will need two or more tables with relations to each other.
And for 6b:
I thought I should create a new temp table named "temp_module_survey".
For the old fields:
$oldFields = $this->db->field_data('module_survey');
For the new fields:
$newFields = $this->db->field_data('temp_module_survey');
Then compare which fields are newly added, which were deleted, and which fields' definitions have changed.
And then:
add the new fields to the old table
delete the unnecessary fields from the old table
update the fields whose definitions have changed (see the sketch after this list)
Then remove the temporary table.
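In SQL terms, the 6b reconciliation would boil down to statements like these; the column names are hypothetical examples, not taken from the module:
-- a field that exists only in the new schema
ALTER TABLE `module_survey` ADD `is_active` TINYINT(1) NOT NULL DEFAULT 1;
-- a field that no longer exists
ALTER TABLE `module_survey` DROP `usort`;
-- a field whose definition has changed
ALTER TABLE `module_survey` MODIFY `name` VARCHAR(100) DEFAULT NULL;
ALTER TABLE preserves the existing rows, which is the point of doing it this way instead of dropping and recreating the table.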
To summarize: what should I do to apply database changes without losing the old data?
I hope I explained it clearly.
Thank you.
Phil Sturgeon's codeigniter-migrations can help.

How do I customise the CREATE DATABASE statement in VSTS DB Edition Deploy?

I'm using VSTS Database Edition GDR Version 9.1.31024.02
I've got a project where we will be creating multiple databases with identical schema, on the fly, as customers are added to the system. It's one DB per customer. I thought I should be able to use the deploy script to do this. Unfortunately I always get the full filenames specified on the CREATE DATABASE statement. For example:
CREATE DATABASE [$(DatabaseName)]
ON
PRIMARY(NAME = [targetDBName], FILENAME = N'$(DefaultDataPath)targetDBName.mdf')
LOG ON (NAME = [targetDBName_log], FILENAME = N'$(DefaultDataPath)targetDBName_log.ldf')
GO
I'd expected something more like this
CREATE DATABASE [$(DatabaseName)]
ON
PRIMARY(NAME = [targetDBName], FILENAME = N'$(DefaultDataPath)$(DatabaseName).mdf')
LOG ON (NAME = [targetDBName_log], FILENAME = N'$(DefaultDataPath)$(DatabaseName)_log.ldf')
GO
Or even
CREATE DATABASE [$(DatabaseName)]
I'm not going to be running this on an ongoing basis, so I'd like to make it as simple as possible for the next guy. There are a bunch of options for deployment in the project properties, but I can't get this to work the way I'd like.
Anyone know how to set this up?
Better late than never: I know how to get the $(DefaultDataPath)$(DatabaseName) file names from your second example.
The SQL you're showing in your first code snippet suggests that you don't have scripts for creating the database files in your VSTS:DB project, perhaps because you deliberately excluded them from any schema comparisons you've done. I found it a little counter-intuitive, but the solution is to let VSTS:DB script the MDF and LDF in your development environment, then edit those scripts to use the SQLCMD variables.
In your database project, go to the folder Schema Objects > Database Level Objects > Storage > Files. In there, add these two files:
Database.sqlfile.sql
ALTER DATABASE [$(DatabaseName)]
ADD FILE (NAME = [$(DatabaseName)],
FILENAME = '$(DefaultDataPath)$(DatabaseName).mdf',
SIZE = 2304 KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024 KB)
TO FILEGROUP [PRIMARY];
Database_log.sqlfile.sql
ALTER DATABASE [$(DatabaseName)]
ADD LOG FILE (NAME = [$(DatabaseName)_log],
FILENAME = '$(DefaultDataPath)$(DatabaseName)_log.ldf',
SIZE = 1024 KB, MAXSIZE = 2097152 MB, FILEGROWTH = 10 %);
The full database creation script that VSTS:DB, or for that matter VSDBCMD.exe, generates will now use the SQLCMD variables for naming the MDF and LDF files, allowing you to specify them on the command line or in MSBuild.
We do this using a template database that we back up, copy, and restore as new customers are brought online. We don't do any of the schema creation with scripts but with a live, empty DB.
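A sketch of that template approach in T-SQL; the database names and paths are hypothetical, and the logical file names are borrowed from the question's first snippet:
-- back up the empty template once
BACKUP DATABASE TemplateDb TO DISK = N'C:\Backups\TemplateDb.bak';
-- restore it under each new customer's name
RESTORE DATABASE NewCustomerDb
FROM DISK = N'C:\Backups\TemplateDb.bak'
WITH MOVE 'targetDBName' TO N'C:\Data\NewCustomerDb.mdf',
MOVE 'targetDBName_log' TO N'C:\Data\NewCustomerDb_log.ldf';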
Hmm, well it seems that the best answer so far (given the overwhelming response) is to edit the file after the fact... Still looking.