I have an Access database that has a column full of file names. Some of these files have been moved or deleted, so I need to verify the existence of each file (10,000+ files). Basically I need to:
Loop through the table column (MyFilesNames) and check whether the file exists.
If the file exists, move the file to a new location (Z:\MyFiles\myfilename.pdf) and update the stored file name to the new location (Z:\MyFiles\myfilename.pdf).
NOTE: The file names have many different extensions (.pdf, .jpg, .gif, .docx, .xlsx, etc.).
THIS IS MY CURRENT TABLE & FILES
-FILE NAMES STORED IN THE DATABASE:
MyFilesNames:
Z:\Temp\1.pdf
Z:\Temp\2.jpg
Z:\Temp\3.gif
Z:\Temp\4.pdf
Z:\Temp\6.pdf
-ACTUAL FILES STORED ON THE COMPUTER:
Z:\Temp\1.pdf
Z:\Temp\2.jpg
Z:\Temp\3.gif
Z:\Temp\4.pdf
Z:\Temp\5.pdf
THIS IS WHAT I AM TRYING TO ACHIEVE
-FILE NAMES STORED IN THE DATABASE:
MyFilesNames:
Z:\MyFiles\1.pdf
Z:\MyFiles\2.jpg
Z:\MyFiles\3.gif
Z:\MyFiles\4.pdf
Z:\Temp\6.pdf
-ACTUAL FILES STORED ON THE COMPUTER:
Z:\MyFiles\1.pdf
Z:\MyFiles\2.jpg
Z:\MyFiles\3.gif
Z:\MyFiles\4.pdf
Z:\Temp\5.pdf
Is anyone able to help me achieve this using Access 2007?
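For what it's worth, here is a minimal DAO/VBA sketch of that loop for Access 2007. The table name tblMyFiles is an assumption (only the field name MyFilesNames appears above), it assumes Z:\MyFiles\ already exists, and it is untested, so adjust it to your schema and add error handling before running it against 10,000+ rows.

Public Sub MoveAndRelinkFiles()
    ' Minimal sketch: walk the table, move each existing file to Z:\MyFiles\
    ' and update the stored path. Table name tblMyFiles is an assumption.
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim oldPath As String
    Dim newPath As String

    Set db = CurrentDb
    Set rs = db.OpenRecordset("SELECT MyFilesNames FROM tblMyFiles", dbOpenDynaset)

    Do Until rs.EOF
        oldPath = Nz(rs!MyFilesNames, "")
        If Len(oldPath) > 0 Then
            If Len(Dir(oldPath)) > 0 Then               ' the file still exists
                newPath = "Z:\MyFiles\" & Dir(oldPath)  ' keep the original file name
                Name oldPath As newPath                 ' move the file
                rs.Edit
                rs!MyFilesNames = newPath               ' point the record at the new location
                rs.Update
            End If
        End If
        rs.MoveNext
    Loop

    rs.Close
    Set rs = Nothing
    Set db = Nothing
End Sub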
I have a stored procedure that is triggered when a specific file is dropped in an AFT location. The file can come in two formats, "blahfile170001" and "BLAHFILE170001", and I need to ensure that both files trigger and are picked up.
The original code just had "blahfile170001" and we are now receiving it in both formats. I updated it below to include the capitalized version, separating the two with a ; and also trying quotes, but to no avail. I'm newer to SQL so I'm not sure what I am missing here. This is for SQL Server 2014 and the trigger definition is stored as XML data.
<Trigger Type="File">
<FileLocation>\\server\share\dummyfiles\Test\AFT\</FileLocation>
<FileMask>blahfile170001;BLAHFILE170001</FileMask>
<Schedule Type="DaysWeek" Days="Monday,Tuesday,Wednesday,Thursday,Friday,Saturday,Sunday" />
<WindowStart>00:05</WindowStart>
<WindowEnd>23:55</WindowEnd>
</Trigger>
What's wrong with this code? I want to insert an image into a table, but when I execute this code the image field ends up NULL.
I tried it in MySQL Workbench, executing:
CREATE TABLE image(keyh int, img blob);
INSERT INTO image VALUES(1, load_file('d:\Picture\cppLogo.png'));
To use this function, the file must be located on the server host, you
must specify the full path name to the file, and you must have the
FILE privilege. The file must be readable by all and its size less
than max_allowed_packet bytes. If the secure_file_priv system variable
is set to a nonempty directory name, the file to be loaded must be
located in that directory.
If the file does not exist or cannot be read because one of the
preceding conditions is not satisfied, the function returns NULL.
http://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_load-file
What can you do?
Check which user mysql is running with, and make sure the file is readable by that user. Make sure the security settings allow the file to be read and it is not of greater size than max_allowed_packet.
See SHOW VARIABLES LIKE 'max_allowed_packet'.
To me, it looks like the file is on your localhost and you are trying to upload it. This is not possible using LOAD_FILE(); the file must already be on the server.
The issue can also be caused by your Windows directory separator character \ (like RiggsFolly said), which is treated as an escape character instead; switch to Unix-style / then:
LOAD_FILE('D:/Picture/cppLogo.png')
Or your image is larger than a BLOB field can hold, like Balazs Vago said.
I found that the correct syntax is the following:
C:/wamp/binsql5.5.20/data/56VRLRFE.jpg
not this:
C:\wamp\binsql5.5.20\data\56VRLRFE.jpg
Thanks, everyone, for all your answers :D
Open your MySQL command-line client, log in as the root user, and type
mysql> SHOW VARIABLES LIKE "secure_file_priv";
This will show you the secure path used by MySQL to access files, something like
+------------------+-----------------------+
| Variable_name | Value |
+------------------+-----------------------+
| secure_file_priv | /var/lib/mysql-files/ |
+------------------+-----------------------+
You can either paste the files inside this folder or change the "secure_file_priv" variable value to an empty string so that it can read files from anywhere.
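Note that secure_file_priv is read-only at runtime, so it has to be set in the server's option file and the server restarted. A minimal sketch, assuming a my.cnf / my.ini you control:

[mysqld]
# Empty value: allow LOAD_FILE() / LOAD DATA INFILE from any directory
secure_file_priv = ""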
On Windows the fundamental problem is that MySQL, by default, runs as a Windows service under the Network Service account, which means there are only a few file locations the server can access. Thus, for load_file to work, the file must be placed in a folder on the server that the service can read. There seems to be no documentation on this. In my investigation, the only folder that works with load_file is C:\ProgramData\MySQL\MySQL Server 8.0\Uploads
Run a query to test the load...
select load_file('C:\\ProgramData\\MySQL\\MySQL Server 8.0\\Uploads\\1.txt') ;
Note that on Windows you have to use either doubled backslashes (\\) or forward slashes (/) to separate the path elements. This will return NULL on failure, otherwise the contents of the file.
Now assume a table named db.image with columns source and image, where source is a character column and image is a BLOB. The command to load a.jpg into the table would be
insert into db.image (source,image) values ('a.jpg',load_file('c:/programdata/mysql/mysql server 8.0/uploads/a.jpg'));
Or store it directly, without a separate folder name, for example:
create table myimg(id int, image mediumblob);
insert into myimg values(101, load_file("E://xyz.png"));
I have a shapefile with 80,000 polygons that are grouped by a specific field called "OTA".
I want to convert each shapefile (its attribute table) to an .mdb database (not a Personal Geodatabase) containing one table with the same name as the shapefile and a given field structure.
In the code I used, I had to load two new modules in Python:
pypyodbc and adodbapi
The first module is used to create the .mdb file for each shapefile, and the second to create the table in the .mdb and fill it with the data from the attribute table of the shapefile.
The code I came up with is the following:
import arcpy  # needed for the SearchCursor calls below
import pypyodbc
import adodbapi

Folder = ur'C:\TestPO'            # Folder to save the mdbs
FD = Folder + ur'\27ALLPO.shp'    # Shapefile
Map = u'PO'                       # Map type
N = u'27'                         # Prefecture

# Distinct OTA values, sorted
OTAList = sorted(set([row[0] for row in arcpy.da.SearchCursor(FD, ('OTA'))]))
cnt = 0

for OTAvalue in OTAList:
    cnt += 1
    dbname = N + OTAvalue + Map
    # Create an empty .mdb and connect to it through the Jet OLE DB provider
    pypyodbc.win_create_mdb(Folder + '\\' + dbname + '.mdb')
    conn_str = (r"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + Folder + "\\" + dbname + ur".mdb;")
    conn = adodbapi.connect(conn_str)
    crsr = conn.cursor()
    SQL = "CREATE TABLE ["+dbname+"] ([FID] INT,[AREA] FLOAT,[PERIMETER] FLOAT,[KA_PO] VARCHAR(10),[NOMOS] VARCHAR(2),[OTA] VARCHAR(3),[KATHGORPO] VARCHAR(2),[KATHGORAL1] VARCHAR(2),[KATHGORAL2] VARCHAR(2),[LABEL_PO] VARCHAR(8),[PHOTO_45] VARCHAR(14),[PHOTO_60] VARCHAR(10),[PHOTO_PO] VARCHAR(8),[POLY_X_CO] DECIMAL(10,3),[POLY_Y_CO] DECIMAL(10,3),[PINAKOKXE] VARCHAR(11),[LANDTYPE] DECIMAL(2,0));"
    crsr.execute(SQL)
    conn.commit()
    # Copy every record with the current OTA value into the new table
    with arcpy.da.SearchCursor(FD,['FID','AREA','PERIMETER','KA_PO','NOMOS','OTA','KATHGORPO','KATHGORAL1','KATHGORAL2','LABEL_PO','PHOTO_45','PHOTO_60','PHOTO_PO','POLY_X_CO','POLY_Y_CO','PINAKOKXE','LANDTYPE'],'"OTA" = \'{}\''.format(OTAvalue)) as cur:
        for row in cur:
            crsr.execute("INSERT INTO "+dbname+" VALUES ("+str(row[0])+","+str(row[1])+","+str(row[2])+",'"+row[3]+"','"+row[4]+"','"+row[5]+"','"+row[6]+"','"+row[7]+"','"+row[8]+"','"+row[9]+"','"+row[10]+"','"+row[11]+"','"+row[12]+"',"+str(row[13])+","+str(row[14])+",'"+row[15]+"',"+str(row[16])+");")
    conn.commit()
    crsr.close()
    conn.close()
    print (u'«'+OTAvalue+u'» ('+str(cnt)+u'/'+str(len(OTAList))+u')')
Executing this code took about 5 minutes to complete the task for about 140 mdbs.
As you can see, I execute an "INSERT INTO" statement for each record of the shapefile.
Is this the correct way (and probably the fastest) or should I collect all the statements for each "OTA" and execute them all together?
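For comparison, here is a hedged sketch of the batched alternative: build the parameter rows for one OTA value and send them in a single executemany call with a parameterized INSERT (adodbapi uses qmark-style placeholders). It assumes the same dbname, FD, OTAvalue, crsr and conn objects as in the code above; whether it is actually faster than per-row execute calls would need to be measured.

# Sketch: batch all records for the current OTA value into one executemany call
fields = ['FID','AREA','PERIMETER','KA_PO','NOMOS','OTA','KATHGORPO',
          'KATHGORAL1','KATHGORAL2','LABEL_PO','PHOTO_45','PHOTO_60',
          'PHOTO_PO','POLY_X_CO','POLY_Y_CO','PINAKOKXE','LANDTYPE']
placeholders = ','.join('?' * len(fields))
insert_sql = "INSERT INTO ["+dbname+"] VALUES ("+placeholders+")"

with arcpy.da.SearchCursor(FD, fields, '"OTA" = \'{}\''.format(OTAvalue)) as cur:
    rows = [tuple(row) for row in cur]

crsr.executemany(insert_sql, rows)   # one batched call instead of one execute per record
conn.commit()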
I don't think anyone's going to write your code for you, but if you try some VBA yourself, and tell us what happened and what worked and what you're stuck on, you'll get a great response.
That said, to start with I don't see any reason to use VB6 when you can use VBA right inside your .mdb file.
Use the Dir() function (and possibly FileSystemObject) to loop through all DBFs in a given folder, or use the FileDialog object to select multiple files in one go.
Then process each file with the DoCmd.TransferDatabase command:

DoCmd.TransferDatabase _
    TransferType:=acImport, _
    DatabaseType:="dBASE III", _
    DatabaseName:="your-dbf-filepath", _
    ObjectType:=acTable, _
    Source:="Source", _
    Destination:="your-newtbldbf"
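For illustration, a rough sketch that ties the Dir() loop and the TransferDatabase call together (the folder path C:\YourDbfFolder\ and the "dBASE III" version string are assumptions; adjust them to your files):

Public Sub ImportAllDbfs()
    ' Rough sketch: import every .dbf in a folder as a table in the current database.
    Dim strFolder As String
    Dim strFile As String
    Dim strTable As String

    strFolder = "C:\YourDbfFolder\"          ' assumption: folder holding the .dbf files
    strFile = Dir(strFolder & "*.dbf")

    Do While Len(strFile) > 0
        strTable = Left(strFile, Len(strFile) - 4)   ' file name without the .dbf extension
        DoCmd.TransferDatabase _
            TransferType:=acImport, _
            DatabaseType:="dBASE III", _
            DatabaseName:=strFolder, _
            ObjectType:=acTable, _
            Source:=strTable, _
            Destination:=strTable
        strFile = Dir                         ' next matching file
    Loop
End Sub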
Finally process each dbf import with a make table query
Look at results and see what might have to be changed based on field types before and after.
Then .... edit your post and let us know how it went
In theory you could do something like this by searching the directory the DBF files reside in, writing those filenames to a table, then loop through the table and, for each filename, scan the DBF for tables and their fieldnames/datatypes and create those tables in your MDB. You could also bring in all the data from the tables, all within a series of loops.
In theory, you could.
In practice, you can't. And you can't, because DBF and MDB support different data types that aren't compatible.
I suppose you could create a "crosswalk" table such that for each datatype in DBF there is a corresponding, hand-picked datatype in MDB and use that when you're creating the table, but it's probably going to either fail to import some of the data or import corrupted data. And that's assuming you can open a DBF for reading the same way you can open an MDB for reading. Can you run OpenDatabase on a DBF from inside Access? I don't even have the answer to that.
I wouldn't recommend that you do this. The reason that you're doing it is because you want to keep the structure as similar as possible when migrating from dBase/FoxBase to Access. However, the file structure is different between them.
As you are aware, each .DBF ("Database file") file is a table, and the folder or directory in which the .DBF files reside constitutes the "database". With Access, all the tables in one database are in one .MDB ("Microsoft Database") file.
If you try to put each .DBF file in a separate .MDB file, you will have no end of trouble getting the .MDB files to interact. Access treats different .MDB files as different databases, not different tables in the same database, and you will have to do strange things like link all the separate databases just to have basic relational functionality. (I tried this about 25 years ago with Paradox files, which are also a one-file-per-table structure. It didn't take me long to decide it was easier to get used to the one-file-per-database concept.) Do yourself a favor, and migrate all of the .DBF files in one folder into a single .MDB file.
As for what you ought to do with your code, I'd first suggest that you use ADO rather than DAO. But if you want to stick with DAO because you've been using it, then you need to have one connection to the dBase file and another to the Access database. As far as I can tell, you don't have the dBase connection. I've never tried what you're doing before, but I doubt you can use a SQL statement to select directly from a .dbf file in the way you're doing. (I could be wrong, though; Microsoft has come up with stranger things over the years.)
I have a user inside the pluggable DB (PDB21), username 'JEWBDEV' and password 'abc123'. I am not able to export a backup .dmp file; please suggest the steps.
Create a Data Pump directory and grant permissions to the user:
SQL> CREATE or REPLACE DIRECTORY dpump_dir as '/home/user1/dumpfiles';
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir TO JEWBDEV;
Set the environment (outside sqlplus):
setenv DATA_PUMP_DIR DPUMP_DIR
Export a table (outside sqlplus):
expdp JEWBDEV/abc123@pdb21 tables=TABLE1 dumpfile=table1.dmp logfile=jewbdev_exp.log
Note: in the above examples some of the values, like TABLE1 and /home/user1/dumpfiles, are only examples; change them according to your requirements.
Export everything under a schema/user (outside sqlplus):
To export everything under a user you can skip giving the table names; the format is the following:
expdp system/manager@<pluggable_database> file=<user>.dmp owner=<user>
Example:
expdp system/manager@pdb21 file=JEWBDEV.dmp owner=JEWBDEV
Also, check this SO question to get a better understanding of expdp.
Working example
The following steps were verified at my end:
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 CDB1_PDB1 READ WRITE NO
SQL> alter session set container=cdb1_pdb1;
Session altered.
SQL> CREATE or REPLACE DIRECTORY dpump_dir as '$T_WORK';
Directory created.
The following should be run outside sqlplus:
expdp system/manager@cdb1_pdb1 file=scott.dmp owner=SCOTT
In your case it will be:
expdp system/manager@cdb1_pdb21 file=JEWBDEV.dmp owner=JEWBDEV
So, make sure your PDB name is CDB1_PDB21 when you query show pdbs. If the PDB name is PDB21, then the connect string should change to the following:
expdp system/manager@pdb21 file=JEWBDEV.dmp owner=JEWBDEV
Also, note that we are exporting the user/schema JEWBDEV with the SYSTEM user.
I'm using VSTS Database Edition GDR Version 9.1.31024.02
I've got a project where we will be creating multiple databases with identical schema, on the fly, as customers are added to the system. It's one DB per customer. I thought I should be able to use the deploy script to do this. Unfortunately I always get the full filenames specified on the CREATE DATABASE statement. For example:
CREATE DATABASE [$(DatabaseName)]
ON
PRIMARY(NAME = [targetDBName], FILENAME = N'$(DefaultDataPath)targetDBName.mdf')
LOG ON (NAME = [targetDBName_log], FILENAME = N'$(DefaultDataPath)targetDBName_log.ldf')
GO
I'd expected something more like this
CREATE DATABASE [$(DatabaseName)]
ON
PRIMARY(NAME = [targetDBName], FILENAME = N'$(DefaultDataPath)$(DatabaseName).mdf')
LOG ON (NAME = [targetDBName_log], FILENAME = N'$(DefaultDataPath)$(DatabaseName)_log.ldf')
GO
Or even
CREATE DATABASE [$(DatabaseName)]
I'm not going to be running this on an ongoing basis, so I'd like to make it as simple as possible for the next guy. There are a bunch of options for deployment in the project properties, but I can't get this to work the way I'd like.
Anyone know how to set this up?
Better late than never, but I've figured out how to get the $(DefaultDataPath)$(DatabaseName) file names from your second example.
The SQL you're showing in your first code snippet suggests that you don't have scripts for the database files in your VSTS:DB project, perhaps because you deliberately excluded them from any schema comparisons you've done. I found it a little counter-intuitive, but the solution is to let VSTS:DB script the MDF and LDF in your development environment, then edit those scripts to use the SQLCMD variables.
In your database project, go to the folder Schema Objects > Database Level Objects > Storage > Files. In there, add these two files:
Database.sqlfile.sql
ALTER DATABASE [$(DatabaseName)]
ADD FILE (NAME = [$(DatabaseName)],
FILENAME = '$(DefaultDataPath)$(DatabaseName).mdf',
SIZE = 2304 KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024 KB)
TO FILEGROUP [PRIMARY];
Database_log.sqlfile.sql
ALTER DATABASE [$(DatabaseName)]
ADD LOG FILE (NAME = [$(DatabaseName)_log],
FILENAME = '$(DefaultDataPath)$(DatabaseName)_log.ldf',
SIZE = 1024 KB, MAXSIZE = 2097152 MB, FILEGROWTH = 10 %);
The full database creation script that VSTS:DB, or for that matter VSDBCMD.exe, generates will now use the SQLCMD variables for naming the MDF and LDF files, allowing you to specify them on the command line, or in MSBuild.
We do this using a template database that we back up, copy, and restore as new customers are brought online. We don't do any of the schema creation with scripts, but with a live, empty DB.
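For illustration, a minimal sketch of bringing a new customer online that way, assuming a template backup at C:\Backups\Template.bak whose logical file names are Template and Template_log (the paths, database name, and logical names are all assumptions):

RESTORE DATABASE [Customer123]
FROM DISK = N'C:\Backups\Template.bak'
WITH MOVE N'Template'     TO N'C:\Data\Customer123.mdf',
     MOVE N'Template_log' TO N'C:\Data\Customer123_log.ldf';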
Hmm, well it seems that the best answer so far (given the overwhelming response) is to edit the file after the fact... Still looking.