I want to import an XML file into phpMyAdmin.
I have an XML file with 75,000 lines, so inserting the values individually is out of the question. When I try to import the file as XML into the database, I get the following message:
Import has been successfully finished, 0 queries executed.
The following structures have either been created or altered. Here you can:
View a structure's contents by clicking on its name.
Change any of its settings by clicking the corresponding "Options" link.
Edit structure by following the "Structure" link.
wp_db (Options)
(xmlfile.xml)
-----------------------------------------------------------------------
Notice in ./libraries/plugins/import/ImportXml.php#158
Undefined index: pma
Backtrace
./import.php#652: PMA\libraries\plugins\import\ImportXml->doImport(array)
phpMyAdmin is running under MAMP. A shortened version of the XML file looks like this:
<Report>
<Row1>
<Reference>123</Reference>
<Nature>Outside</Nature>
<Disponibility>Full</Disponibility>
<State>In progress</State>
<Person>Jon</Person>
<Seller></Seller>
</Row1>
<Row2>
<Reference>123</Reference>
<Nature>Outside</Nature>
<Disponibility>Full</Disponibility>
<State>In progress</State>
<Person>Jon</Person>
<Seller></Seller>
</Row2>
</Report>
If a direct import isn't possible, what is the easiest way to do this?
Thanks!
phpMyAdmin supports importing XML only if it follows a specific format. You can see the exact format by exporting a table as XML from phpMyAdmin. You would need to programmatically transform your existing file into that supported format.
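If rewriting into phpMyAdmin's XML dialect is more trouble than it's worth, one practical alternative is to convert the file into plain SQL and import that instead. A minimal sketch in Python; the target table name report and the output file name are assumptions based on the sample above:

import xml.etree.ElementTree as ET

# Turn each <RowN> element into one INSERT statement.
tree = ET.parse("xmlfile.xml")
with open("xmlfile.sql", "w") as out:
    for row in tree.getroot():  # <Row1>, <Row2>, ...
        cols = [child.tag for child in row]
        vals = [(child.text or "").replace("'", "''") for child in row]
        out.write("INSERT INTO report (%s) VALUES (%s);\n"
                  % (", ".join(cols), ", ".join("'%s'" % v for v in vals)))

The resulting .sql file can then be loaded through phpMyAdmin's SQL import tab.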
I'm importing a .TSV file, with the first row holding the variable names and the first column holding IDs, into SPSS using syntax, but I keep getting a "Failure opening file" error in my output. This is my code so far:
GET DATA
/TYPE=TXT
/FILE=\filelocation\filename.tsv
/DELCASE=LINE
/DELIMTERS="/t"
/QUALIFIER=''
/ARRANGEMENT=DELIMITED
/FIRSTCASE=2
/IMPORTCASE=ALL
/VARIABLES=
/MAP
RESTORE.
CACHE.
EXECUTE.
SAVE OUTFILE = "newfile.sav"
I think I'm having an issue with the delimiters or qualifier subcommand. I'm also wondering if I should list the variables under the VARIABLES subcommand. Any advice would be helpful. Thanks!
The GET DATA command you cite above has an empty /VARIABLES= subcommand.
If you used the File -> Import Data -> Text Data wizard, it would have populated this subcommand for you. If you are writing the GET DATA syntax yourself, then you have to supply that list of field names yourself.
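For reference, a corrected sketch of the command; the variable names and formats below are placeholders for your actual fields, and note that the original syntax also misspells /DELIMITERS and writes the tab as "/t" instead of "\t":

GET DATA
  /TYPE=TXT
  /FILE="\filelocation\filename.tsv"
  /DELCASE=LINE
  /DELIMITERS="\t"
  /QUALIFIER='"'
  /ARRANGEMENT=DELIMITED
  /FIRSTCASE=2
  /IMPORTCASE=ALL
  /VARIABLES=
    id A10
    var1 F8.2
    var2 F8.2
  /MAP.
EXECUTE.
SAVE OUTFILE="newfile.sav".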
I'm trying to import an XML file into a MySQL table using the LOAD XML statement:
LOAD XML INFILE 'test.xml' INTO TABLE edge_delete ROWS IDENTIFIED BY '<edge>';
The XML file is structured like this:
<netstate xmlns:xsi=....>
<timestep time="2">
<edge id="10">
<lane id="10_0">
<vehicle id="veh1" pos="4.60" speed="0.00"/>
</lane>
</edge>
</timestep>
</netstate>
The Problem:
Every node level has an attribute named "id", and the import does not distinguish between the levels.
Each node level should map to its own column in my SQL table: edge | lane | vehicle id | ...
Thank you for your help.
A solution which worked for me:
Use a Python script that replaces each "id" attribute with a distinct attribute name (e.g. "edge" for the edge level). The script does a plain string replacement. Maybe not the most efficient way, but it solves the problem.
Now I can import all columns of the XML file into the MySQL table.
Another workaround is a text editor with a find/replace function, but that did not work properly with the tools I found for large XML files (> 1 GB).
The script can be found here:
https://studiofreya.com/2016/11/17/replace-string-in-xml-file-with-python/#comment-86628
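A minimal sketch of the same idea (not the linked script), assuming the element and file names from the sample above; for multi-gigabyte files a streaming parser such as iterparse would be preferable:

import xml.etree.ElementTree as ET

# Rename every "id" attribute to "<tag>_id" so the levels stay
# distinguishable after LOAD XML, e.g. <edge id="10"> -> <edge edge_id="10">.
tree = ET.parse("test.xml")
for elem in tree.iter():
    if "id" in elem.attrib:
        elem.set(elem.tag + "_id", elem.attrib.pop("id"))
tree.write("test_renamed.xml", encoding="utf-8", xml_declaration=True)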
My schema.ini file is being ignored. I get the same results whether or not a schema.ini file is in the same folder as my tab file: all of the columns end up in a single column. I am using a schema.ini because I am importing tab-delimited files, and the results make perfect sense if Access is treating them as comma-delimited files.
So my conclusion is that the schema.ini file is just being ignored.
I am running Access from a .Net program using the Microsoft Access 14.0 Object Library.
I am using this command from .Net:
Access.DoCmd.TransferText( Microsoft.Office.Interop.Access.AcTextTransferType.acImportDelim, , TableName, TabFile, HasFieldNames)
Here is my schema.ini file, not that it matters since it is being completely ignored:
[impacts.txt]
Format=TabDelimited
ColNameHeader=True
MaxScanRows=0
Clues? Thanks!
EDIT:
I tried running this from within an Access Module with the same results.
I tried editing the registry to change the Format value there. Same results.
Consider an action query, either append or make-table, since schema.ini files can work directly in an Access query of a text file. The examples below assume the .ini file is in the same directory as the text file.
Append query:
INSERT INTO mytableName
SELECT * FROM [text;Database=C:\Path\To\Text\File].[impacts.txt];

Make-table query:
SELECT * INTO newtableName
FROM [text;Database=C:\Path\To\Text\File].[impacts.txt];
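Since the import is being driven from .Net anyway, either statement can be executed through the same interop object, for example (a sketch using the table and path from the statements above):

Access.DoCmd.RunSQL("INSERT INTO mytableName SELECT * FROM [text;Database=C:\Path\To\Text\File].[impacts.txt]")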
I'm using MySQL Workbench 6.3 and have two tables. My task is to load XML files from a web server into these tables. One of the files looks like this:
<candidates>
<publicationDate>2016-04-26 15:00:00</publicationDate>
<candidate>
<name>John</name>
<party>Party1</party>
</candidate>
<candidate>
<name>Tom</name>
<party>Party2</party>
</candidate>
</candidates>
.....
and goes on like this for 13 records. At the very beginning I got a notification that the given XML file contains no style information. The accompanying information also says that the default format of the returned data is JSON, so to get XML the request must include an Accept header with the value application/xml.
I'm a complete beginner with MySQL and will be very thankful for any advice.
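For what it's worth, requesting the feed as XML only requires sending that Accept header; a minimal sketch using Python's requests library, with a placeholder URL:

import requests

# Placeholder URL; substitute the actual feed location on the web server.
resp = requests.get("https://example.com/candidates",
                    headers={"Accept": "application/xml"})
resp.raise_for_status()
with open("candidates.xml", "wb") as f:
    f.write(resp.content)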
I need to export varbinary data to files. But when I do it using the Export Column transformation in SSIS, the exported files are corrupt: there are a few junk characters at the start of each file, and after removing them the file opens fine.
A similar post about BCP says that these characters specify the data length.
I would like to know how to address this issue in SSIS.
Thanks
The Export Column transformation is used for writing the varbinary data to files. I have tried something similar using AdventureWorks, which has image-type varbinary data.
The following query is used as the source query. I modified it to build the full path for writing the image files, since the table stores only the file names.
SELECT [ProductPhotoID]
      ,[ThumbNailPhoto]
      ,'D:\SSISTesting\ThumbnailPhotos\' + [ThumbnailPhotoFileName] AS ThumbnailPhotoPath
      ,[LargePhoto]
      ,'D:\SSISTesting\LargePhotos\' + [LargePhotoFileName] AS LargePhotoPath
      ,[ModifiedDate]
FROM [Production].[ProductPhoto]
I used the Export Column transformation (also available in 2005 and 2008), configured so that each varbinary column ([ThumbNailPhoto], [LargePhoto]) is the Extract Column and the corresponding computed path is its File Path Column.
The rest of the columns are mapped to the destination.
After running the package, all the image files are written into the respective folders (D:\SSISTesting\ThumbnailPhotos and D:\SSISTesting\LargePhotos).
Hope this helps!