I am using MS Access and have to create a file from a query. Some of the fields are quite large, and the program I am importing into only allows a maximum length of 65 characters. How do I insert a line feed into those fields? The information is already in Access, so I can't do it by data validation on the fields.
Try this:
SELECT IIf(Len(LongLine) > 65,
           Left(LongLine, 65) & Chr(13) & Chr(10) & Right(LongLine, Len(LongLine) - 65),
           LongLine) AS SplitLine
FROM YourTable
The IIf guards against fields that are already 65 characters or shorter, where Right() would otherwise be called with a negative length. Note this inserts only one break; fields longer than 130 characters would need further splits.
I am interested in how to insert a large (for example, 10 GB) .csv file into a MySQL database. I was using pandas and PySpark to read the csv file, then adding the csv header to a list (if the csv file did not have a header, I added one with Spark). Then I parsed the list and replaced characters to build the MySQL code:
mydb.cursor().execute("CREATE TABLE " + table_name + " (id INT NOT NULL AUTO_INCREMENT," + column_names + ", PRIMARY KEY (id))")
Then I added all the rows (without the header line) to another list, and parsed it again to replace 'name' with `name`. From that list I was building
query = "insert into `"+ table_name +"` (" + column_names + ") values (" + row_value + ")"
This worked perfectly for small csv files, but for large files the process crashes because it runs out of memory.
So what about large csv files? Is there a workaround for inserting large csv file data into MySQL, or do you have code examples for working with large csv files that avoid the memory issue?
I thought that splitting the large csv file into smaller ones and inserting those might be easier on memory, but maybe there are better ways to insert data of that size into MySQL.
Thank you.
If you do want to use Python and Pandas rather than the faster route of having MySQL read the csv file directly, then use the Pandas syntax for writing to SQL. If you can load the csv file into memory, it ought to be fine for loading into MySQL as well. Here's a toy example of code for writing to MySQL; there are many options you may want, so see https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_sql.html for details.
df.to_sql(name=table, con=engine, if_exists='append', index=False, chunksize=10000)
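If the whole file does not fit into memory, you can keep the same to_sql call but read the csv in chunks so that only one chunk is in memory at a time. A minimal sketch, assuming SQLAlchemy with a MySQL driver is installed and that the connection URL, file name, and table name (mysql+pymysql://..., data.csv, your_table) are placeholders for your own:
import pandas as pd
from sqlalchemy import create_engine

# placeholder connection string -- adjust user, password, host and database
engine = create_engine("mysql+pymysql://user:password@localhost/testdb")

# read the csv in fixed-size chunks so only one chunk is held in memory
for chunk in pd.read_csv("data.csv", chunksize=100_000):
    # append each chunk to the target table; pandas issues the INSERTs
    chunk.to_sql(name="your_table", con=engine, if_exists="append",
                 index=False, chunksize=10000)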
There may be a few issues with what you're doing; in any case, what I offer will be the same.
First of all, if you're trying to process single-row inserts, the SQL server may not be able to keep up with the rate of requests and may crash, so a timer set for a short delay might help.
Second of all, a single INSERT command can handle multiple rows, so instead of pushing one row at a time to the table, you can push them in batches of thousands.
I would suggest something like the following:
import time
import pandas as pd

rows = pd.DataFrame()  # your DataFrame to be inserted

insert_header = "INSERT INTO table_name "
insert_cols = "(" + ",".join(rows.columns) + ")"

# build one "('v1','v2',...)," string per row
to_sql = rows.apply(
    lambda x: "('" + "','".join(x.fillna(0).astype(str)) + "'),",
    axis=1)

counter = to_sql.size
jump = 1000  # rows per INSERT statement

for i in range(0, counter, jump):
    print(i)  # progress indicator
    to = min(i + jump, counter)
    sql_values = " VALUES " + ''.join(to_sql.iloc[i:to])
    exec_sql = insert_header + insert_cols + sql_values
    exec_sql = exec_sql[:-1]  # drop the trailing comma
    cursor.execute(exec_sql)  # cursor/connection come from your MySQL connection
    connection.commit()
    time.sleep(2)  # small pause so the server isn't flooded
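Building the SQL string by hand like this can break if any value contains a quote character (and it is open to SQL injection). A safer variation, reusing the rows DataFrame and the cursor/connection from the snippet above and assuming a DB-API driver such as mysql.connector or PyMySQL, is to let the driver do the quoting with a parameterized executemany:
# build a parameterized INSERT and let the driver quote the values
placeholders = ", ".join(["%s"] * len(rows.columns))
columns = ", ".join(rows.columns)
sql = "INSERT INTO table_name (" + columns + ") VALUES (" + placeholders + ")"

# send the rows in batches of 1000 tuples
batch = 1000
values = [tuple(r) for r in rows.fillna(0).astype(str).to_numpy()]
for i in range(0, len(values), batch):
    cursor.executemany(sql, values[i:i + batch])
    connection.commit()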
There is a requirement for all the values exported from SQL Server into a flat file (.csv) to be enclosed in double quotation marks, e.g. 123 should be written as "123".
I am having such difficulty with this. I tried a derived column with the expression "\"\"" + [columnName] + "\"\"" but it does not work at all.
Please be advised I need the column headers to have the same "" as well.
Many thanks!
If you mean that you want to export data from SQL Server into a csv file using SSIS, and that you want the values to be double-quoted, you just need to set the Text Qualifier property of the Flat File Connection Manager used by your destination to a double quote (") character.
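For example, with the Text Qualifier set to ", a row that would otherwise be written as 123,John,Smith is exported as:
"123","John","Smith"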
I am trying to import a csv file containing 22,000 rows into a MySQL database, but after some 2,000 records it shows the error "Invalid column count in CSV input on line 2369."
It might not be caused by the line that you have shown.
It might be because some of the names on other lines contain an apostrophe (') within the name.
If you remove those, everything should pass.
You might want to use a text editor to find and remove the stray ' characters; if you need to locate the bad rows automatically, see the sketch below.
Thanks!
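If it is hard to spot the offending rows by hand in a 22,000-row file, a quick way to locate them is to count the fields on each line with Python's csv module. This is only a diagnostic sketch; it assumes the file is named data.csv, is comma-separated, and is UTF-8 encoded (adjust as needed):
import csv

# report every row whose field count differs from the header's
# note: the reported numbers count csv records, so they can drift slightly
# if any field contains an embedded newline
with open("data.csv", newline="", encoding="utf-8") as f:
    reader = csv.reader(f)
    header = next(reader)
    expected = len(header)
    for lineno, row in enumerate(reader, start=2):
        if len(row) != expected:
            print(f"line {lineno}: {len(row)} fields (expected {expected})")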
I'm currently playing around with phpMyAdmin and I have encountered a problem. When importing my CSV into phpMyAdmin it's rounding the numbers. I have set the column to be a float and the column in Excel to be a Number (Also tried text/General) to no avail. Has anyone else encountered this issue and found a viable work-around?
A second question: is it possible to upload the CSV file so that the Excel column names are matched to the column names in phpMyAdmin and the data is entered into the correct columns?
Your file should look like this (the decimal fields are of General type): [screenshot of the xls sheet]
Save it as CSV. The file will probably be saved with ; as the separator.
This is for a new table:
Open phpMyAdmin, choose your database, click Import and select the file to upload.
Change the format to CSV if it is not already selected.
In the format-specific options, set "Columns separated with:" to ;
Make sure the checkbox "The first line of the file contains the table column names (if this is unchecked, the first line will become part of the data)" is SELECTED.
Click Go.
A new table will be created with the structure according to the first line of the CSV.
This is for an existing table:
Open phpMyAdmin, choose your database, CHOOSE YOUR TABLE which matches the structure of the imported file, click Import and select the file to upload.
Change the format to CSV if it is not already selected.
In the format-specific options, set "Columns separated with:" to ;
Set the skip number of queries to 1 (this will skip the first line with the column names).
Click Go.
The selected table, which has the same structure as the CSV, will be updated and the rows from the CSV inserted.
// connect to the database
$mysqli = new mysqli('localhost', 'root', '', 'testdB');
// open the csv file
$fp = fopen('data.csv', 'r');
// read the first line to get the column names
$header = fgetcsv($fp);
$cols = array();
for ($field = 0; $field < count($header); $field++) {
    $cols[] = '`' . $header[$field] . '`';   // backtick-quote the column names
}
$col_ins = implode(', ', $cols);
// read the remaining lines and insert each row into the database
while ($data = fgetcsv($fp)) {
    $values = array();                       // reset for every row
    for ($field = 0; $field < count($data); $field++) {
        // escape each value and wrap it in single quotes
        $values[] = "'" . $mysqli->real_escape_string($data[$field]) . "'";
    }
    $data_ins = implode(', ', $values);
    $query = "INSERT INTO `table_name` (" . $col_ins . ") VALUES (" . $data_ins . ")";
    $mysqli->query($query);
}
fclose($fp);
echo 'Imported...';
I've had the same issue.
I solved it by changing the separator between the integer part and the decimal part from a comma to a point,
i.e.
365,40 to 365.40
That worked for me.
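If there are many such values, you can also convert the whole file before uploading it. A minimal sketch with pandas, assuming the file is named data.csv, uses ; as the field separator, and that pandas is available (the names are placeholders):
import pandas as pd

# read the csv, telling pandas that the decimal separator is a comma
df = pd.read_csv("data.csv", sep=";", decimal=",")

# write it back out; to_csv uses a point as the decimal separator by default
df.to_csv("data_clean.csv", sep=";", index=False)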
Further to an earlier question here
Fast import of csv file into access database via VB.net 2010
I tried using the following code in my .NET application (VB.NET 2010)
cmd.CommandText =
"SELECT F1 AS id, F2 AS firstname " &
"INTO MyNewTable " &
"FROM [Text;FMT=Delimited;HDR=No;CharacterSet=850;DATABASE=C:\__tmp].table1.csv;"
and it seemed to work, but when I opened the database in Access the table showed garbled characters.
I think maybe CharacterSet=850 is not the correct setting for my CSV file. I tried searching for a character set list, but I couldn't find it.
My .csv file uses UTF-8. What should I use for the CharacterSet number?
The CharacterSet number for UTF-8 is CharacterSet=65001, so your CommandText should be
cmd.CommandText =
"SELECT F1 AS id, F2 AS firstname " &
"INTO MyNewTable " &
"FROM [Text;FMT=Delimited;HDR=No;CharacterSet=65001;DATABASE=C:\__tmp].table1.csv;"
Note also that this approach requires that the UTF-8 file be saved without a BOM (byte order mark), which is unusual for the Windows platform. (If the file does include a BOM then the first record will be imported as blank fields.)
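If your editor saved the UTF-8 file with a BOM, you can strip it before running the import. A small helper sketch in Python (the path matches the example above; adjust it for your file):
# strip a UTF-8 byte order mark from the start of the file, if one is present
path = r"C:\__tmp\table1.csv"

with open(path, "rb") as f:
    data = f.read()

bom = b"\xef\xbb\xbf"
if data.startswith(bom):
    with open(path, "wb") as f:
        f.write(data[len(bom):])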