I have set up a job within SSMS to run every day; it executes queries against two tables and stores the results in .csv text files in a folder. It works fine, but I would like the files to be comma-delimited text files; instead it is storing them with spaces between the columns, which unnecessarily increases the file size.
I went to Tools/Options etc. and set the SQL query text file output format to comma delimited, and that works fine when I run a query manually and choose to save the results, but not within the actual job. Any suggestions?
In fact, if I could get the results from both queries stored into a single text file, that would be even better.
I think you may want to use one of the following (a SQLCMD example is sketched below):
SSIS, sending the output to a file in CSV format
BCP
SQLCMD
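For instance, a single SQLCMD call run as a CmdExec job step could execute both queries and write one comma-delimited file. This is only a rough sketch; the server, database, queries, and output path are placeholders, and the switches should be checked against your sqlcmd version:

    sqlcmd -S YourServer -d YourDatabase -E -Q "SELECT * FROM dbo.Table1; SELECT * FROM dbo.Table2" -s "," -W -o "C:\Exports\results.csv"

Here -s "," sets the column separator to a comma, -W trims the trailing spaces that inflate the file size, and -o sends both result sets to the single output file.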
I'm actually having a problem generating a CSV file from a SELECT statement that outputs a lot of rows (close to 10 M). I need to export the result of this statement to a CSV file, and since I only have a Citrix VM, I'm being disconnected every 2h, which doesn't give me enough time to execute my query.
My thought was to start the query and use UTL_FILE to generate a CSV file on our server. But now I am facing security issues: I only have read access and cannot create procedures (as in this Ask Tom article: https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:9537857800346182134).
I want to know if you guys know how I could generate a CSV file on my server just by executing a query and letting it run?
Thanks a lot!
I've got a problem: I'm totally new to SQL and have to more or less learn it during an internship. I had to import huge txt files into a database in phpMyAdmin (it took me forever, but I managed it with LOAD DATA INFILE). Now my task is to find a way to check whether the data in the tables is the same as the data in the given txt files.
Is there any way to do this?
Have you tried exporting the data through phpMyAdmin using a different file format instead of .sql? phpMyAdmin gives you several choices, including CSV and OpenOffice spreadsheets. That would make your comparison easier. You could use Excel, sort the data, and you'd have a quicker compare.
The best way to do so is to load, and then extract.
Compare your extract with the original file.
Another way could be to count the number of lines in both the table and the file, then extract a few lines and verify that they exist in both. This is less precise.
But this has nothing to do with SQL; it is just test logic.
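As a rough sketch of that row-count check (the table and column names are placeholders):

    SELECT COUNT(*) FROM imported_table;
    -- compare the result with the line count of the original txt file,
    -- e.g. wc -l given_file.txt, minus the header line if the file has one

For the spot check, copy a few lines out of the txt file and confirm they come back from a query such as:

    SELECT * FROM imported_table WHERE some_column = 'value copied from the txt file';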
I have a 1.7 GB txt file (about 1.5 million rows) that is apparently formatted in some way into columns and rows, though I don't know the delimiter. I will need to be able to import this data into MySQL and MS SQL databases to run queries on it.
I can't even open it in notepad to see a sample of the data.
For future reference, how does one handle and manipulate very large data files? What file format is best? To my knowledge, Excel and CSV do not support unlimited numbers of rows.
You can use bcp in as below:
bcp yourtable in C:\Data\yourfile.txt -c -t, -S localhost -T
Since you know the column names from MySQL, you can create a table with that structure beforehand in SQL Server, along the lines sketched below.
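For example, the target table could be created first along these lines; the column names and types below are only placeholders and should mirror the columns of the original MySQL table:

    CREATE TABLE yourtable (
        id     INT,
        name   VARCHAR(255),
        amount DECIMAL(18, 2)
    );

After that, the bcp command above loads the comma-separated file straight into it.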
When I am exporting my query results from SQL Server 2008 to CSV or tab-delimited txt format, I always end up seeing extra records (that are not blank) when I open the exported file in Excel or import it into Access.
The SQL query results return 116623 rows
but when I export to CSV and open it with Excel I see 116640 records. I tried importing the CSV file into Access and I also see extra records.
The weird thing is that if I add up the totals in Excel up to row 116623, I get the correct totals, meaning I have the right data up to that point, but the extra 17 records after that are bad data, and I don't know how they are being added.
Does anyone know what might be causing these extra records/rows to appear at the end of my CSV file?
The way I am exporting is by right-clicking on the results and exporting to CSV (comma delimited) or txt (tab delimited) files, and both are causing the problem.
I would bet that in that huge number of rows you have some data with a carriage return internal to the record (such as an address field that includes a line break). Look for rows that have empty data in some of the columns where you would expect data. I usually reimport the file into a work table (with an identity column so you can identify which rows are near the bad ones) and then run queries on it to find the ones that are bad.
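A sketch of that kind of check, assuming the file has been reimported into a work table with an identity column (the table and column names here are placeholders):

    SELECT RowId, Address
    FROM   WorkTable
    WHERE  Address LIKE '%' + CHAR(13) + '%'
       OR  Address LIKE '%' + CHAR(10) + '%';

Any rows that come back contain an embedded carriage return or line feed, which is what splits a single record across two lines in the exported CSV.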
Actually, there is a bug in the export-results feature. After exporting the results, open the CSV file in a hex editor and look up the unique key of the last record. You will find it towards the end of the file. Find the 0D 0A for that record and delete everything that follows it. It's not Excel or Access; for some reason SQL just can't export a CSV without corrupting the end of the file.
I have seven 1G MySQL binlog files that I have to use to retrieve some "lost" information. I only need to get certain INSERT statements from the log (e.g. where the statement starts with "INSERT INTO table SET field1="). If I just run mysqlbinlog (even per database and using --short-form), I get a text file that is several hundred megabytes, which makes it almost impossible to parse with any other program.
Is there a way to just retrieve certain SQL statements from the log? I don't need any of the ancillary information (timestamps, autoincrement #s, etc.). I just need a list of SQL statements that match a certain string. Ideally, I would like to have a text file that just lists those SQL statements, such as:
INSERT INTO table SET field1='a';
INSERT INTO table SET field1='tommy';
INSERT INTO table SET field1='2';
I could get that by running mysqlbinlog to a text file and then parsing the results based upon a string, but the text file is way too big. It just times out any script I run and even makes it impossible to open in a text editor.
Thanks for your help in advance.
I never received an answer, but I will tell you what I did to get by.
1. Ran mysqlbinlog to a text file
2. Created a PHP script that uses fgets to read each line of the log
3. While looping through each line, the script parses it using the stristr function
4. If the line matches the string I am looking for, it logs the line to a file
It takes a while to run mysqlbinlog and the PHP script, but it no longer times out. I originally used fread in PHP, but that reads the entire file into memory and caused the script to crash on large (1G) log files. Now, it takes several minutes to run (I also set the max_execution_time variable to be larger), but it works like a charm. fgets gets one line at a time, so it doesn't take up nearly as much memory.
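The script itself isn't posted here, but a minimal sketch of the approach described above could look like this (the file names and the search string are placeholders):

    <?php
    // read the mysqlbinlog text output one line at a time and log only the matching lines
    $needle = "INSERT INTO table SET field1=";   // string to look for (placeholder)
    $in  = fopen("binlog_output.txt", "r");      // text file produced by mysqlbinlog (placeholder name)
    $out = fopen("matching_inserts.sql", "w");   // file the matching lines are written to

    while (($line = fgets($in)) !== false) {     // fgets reads one line at a time, so memory stays low
        if (stristr($line, $needle) !== false) { // case-insensitive match on the search string
            fwrite($out, $line);
        }
    }

    fclose($in);
    fclose($out);
    ?>

Since fgets only ever holds one line in memory, this avoids the crashes that fread caused on the 1G files.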