How to store a String (length > 255) from a query? - ms-access

I'm using Access 2000 and I have a query like this:
SELECT function(field1) AS Results FROM mytable;
I need to export the results as a text file.
The problem is:
function(field1) returns a fairly long string (more than 255 characters) that cannot be stored in its entirety in the Results field created by this query.
When I export this query as a text file, I can't see the whole string; it gets truncated.
Is it possible to cast function(field1) so that it returns a Memo-type field containing the full string?
Something like this:
SELECT (MEMO)function(field1) AS Results FROM mytable;
Do you know of any other solutions?

There is an official Microsoft support page on this problem:
ACC2000: Exported Query Expression Truncated at 255 Characters
They recommend that you append the expression data to a table that has a Memo field and export it from there. It's kind of an ugly solution, but you cannot cast an expression to a type in MS Access, so it might be the best option available.
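A minimal sketch of that workaround, assuming a staging table named ExportStaging with a single Memo column named Results (both names hypothetical; the query pieces are from the question):
CREATE TABLE ExportStaging (Results MEMO);
INSERT INTO ExportStaging (Results)
SELECT function(field1) FROM mytable;
You would then export ExportStaging as the text file instead of the query; Memo fields are not subject to the 255-character truncation.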

I don't know how to do quite what you're hoping (which would make sense), but a possible alternative could be to create 2 or 3 fields (or separate queries), extract a different portion of the text into each, and then concatenate them after retrieval. In Access SQL, using Mid() to take 255-character slices:
SELECT Mid(function(field1), 1, 255) AS Part1, Mid(function(field1), 256, 255) AS Part2,
       Mid(function(field1), 511, 255) AS Part3 FROM mytable;
Edit: it's odd that a string longer than 255 characters is stored but the field isn't a Memo. What's going on there? Another alternative, if you have access to the db, is to change the field type. (Back up the db first!)
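If the long text really does live in a stored column rather than being built by the expression, the type change is a one-liner in Access SQL (a sketch, using the table and column names from the question):
ALTER TABLE mytable ALTER COLUMN field1 MEMO;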


BCP CHAR value to Snowflake

I am trying to create a BCP file with a | delimiter and then load it into a Snowflake table.
Issue:
In SQL Server there are columns defined as CHAR(4) that hold values like "sss",
so when I run BCP the value is padded to a length of 4 ("sss ") and loaded into Snowflake that way.
Because of this our reports are failing: they do something like WHERE column = 'SSS', but due to the trailing space in Snowflake the correct rows are not showing up.
We do not want to change our reports. So, is there a way for BCP to handle the padding or trimming of these columns?
Note that there are 24 tables, each with around 130+ columns, so I can't go and put Trim functions on every CHAR column.
If your BCP file is maintaining the trailing space, then Snowflake will retain it too, as long as the field is enclosed per FIELD_OPTIONALLY_ENCLOSED_BY with a " or '. You may also want to make sure the TRIM_SPACE option is set correctly on the file format definition for your COPY INTO command.
If your BCP file isn't maintaining the space and you can't figure out how to get that to work, you could force the space back in during the COPY INTO command with some string functions in your SELECT, or you could create a view for your report that applies the same string functions to force the space back in for your report to work from.
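A minimal sketch of such a COPY INTO, with the table, stage, and file names hypothetical:
COPY INTO mytable
FROM @my_stage/mytable.dat
FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = '|' TRIM_SPACE = TRUE);
With TRIM_SPACE = TRUE, Snowflake strips leading and trailing spaces from the fields as it loads them, which would remove the CHAR padding on the way in.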
So, is there a way that BCP can handle the padding or trimming of these columns?
Yes, but not by some switch or option. The correct way to handle this is to set your data types up front. As someone mentioned in the comments on your question, the query that creates your BCP output should use VARCHAR(4) instead of CHAR(4). BCP is giving you exactly what you asked of it. The way to avoid the whitespace is to use VARCHAR; a quick sketch follows below.
Seems like a fairly quick "find and replace" against the scripted-out query objects would work fine, but you know your situation best.
Additionally, "trim" on its own won't work, FYI. Even if the value of the field is only "SSS" (as in your example), if the result/column is defined as CHAR(4) you will get 4 bytes of data, with a blank in the 4th position, since you only had 3 bytes of data. Trim will work during the query; the padded " " you are getting is placed there by the copy out. The way to correct this is to set up your data types as you need them up front.
Unless someone knows of a better way in Snowflake (I'm not familiar with it), the only other option is to manipulate the file in between SQL Server and Snowflake, replacing " |" with "|"... but... blech.
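A sketch of that query-side fix (column and table names hypothetical); RTRIM drops the trailing blanks and the CAST keeps the result as variable-length text:
SELECT CAST(RTRIM(col1) AS VARCHAR(4)) AS col1,
       CAST(RTRIM(col2) AS VARCHAR(4)) AS col2
FROM dbo.mytable;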
This is a known "issue" with BCP. The "solution" is to use the queryout option, which means you must include a query with every export. But the data are the way they are.
E.g.: https://social.msdn.microsoft.com/Forums/sqlserver/en-US/88c258fe-d1a6-4f3a-9dac-40388d04e9c7/remove-space-in-columns-on-bcp-out?forum=transactsql
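A hypothetical queryout invocation along those lines (server, database, table, and file names are placeholders):
bcp "SELECT CAST(RTRIM(col1) AS VARCHAR(4)), CAST(RTRIM(col2) AS VARCHAR(4)) FROM mydb.dbo.mytable" queryout mytable.dat -c -t "|" -S myserver -T
Here -c writes character data, -t "|" sets the pipe delimiter, -S names the server, and -T uses a trusted connection; the trimmed values land in the file without the CHAR padding.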
But this is really a Snowflake problem, because Snowflake has its own default CHAR semantics.
You get a warning in the documentation (String & Binary Data Types), but it doesn't tell the whole truth.
The following, executed on Oracle (and apparently MSSQL? MySQL?), will select the 'aaa' row:
CREATE TABLE C AS SELECT CAST('aaa ' AS CHAR(4)) t FROM DUAL;
SELECT * FROM C WHERE t = 'aaa';
but won't on Snowflake, unless you create the column with COLLATION:
CREATE OR REPLACE TABLE C (t CHAR(4) COLLATE 'en_US-rtrim');
INSERT INTO C VALUES('aaa ');
SELECT * FROM C WHERE t = 'aaa';
Unfortunately, you can't ALTER the collation after creation, which would have been convenient after a COPY INTO <table>.
PS: Mike Walton's answer is better, TRIM_SPACE is much cleaner than COLLATE.

MySQL export to MS Excel: 1-9 becomes 9-Jan or 42013

I use Workbench to query database at work. We have a field which indicates company size and has the following options :
1-9
10-49
50-99
100-499
500+
When I export the results containing this field to Excel (which I use for analysis), 1-9 becomes 9-Jan; when I change the format of the cell to text, it becomes 42013. Similarly, 10-49 becomes Oct-50 and, as text, 18537. Is there a way to avoid this?
I know this may seem trivial, but I take a download of the results every couple of hours or so, and currently I use the Replace function in Excel to fix this, which is a time cost. Also, adding manual intervention increases the probability of error, which I want to minimize. I would ideally like the result to export as 1-9, exactly as it exists in the database, since that is the input the analytical model is built to take.
I would appreciate any help or pointers on how to fix this issue.
Thanks!
You are not saying how you are bringing the data into Excel. The simplest method is to bring the column in as "text": when importing the data into Excel, set the column type to "text".
Alternatively, when you create the output file, you can prepend the value with a single quote or some other character:
select concat('''', company_size)
select concat('_', company_size)
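As a complete statement, the first variant would look like this (table name hypothetical):
SELECT CONCAT('''', company_size) AS company_size FROM companies;
The leading single quote is intended to make Excel treat the cell contents as text rather than parsing them as a date.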
Appreciate the help! I used to export the results as CSV, which I think is what caused the problem. I exported them as XML instead and that solved it; the fields appear exactly as they exist in the database. Thanks a lot. The concatenation would work as well!

How to replace all occurrences of matching string in a database table using ColdFusion

I'm working with an MS Access database, using one particular table. Scattered throughout the table, at varying positions in date columns (which themselves can be in varying orders as a result of the data import), is the text "Not known". I want to replace occurrences of that text string across the whole data table.
The only way I can think of doing it is to export to CSV format, do a REReplace, and then import the data again, but I would like to know if there is a 'slicker' way.
The columns contain data imported from a CSV file, so all the columns are text; they can contain a mix of "date strings", text, numbers (as strings), and null.
You can use Replace(); it follows the basic T-SQL implementation:
http://msdn.microsoft.com/en-us/library/ms186862.aspx
Here is an example I did updating the customers table of the Northwind sample database:
update customers set Customers.[Job Title] = replace( Customers.[Job Title], 'Purchasing', 'Manufacturing');
So, to distill it into a generic example:
update TABLENAME set FIELD =
replace( FIELD, 'STRING_TO_REPLACE', 'STRING_TO_REPLACE_WITH' )
That updates the entire table in one statement. Be careful ;)
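Since "Not known" is scattered across several columns, one hedged sketch is to list each affected column in a single UPDATE (table and column names hypothetical, and this assumes Replace() is available over your ColdFusion connection):
UPDATE mytable
SET date1 = Replace(date1, 'Not known', ''),
    date2 = Replace(date2, 'Not known', ''),
    date3 = Replace(date3, 'Not known', '');
Access SQL has no way to apply Replace() to every column at once, so each column still has to be named explicitly.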
You can do this in Access by running the Edit > Replace command. If you need to do it in code, you can open a recordset, loop through the records, and for each field run:
rst.Fields(i) = Replace(rst.Fields(i), "Not known", "Something")
This is how it works in VBA; I believe you can do something similar in ColdFusion.
Why not just open the CSV file in Notepad++ (or similar) and do a Find/Replace?

SSIS ISNULL to empty string

So I am currently working on a migration from an old Advantage Database Server to SQL Server 2005 using SSIS 2008. One of the columns in the old Advantage database is a MEMO type. By default this translates to a DT_TEXT column. In the new database I do not need a field this large and can limit it to something such as VARCHAR(50). I successfully set up a Derived Column transformation to convert this with the following expression:
(DT_STR,50,1252)[ColumnName]
Now I want to go a step further and replace all NULL values with an empty string. This would seem easy enough using an ISNULL([ColumnName]) ? "" : (DT_STR,50,1252)[ColumnName] expression, but the problem is that the OLE DB Destination then reports the following error:
Cannot convert between unicode and non-unicode strings...
So apparently the whole ISNULL expression converts the data type to a Unicode string [DT_WSTR]. I have tried a variety of casts on the whole expression and on different parts of it, but I cannot get the data type to match what I need.
First, is it possible to convert the DT_TEXT type directly to Unicode? From what I can tell, the casts don't work that way. If not, is there a way to write the expression so that NULL values get converted to empty strings?
Thank you for all your help!
Give this a try in your derived column.
(DT_STR,50,1252) (ISNULL(ColumnName) ? "" : (DT_STR,50,1252) ColumnName)
It includes an additional type cast, with the conditional (?:) wrapped in parentheses, to ensure the desired processing sequence. I think your original expression was implicitly casting to DT_WSTR because "" defaults to DT_WSTR. With this new version, you force the cast to DT_STR after the conditional is evaluated.
I figured something out that works. It may not be the best solution, but it will work for my situation.
From my OLE DB Source I first went into a Derived Column transformation, where I used ISNULL; that ended up converting the column to the Unicode DT_WSTR type. Since I could not get any casts to bring it back to the required type, I then added a Data Conversion transformation between the Derived Column and the OLE DB Destination, which takes the input string and converts it back to DT_STR. All this converting back and forth feels a little annoying, but the column does not contain any funky data that I should have to worry about, so I suppose it will work.
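For reference, the two-step version looks something like this (the expression and lengths are illustrative, and the Data Conversion step is configured in the designer rather than written as an expression):
Derived Column: ISNULL([ColumnName]) ? "" : (DT_WSTR,50)[ColumnName]
Data Conversion: DT_WSTR(50) to DT_STR(50, code page 1252)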
Thanks to all those who pondered the solution, and if you find some awesome way to tackle it, I would be more than interested.

How do I get SSIS Data Flow to put '0.00' in a flat file?

I have an SSIS package with a Data Flow that takes an ADO.NET data source (just a small table), executes a SELECT * query, and outputs the query results to a flat file (I've also tried just pulling the whole table instead of using a SQL SELECT).
The problem is that the data source pulls a column with a Money data type, and if the value is not zero, it comes into the text flat file just fine (like '123.45'), but when the value is zero, it shows up in the destination flat file as '.00'. I need to know how to get the leading zero back into the flat file.
I've tried various data types for the output (in the Flat File Connection Manager), including currency and string, but this seems to have no effect.
I've tried a case statement in my select, like this:
CASE WHEN columnValue = 0 THEN
'0.00'
ELSE
columnValue
END
(still results in '.00')
I've tried variations on that like this:
CASE WHEN columnValue = 0 THEN
convert(decimal(12,2), '0.00')
ELSE
convert(decimal(12,2), columnValue)
END
(Still results in '.00')
and:
CASE WHEN columnValue = 0 THEN
convert(money, '0.00')
ELSE
convert(money, columnValue)
END
(results in '.0000000000000000000')
This silly little issue is killin' me. Can anybody tell me how to get a zero Money datatype database value into a flat file as '0.00'?
I was having the exact same issue, and soo's answer worked for me. I sent my data into a Derived Column transformation (from the Data Flow toolbox), added the derived column as a new column of data type Unicode string [DT_WSTR], and used the following expression:
Price < 1 ? "0" + (DT_WSTR,6)Price : (DT_WSTR,6)Price
I hope that helps!
Could you use a Derived Column to change the format of the value? Did you try that?
I used the Advanced Editor to change the column from double-precision float to decimal and then set the Scale to 2.
Since you are exporting to a text file, just export the data preformatted.
You can do it in the query or create a derived column, whichever you are more comfortable with.
I chose to make the column 15 characters wide. If you import into a system that expects numbers, those leading zeros should be ignored... so why not just standardize the field length?
A simple solution in SQL is as follows:
select
cast(0.00 as money) as col1
,cast(0.00 as numeric(18,2)) as col2
,right('000000000000000' + cast( 0.00 as varchar(10)), 15) as col3
go
col1 col2 col3
--------------------- -------------------- ---------------
.0000 .00 000000000000.00
Simply replace the '0.00' with your column name, and don't forget to add the FROM table_name, etc.
It is good to use a Derived Column here, and you need to check the condition as well:
pricecheck <= 0 ? "0" + (DT_WSTR,10)pricecheck : (DT_WSTR,10)pricecheck
Alternatively, you can use a VB script.
Ultimately what I ended up doing was using the FORMAT() function.
CAST(FORMAT(balance, '0000000000.0000') AS varchar(30)) AS "balance"
This does have a significant CPU performance impact (often at least an order of magnitude) due to the way SQL Server implements that function, but nothing worked more easily, more correctly, or more consistently for me. I was working with fewer than 100,000 rows and the package executes no more than once an hour, so going from 100 ms to 1,000 ms just wasn't a big deal in my situation.
The FORMAT() function returns nvarchar(4000) by default, so I also cast it back to a varchar of appropriate size, since my output file needed to be in Windows-1252 encoding. Transcoding text is much more obnoxious in SSIS than it has any right to be.