I have imported a CSV file from MySQL documenting part numbers and descriptions. Some of these part numbers have values like 1234567890987654321, which Excel then shortens to 1.23e18. The problem is that I cannot query a part with the data in this format.
I cannot feasibly go through every cell, as there are just over 28,000 of them. I have converted the row to text; however, this does not change the data already in the cells.
The closest thing I have to a solution is deleting the cells and then undoing, which gets the number into a textual format but then gives me a 'number in text field' error.
Also, some parts have part numbers like 12E345, which Excel changes to 1200000000000000000000000000000000000000000000... you get the picture.
Very annoying...
I would like a batch process to change all the values to text format. Thanks in advance.
Instead of just opening the CSV in Excel, import it with Data -> Get External Data -> From Text.
You will first have to pick basic things such as the "delimited" format, whether the first row contains headers, the separator, etc.
In the third step of the Text Import Wizard you can pick the data type of each column; picking Text for the part-number columns will probably solve the problem.
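If you would rather batch this in code than click through the wizard, here is a minimal sketch of the same "treat every column as text" idea in Python with pandas (the file name parts.csv is a placeholder; pandas and openpyxl are assumed installed):

import pandas as pd

# dtype=str makes pandas read every column as text, so part numbers like
# 1234567890987654321 or 12E345 are never reinterpreted as numbers.
df = pd.read_csv("parts.csv", dtype=str)

# Writing to .xlsx keeps the values stored as text in the workbook. Excel may
# still flag them with the "number stored as text" warning, but the underlying
# values are untouched and searchable.
df.to_excel("parts.xlsx", index=False)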
Long-time viewer, first-time poster.
The issue I am struggling with relates to how my timestamp data appears in Excel once I've run my code in Snowflake and exported the results to CSV. At the moment I have to double-click each timestamp cell, once the data is in Excel, for the true format (how it appears in the Snowflake results) to show.
There is a manual workaround once the data is in Excel; however, I am automating this report for a client, so it must be presented correctly before the export.
As it stands (see below), the timestamp begins with YYYY-MM-DD, and I have been asked to flip this to begin with DD. Since I need to reformat the current timestamp anyway, I thought I might as well set it up so the timestamp comes out as text within the CSV export (from what I have read in other forums, if you convert to text it is displayed in Excel exactly as it appears in Snowflake).
[screenshot: the "Can't parse" error in Snowflake]
As you can see, I continue to get the "Can't parse" error. The example timestamp given is row 1 of the 'QuoteDate' variable.
The second part of the issue (really the primary part) is how the timestamp completely changes format when exported via CSV to Excel. In the screenshot below I have double-clicked the first three rows, leaving row 4 selected to show the error: the formula bar displays the correct format, but the cell itself does not.
[screenshot: the exported timestamps in Excel]
I hope this all makes sense. I would love any assistance on how to amend this timestamp issue so that the code runs correctly and presents my client with a properly formatted timestamp in their extract.
Thank you :)
For changing the format:
SELECT TO_CHAR(TO_TIMESTAMP('2019-01-02 09:36:22.507', 'YYYY-MM-DD HH24:MI:SS.FF'), 'DD-MM-YYYY HH24:MI:SS.FF') AS TS
Timestamps/dates are stored as numbers in the database; you need to convert them to the required format for correct display.
to_timestamp --> converts an input expression into the corresponding timestamp; it expects the input to be in the format provided as the second argument.
to_char --> converts the input expression to a string.
For preserving the data format while converting from CSV to Excel, see Saving to CSV in Excel loses regional date format.
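If the reformatting has to happen after the export rather than inside Snowflake, here is a rough sketch of the same fix applied to the CSV itself in Python (the column name QuoteDate is taken from the question; the file names and the fraction handling are my assumptions):

import pandas as pd

# Read the export with every column as text so nothing gets reinterpreted.
df = pd.read_csv("export.csv", dtype=str)

# Parse the YYYY-MM-DD strings, then rewrite them as DD-MM-YYYY text, so the
# CSV carries exactly the characters the client should see. strftime's %f
# pads the fraction to six digits, so trim the last three to keep milliseconds.
ts = pd.to_datetime(df["QuoteDate"], format="%Y-%m-%d %H:%M:%S.%f")
df["QuoteDate"] = ts.dt.strftime("%d-%m-%Y %H:%M:%S.%f").str[:-3]

df.to_csv("export_formatted.csv", index=False)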
I'm using a .XLS template instead of .RTF.
The problem is that even when I format the Excel cell as Text, I'm still getting #NUM! in the output.
The template treats the data as an exponential value because of the character "E" between the numbers. The exact value to be displayed in the output is "12E18157", but I get #NUM! no matter what I do.
Any suggestions?
I was not successful formatting the .xls template cell: no matter how the cell is formatted, the template treats the value as an exponential number. So I concatenated a space onto that particular column in my query (the XML Data Definition file) itself, and that fixed the problem.
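For anyone preparing the data outside the Data Definition query, the same space trick can be sketched in Python as a pre-processing step (the file and the PART_NO column name are placeholders, not from the original post):

import csv

with open("parts.csv", newline="") as src, \
     open("parts_fixed.csv", "w", newline="") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    header = next(reader)
    writer.writerow(header)
    col = header.index("PART_NO")  # placeholder column name
    for row in reader:
        # A leading space stops "12E18157" from parsing as 12 x 10^18157.
        row[col] = " " + row[col]
        writer.writerow(row)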
I am trying to export HTML data into Excel, but I am facing a problem with date values:
I am expecting
<pre>"<tr><td style='mso-number-format:d\-mmm\-yyyy' >Mar-21-2014</td></tr>"</pre>
to be 21-Mar-2014,
but when I open it in Excel I find Mar-21-2014 (unchanged)!
Why does this happen?
Excel does not recognise this as a date, probably because your regional settings have DMY order and the text is in MDY order.
After importing the text into Excel, select the column, then click Data > Text To Columns > Next > Next
Now you are in Step 3 of the Text To Columns wizard. Click the Date radio button and in the drop-down next to it select MDY (i.e. the order in the imported text). Then click Finish and all text values will be real dates in your regional setting date order.
Edit after comment:
The mso-number-format does not look quite right. I'm not too familiar with it, but as far as I know the format itself must be in quotes. See this other Stack Overflow thread for some scenarios.
Even if your style attribute syntax were correct, the unformatted date must be in a form that your Excel will normally recognize as a date. If Excel does not recognize it as a date, it will treat the value as text, and you cannot number-format text into a date. You need to start with a date.
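To make the parse-then-reformat point concrete, here is a small Python sketch of the same idea: read the text in the order it is actually in (MDY), and only then emit the order you want (DMY):

from datetime import datetime

# Parse the text using the order it is really in (month-day-year)...
d = datetime.strptime("Mar-21-2014", "%b-%d-%Y")

# ...then format it in the day-first order the question asks for.
print(d.strftime("%d-%b-%Y"))  # 21-Mar-2014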
I am using Office 2003
In Access, I export values from a form into a .xls file; after that, using mail merge in Word, I import the data to be displayed in the document. Data such as dates and strings are displayed correctly.
In Access there's the value 9,916.12, which is exported to the .xls as 'price' containing 9,916.12. Both values match and keep the same format, but when mail merge kicks in, the value displayed in the document becomes 9916,1200000000008.
I am lost as to what is causing this. The field being exported contains only two decimals and displays in Excel as a value with only two decimals, yet when Word reads it, it adds seemingly random decimals. The error persists if I manually alter the value in Excel, and also if I choose a different record to export.
Any tips on how to solve the problem?
See this Microsoft Answers discussion and this in-depth description of how to use a merge field such as {Mergefield NumberFieldName \# ",0.00"} to work around the issue. It's been a while since I had to do mail merges, especially with Word 2003, but I think that should do it.
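The root cause is ordinary binary floating point: Excel stores 9,916.12 as a double and rounds only the display to two decimals, while mail merge reads the full stored value. A quick Python illustration (the exact trailing digits come from the double representation, not from your data):

from decimal import Decimal

# 9916.12 has no exact binary representation. Decimal exposes the value the
# double actually stores, which is what Word's mail merge picks up.
print(Decimal(9916.12))   # roughly 9916.1200000000008...
print(f"{9916.12:.2f}")   # 9916.12 -- the rounded display Excel shows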
I'm getting some import errors when importing data from Excel in to Access.
I'm using the DoCmd.TransferSpreadsheet acImport, ... method to do the import.
The column of data that is failing contains a mixture of number-only entries and strings. It's the string entries that are failing to import.
Is there something I can do to the data in this column of the Excel spreadsheet to ensure it gets across to Access in its entirety?
While your Excel.Application code from your previous question is in there "counting rows", it could also inspect the cell in that column in the first data row. If the value is numeric, your code could glue an apostrophe (') onto the beginning to force it to be a label, and then save the Excel file. Then, when Access's TransferSpreadsheet method looks at the first row, it will decide that the column is text, not numeric.
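If you are scripting outside Excel.Application, here is a rough equivalent of the same trick in Python with openpyxl (the library and file names are my assumptions, column B stands in for the mixed column, and openpyxl only handles .xlsx workbooks, not the older .xls):

from openpyxl import load_workbook

wb = load_workbook("import.xlsx")
ws = wb.active

# Turn the first data row's value in the mixed column into a string so that
# TransferSpreadsheet samples text and types the whole column as text.
cell = ws["B2"]  # B is a placeholder for the failing column
if isinstance(cell.value, (int, float)):
    cell.value = str(cell.value)
    cell.number_format = "@"  # Excel's Text format

wb.save("import.xlsx")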
I found something approaching a workaround:
Sort your offending column in Excel so that your numerics appear as the top-most group and your alphanumerics are at the bottom.
Highlight all your numerics now grouped in the column
Go to Data > Text to Columns
Wizard Page 1: Select "Delimited"
Wizard Page 2: Tick "Tab"
Wizard Page 3: Select "Text" and finish
The numbers will now be stored as text (with the leading apostrophe) and should import OK into Access.
Source
There's probably a better way of doing this though if anyone wants to chip-in.