Convert integer to date DDMMYYYY - SSIS

I am uploading an Excel data sheet. The sheet has a numeric column which I want to convert to a date, so 40955 should look like 04.09.1955 (DDMMYYYY).
Can someone help me out here? I tried using the Data Conversion transformation component and it's showing me an error.

The main obstacle here is that your values are not in an easy-to-use format.
To do what you specify, you need to break the value into its parts, concatenate them again, and then convert. All of this can be done in a single statement; for explanation I show the steps below.
DECLARE
    @someval int = 40955,
    @dateval int,
    @dated   date
;
SELECT
    -- single extraction steps
    @someval % 100           AS yearval,
    ( @someval / 100 ) % 100 AS monthval,
    ( @someval / 10000 )     AS dayval
;
SELECT
    --@dateval =
    -- extract year and push it to front
    ( @someval % 100 ) * 10000
    -- extract month and push into middle
    + ( @someval / 100 ) % 100 * 100
    -- extract day and keep at end
    + ( @someval / 10000 )
;
SELECT
    -- clip all elements into single integer
    @dateval =
    ( @someval % 100 ) * 10000
    + ( @someval / 100 ) % 100 * 100
    + ( @someval / 10000 )
;
SELECT
    -- 112 = yyyymmdd format
    @dated = CONVERT( date, CAST( @dateval AS varchar(8) ), 112 )
;
SELECT
    -- show as standard (format 120) date aka ISO 8601 readable
    @dated AS Dated
;
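Once the steps are clear, the whole conversion collapses into a single statement, as mentioned above; a minimal sketch of that combined form:
-- Minimal sketch: same logic as the steps above, in one statement
DECLARE @someval int = 40955;

SELECT CONVERT( date,
                CAST( ( @someval % 100 ) * 10000       -- year to the front
                    + ( @someval / 100 ) % 100 * 100   -- month into the middle
                    + ( @someval / 10000 )             -- day at the end
                      AS varchar(8) ),
                112 ) AS Dated;                        -- 112 = yyyymmdd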
However, I suspect that the value you receive from Excel is a kind of serial (Julian-style) date. In that case the following answer provides a solution:
convert Excel Date Serial Number to Regular Date
Keep in mind that in SSIS you need to wrap this code into either a derived column or a transformation.
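If it does turn out to be an Excel serial number, a minimal T-SQL sketch of that conversion (Excel serial dates count days from 1900-01-01 and include the fictitious 29 Feb 1900, hence the `- 2`) would be:
DECLARE @excelSerial int = 40955;

-- 40955 as an Excel date serial corresponds to 16 February 2012
SELECT DATEADD(day, @excelSerial - 2, '19000101') AS ExcelDate;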

Related

SUM time values MySQL [duplicate]

This question already has answers here:
Surpassing MySQL's TIME value limit of 838:59:59
(7 answers)
Closed 4 years ago.
I am trying to sum time values and display the result in the format hours:minutes:seconds, e.g. 100:30:10.
SEC_TO_TIME(SUM(TIME_TO_SEC(ActualHours))) AS Hours
But I'm having a problem because TIME's maximum value is 838:59:59.
So if the sum exceeds this value it is clipped; e.g. a total of 900 hours shows as 838:59:59, which is wrong.
How do I display the total hours if the total is over 838:59:59?
If I had to do this conversion in SQL, I would do something like this:
SELECT CONCAT( ( _secs_ DIV 3600)
             , ':'
             , RIGHT(CONCAT('0',( _secs_ DIV 60 ) MOD 60 ),2)
             , ':'
             , RIGHT(CONCAT('0',( _secs_ MOD 60)),2)
             ) AS `h:mm:ss`
We can just replace the _secs_ with the expression that returns the number of seconds we want to convert. Using the expression given in the question, we get something like this:
SELECT CONCAT( ( SUM(TIME_TO_SEC(ActualHours)) DIV 3600)
             , ':'
             , RIGHT(CONCAT('0',( SUM(TIME_TO_SEC(ActualHours)) DIV 60 ) MOD 60 ),2)
             , ':'
             , RIGHT(CONCAT('0',( SUM(TIME_TO_SEC(ActualHours)) MOD 60)),2)
             ) AS `h:mm:ss`
DEMONSTRATION
The syntax provided in this answer is valid in MySQL 5.6. As a demonstration, using a user-defined variable @_secs_ as the number-of-seconds expression:
Set the user-defined variable for the demonstration:
SELECT @_secs_ := ( 987 * 3600 ) + ( 5 * 60 ) + 7 ;
returns
@_secs_ := ( 987 * 3600 ) + ( 5 * 60 ) + 7
-------------------------------------------
3553507
demonstrating the query pattern:
SELECT CONCAT( ( @_secs_ DIV 3600)
             , ':'
             , RIGHT(CONCAT('0',( @_secs_ DIV 60 ) MOD 60 ),2)
             , ':'
             , RIGHT(CONCAT('0',( @_secs_ MOD 60)),2)
             ) AS `hhh:mm:ss`
returns
hhh:mm:ss
---------
987:05:07
Here is one way we can do this:
SELECT
CONCAT(CAST(FLOOR(seconds / 3600) AS CHAR(50)), ':',
CAST(FLOOR(60*((seconds / 3600) - FLOOR(seconds / 3600))) AS CHAR(50)), ':',
CAST(seconds % 60 AS CHAR(50))) AS time
FROM yourTable;
For an input of 10,000,000 (ten million) seconds, this would generate:
2777:46:40
Demo
Use some simple math to build a time string from seconds; replace 35000 with your column.
SELECT CONCAT(FLOOR(35000/3600),':',FLOOR((35000%3600)/60),':',(35000%3600)%60)
A fiddle to play with
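Putting the same idea into a self-contained example, a minimal MySQL sketch (the table and column names are hypothetical) that also pads minutes and seconds to two digits:
-- Hypothetical table, for illustration only
CREATE TABLE timesheet (ActualHours TIME);
INSERT INTO timesheet VALUES ('500:30:10'), ('400:00:00');

-- TIME tops out at 838:59:59, so build the display string from total seconds instead
SELECT CONCAT( SUM(TIME_TO_SEC(ActualHours)) DIV 3600
             , ':'
             , LPAD((SUM(TIME_TO_SEC(ActualHours)) DIV 60) MOD 60, 2, '0')
             , ':'
             , LPAD( SUM(TIME_TO_SEC(ActualHours)) MOD 60, 2, '0')
             ) AS total_hours   -- returns 900:30:10
FROM timesheet;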

SQL Server - single Inline Query - (decimal remainder of x/y (rounded to 6 characters) ) / z

Can I ask for help with a SQL statement, please? I have to do the calculation inline and cannot declare variables for it.
Calculation:
-91000000 / 2700000 = -33.7037037037
I need the remainder (7037037037, but only up to 6 characters) to be divided by 15000:
703703 / 15000 = final answer of 46.913533
I thought I could do this:
select cast(ParseName(abs(cast(-91000000 as decimal)/ 2700000 ) %1,1) as numeric(8,8)) / 15000
WITH cte AS
(
SELECT -91000000 AS x, 2700000 AS y
)
SELECT ABS(ROUND((CAST(x AS decimal) / CAST(y AS decimal)) - (x/y), 6)) * 1000000 / 15000 FROM CTE
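A similar inline sketch using the literals from the question, truncating rather than rounding the fraction to six digits so it matches the 703703 in the question:
-- % works on decimals, so "% 1" isolates the fractional part; FLOOR truncates it to 6 digits
SELECT FLOOR(ABS(CAST(-91000000 AS decimal(18, 6)) / 2700000 % 1) * 1000000) / 15000.0
       AS FinalAnswer;   -- 46.913533...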

MySQL slope (trend) of single field (line of best fit)

I have a simple table called LOGENTRY with fields called "DATE" and "COST". Example:
+--------------+-------+
| DATE | COST |
+--------------+-------+
| MAY 1 2013 | 0.8 |
| SEP 1 2013 | 0.4 |
| NOV 1 2013 | 0.6 |
| DEC 1 2013 | 0.2 |
+--------------+-------+
I would like to find the slope of the COST field over time (a range of rows selected), resulting in
SLOPE=-0.00216 (This is equivalent to Excel's SLOPE function, aka linear regression).
Is there a simple way to SELECT the slope of COST? If I do the math in the calling language (PHP) I can find the slope as:
SLOPE = (N * Sum_XY - Sum_X * Sum_Y)/(N * Sum_X2 - Sum_X * Sum_X);
I saw some similar questions posted but they are more complex. I'm trying to strip this example down to the simplest situation - so I can understand the answer :) Here's as close as I got...but MySQL complains about the syntax near:
'float)) AS Sum_X, SUM(CAST(LOGENTRY.DATE as float) * CAST(LOGENTRY.DATE'
SELECT
COUNT( * ) AS N,
SUM( CAST( LOGENTRY.DATE AS FLOAT ) ) AS Sum_X,
SUM( CAST( LOGENTRY.DATE AS FLOAT ) * CAST( LOGENTRY.DATE AS FLOAT ) ) AS Sum_X2,
SUM( LOGENTRY.COST ) AS Sum_Y, SUM( LOGENTRY.COST * LOGENTRY.COST ) AS Sum_Y2,
SUM( CAST( LOGENTRY.DATE AS FLOAT ) * LOGENTRY.COST ) AS Sum_XY
FROM LOGENTRY
It seems that MySQL cannot cast a date to float (unlike the other examples on Stack Overflow; perhaps those refer to another database). So by converting dates to Unix timestamps I am able to get an answer, with the final slope calculation done in PHP. If this is wrong, please post and I will remove this answer.
SELECT
COUNT(*) AS N,
SUM(UNIX_TIMESTAMP(LOGENTRY.DATE)) AS Sum_X,
SUM(UNIX_TIMESTAMP(LOGENTRY.DATE) * UNIX_TIMESTAMP(LOGENTRY.DATE)) AS Sum_X2,
SUM(LOGENTRY.COST) AS Sum_Y,
SUM(LOGENTRY.COST*LOGENTRY.COST) AS Sum_Y2,
SUM(UNIX_TIMESTAMP(LOGENTRY.DATE) * LOGENTRY.COST) AS Sum_XY
FROM LOGENTRY
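For completeness, the slope formula itself can also be computed entirely in MySQL; a sketch assuming the LOGENTRY table above (UNIX_TIMESTAMP gives seconds, so the result is multiplied by 86400 to get a per-day slope comparable to Excel's):
SELECT (N * Sum_XY - Sum_X * Sum_Y) / (N * Sum_X2 - Sum_X * Sum_X) * 86400 AS SlopePerDay
FROM (
    SELECT COUNT(*)                                             AS N,
           SUM(UNIX_TIMESTAMP(`DATE`))                          AS Sum_X,
           SUM(UNIX_TIMESTAMP(`DATE`) * UNIX_TIMESTAMP(`DATE`)) AS Sum_X2,
           SUM(COST)                                            AS Sum_Y,
           SUM(UNIX_TIMESTAMP(`DATE`) * COST)                   AS Sum_XY
    FROM LOGENTRY
) AS sums;
If precision ever becomes a concern, subtracting MIN(UNIX_TIMESTAMP(`DATE`)) from each X before summing keeps the intermediate products small without changing the slope.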

Why use the SQL Server 2008 geography data type?

I am redesigning a customer database and one of the new pieces of information I would like to store along with the standard address fields (Street, City, etc.) is the geographic location of the address. The only use case I have in mind is to allow users to map the coordinates on Google maps when the address cannot otherwise be found, which often happens when the area is newly developed, or is in a remote/rural location.
My first inclination was to store latitude and longitude as decimal values, but then I remembered that SQL Server 2008 R2 has a geography data type. I have absolutely no experience using geography, and from my initial research, it looks to be overkill for my scenario.
For example, to work with latitude and longitude stored as decimal(7,4), I can do this:
insert into Geotest(Latitude, Longitude) values (47.6475, -122.1393)
select Latitude, Longitude from Geotest
but with geography, I would do this:
insert into Geotest(Geolocation) values (geography::Point(47.6475, -122.1393, 4326))
select Geolocation.Lat, Geolocation.Long from Geotest
Although it's not that much more complicated, why add complexity if I don't have to?
Before I abandon the idea of using geography, is there anything I should consider? Would it be faster to search for a location using a spatial index vs. indexing the Latitude and Longitude fields? Are there advantages to using geography that I am not aware of? Or, on the flip side, are there caveats that I should know about which would discourage me from using geography?
Update
@Erik Philips brought up the ability to do proximity searches with geography, which is very cool.
On the other hand, a quick test is showing that a simple select to get the latitude and longitude is significantly slower when using geography (details below), and a comment on the accepted answer to another SO question on geography has me leery:
@SaphuA You're welcome. As a sidenote be VERY carefull of using a
spatial index on a nullable GEOGRAPHY datatype column. There are some
serious performance issue, so make that GEOGRAPHY column non-nullable
even if you have to remodel your schema. – Tomas Jun 18 at 11:18
All in all, weighing the likelihood of doing proximity searches vs. the trade-off in performance and complexity, I've decided to forgo the use of geography in this case.
Details of the test I ran:
I created two tables, one using geography and another using decimal(9,6) for latitude and longitude:
CREATE TABLE [dbo].[GeographyTest]
(
[RowId] [int] IDENTITY(1,1) NOT NULL,
[Location] [geography] NOT NULL,
CONSTRAINT [PK_GeographyTest] PRIMARY KEY CLUSTERED ( [RowId] ASC )
)
CREATE TABLE [dbo].[LatLongTest]
(
[RowId] [int] IDENTITY(1,1) NOT NULL,
[Latitude] [decimal](9, 6) NULL,
[Longitude] [decimal](9, 6) NULL,
CONSTRAINT [PK_LatLongTest] PRIMARY KEY CLUSTERED ([RowId] ASC)
)
and inserted a single row using the same latitude and longitude values into each table:
insert into GeographyTest(Location) values (geography::Point(47.6475, -122.1393, 4326))
insert into LatLongTest(Latitude, Longitude) values (47.6475, -122.1393)
Finally, running the following code shows that, on my machine, selecting the latitude and longitude is approximately 5 times slower when using geography.
declare @lat float, @long float,
        @d datetime2, @repCount int, @trialCount int,
        @geographyDuration int, @latlongDuration int,
        @trials int = 3, @reps int = 100000

create table #results
(
    GeographyDuration int,
    LatLongDuration int
)

set @trialCount = 0
while @trialCount < @trials
begin
    set @repCount = 0
    set @d = sysdatetime()
    while @repCount < @reps
    begin
        select @lat = Location.Lat, @long = Location.Long from GeographyTest where RowId = 1
        set @repCount = @repCount + 1
    end
    set @geographyDuration = datediff(ms, @d, sysdatetime())

    set @repCount = 0
    set @d = sysdatetime()
    while @repCount < @reps
    begin
        select @lat = Latitude, @long = Longitude from LatLongTest where RowId = 1
        set @repCount = @repCount + 1
    end
    set @latlongDuration = datediff(ms, @d, sysdatetime())

    insert into #results values(@geographyDuration, @latlongDuration)
    set @trialCount = @trialCount + 1
end

select *
from #results

select avg(GeographyDuration) as AvgGeographyDuration, avg(LatLongDuration) as AvgLatLongDuration
from #results

drop table #results
Results:
GeographyDuration LatLongDuration
----------------- ---------------
5146 1020
5143 1016
5169 1030
AvgGeographyDuration AvgLatLongDuration
-------------------- ------------------
5152 1022
What was more surprising is that even when no rows are selected, for example selecting where RowId = 2, which doesn't exist, geography was still slower:
GeographyDuration LatLongDuration
----------------- ---------------
1607 948
1610 946
1607 947
AvgGeographyDuration AvgLatLongDuration
-------------------- ------------------
1608 947
If you plan on doing any spatial computation, EF 5.0 allows LINQ Expressions like:
private Facility GetNearestFacilityToJobsite(DbGeography jobsite)
{
    var q1 = from f in context.Facilities
             let distance = f.Geocode.Distance(jobsite)
             where distance < 500 * 1609.344
             orderby distance
             select f;
    return q1.FirstOrDefault();
}
Then there is a very good reason to use Geography.
Explanation of spatial within Entity Framework.
Updated with Creating High Performance Spatial Databases
As I noted on Noel Abrahams' answer:
A note on space: each coordinate is stored as a double-precision floating-point number that is 64 bits (8 bytes) long, and an 8-byte binary value is roughly equivalent to 15 digits of decimal precision, so comparing it to a decimal(9,6), which is only 5 bytes, isn't exactly a fair comparison. Decimal would have to be a minimum of decimal(15,12) (9 bytes) for each of Lat and Long (18 bytes in total) for a real comparison.
So comparing storage types:
CREATE TABLE dbo.Geo
(
geo geography
)
GO
CREATE TABLE dbo.LatLng
(
lat decimal(15, 12),
lng decimal(15, 12)
)
GO
INSERT dbo.Geo
SELECT geography::Point(12.3456789012345, 12.3456789012345, 4326)
UNION ALL
SELECT geography::Point(87.6543210987654, 87.6543210987654, 4326)
GO 10000
INSERT dbo.LatLng
SELECT 12.3456789012345, 12.3456789012345
UNION
SELECT 87.6543210987654, 87.6543210987654
GO 10000
EXEC sp_spaceused 'dbo.Geo'
EXEC sp_spaceused 'dbo.LatLng'
Result:
name    rows    data
Geo     20000   728 KB
LatLng  20000   560 KB
The geography data-type takes up 30% more space.
Additionally, the geography data type is not limited to storing a Point: it can also store LineString, CircularString, CompoundCurve, Polygon, CurvePolygon, GeometryCollection, MultiPoint, MultiLineString, MultiPolygon and more. Any attempt to store even the simplest geography shape beyond a Point as Lat/Long columns (for example a LINESTRING(1 1, 2 2) instance) will incur an additional row for each point, a column for sequencing the order of the points, and another column for grouping the lines. SQL Server also has methods for the geography data type, including calculating Area, Boundary, Length, Distance, and more.
It seems unwise to store latitude and longitude as decimal in SQL Server.
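As a small illustration of that point, a single geography value can hold an entire line, which the two-decimal-column design cannot do without extra rows; a sketch reusing the dbo.Geo table above (well-known text for geography lists longitude first):
INSERT dbo.Geo (geo)
SELECT geography::STGeomFromText('LINESTRING(-122.360 47.656, -122.343 47.656)', 4326);

SELECT geo.STLength() AS LengthInMeters   -- length measured along the earth's surface
FROM dbo.Geo
WHERE geo.STGeometryType() = 'LineString';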
Update 2
If you plan on doing any calculations like distance, area, etc., properly calculating these over the surface of the earth is difficult. Each geography value stored in SQL Server is also stored with a spatial reference ID (SRID). These IDs can refer to different spheroids (the earth is 4326). This means that the calculations in SQL Server will actually be computed correctly over the surface of the earth (instead of as-the-crow-flies straight lines, which could pass through the surface of the earth).
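For example, a short sketch of a geodesic distance between two arbitrary points (with SRID 4326, STDistance returns metres measured along the spheroid):
DECLARE @p1 geography = geography::Point(47.6062, -122.3321, 4326);  -- Seattle
DECLARE @p2 geography = geography::Point(45.5152, -122.6784, 4326);  -- Portland
SELECT @p1.STDistance(@p2) / 1000.0 AS DistanceKm;   -- roughly 233 km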
Another thing to consider is the storage space taken up by each method. The geography type is stored as a VARBINARY(MAX). Try running this script:
CREATE TABLE dbo.Geo
(
geo geography
)
GO
CREATE TABLE dbo.LatLon
(
lat decimal(9, 6)
, lon decimal(9, 6)
)
GO
INSERT dbo.Geo
SELECT geography::Point(36.204824, 138.252924, 4326) UNION ALL
SELECT geography::Point(51.5220066, -0.0717512, 4326)
GO 10000
INSERT dbo.LatLon
SELECT 36.204824, 138.252924 UNION
SELECT 51.5220066, -0.0717512
GO 10000
EXEC sp_spaceused 'dbo.Geo'
EXEC sp_spaceused 'dbo.LatLon'
Result:
name    rows    data
Geo     20000   728 KB
LatLon  20000   400 KB
The geography data-type takes up almost twice as much space.
CREATE FUNCTION [dbo].[fn_GreatCircleDistance]
(
    @Latitude1 As Decimal(38, 19), @Longitude1 As Decimal(38, 19),
    @Latitude2 As Decimal(38, 19), @Longitude2 As Decimal(38, 19),
    @ValuesAsDecimalDegrees As bit = 1,
    @ResultAsMiles As bit = 0
)
RETURNS decimal(38,19)
AS
BEGIN
    -- Declare the return variable here
    DECLARE @ResultVar decimal(38,19)
    -- Add the T-SQL statements to compute the return value here
    /*
    Credit for conversion algorithm to Chip Pearson
    Web Page: www.cpearson.com/excel/latlong.aspx
    Email: chip@cpearson.com
    Phone: (816) 214-6957 USA Central Time (-6:00 UTC)
    Between 9:00 AM and 7:00 PM
    Ported to Transact SQL by Paul Burrows BCIS
    */
    DECLARE @C_RADIUS_EARTH_KM As Decimal(38, 19)
    SET @C_RADIUS_EARTH_KM = 6370.97327862
    DECLARE @C_RADIUS_EARTH_MI As Decimal(38, 19)
    SET @C_RADIUS_EARTH_MI = 3958.73926185
    DECLARE @C_PI As Decimal(38, 19)
    SET @C_PI = pi()
    DECLARE @Lat1 As Decimal(38, 19)
    DECLARE @Lat2 As Decimal(38, 19)
    DECLARE @Long1 As Decimal(38, 19)
    DECLARE @Long2 As Decimal(38, 19)
    DECLARE @X As bigint
    DECLARE @Delta As Decimal(38, 19)

    If @ValuesAsDecimalDegrees = 1
    Begin
        set @X = 1
    END
    Else
    Begin
        set @X = 24
    End

    -- convert to decimal degrees
    set @Lat1 = @Latitude1 * @X
    set @Long1 = @Longitude1 * @X
    set @Lat2 = @Latitude2 * @X
    set @Long2 = @Longitude2 * @X

    -- convert to radians: radians = (degrees/180) * PI
    set @Lat1 = (@Lat1 / 180) * @C_PI
    set @Lat2 = (@Lat2 / 180) * @C_PI
    set @Long1 = (@Long1 / 180) * @C_PI
    set @Long2 = (@Long2 / 180) * @C_PI

    -- get the central spherical angle
    set @Delta = ((2 * ASin(Sqrt((power(Sin((@Lat1 - @Lat2) / 2) ,2)) +
                 Cos(@Lat1) * Cos(@Lat2) * (power(Sin((@Long1 - @Long2) / 2) ,2))))))

    If @ResultAsMiles = 1
    Begin
        set @ResultVar = @Delta * @C_RADIUS_EARTH_MI
    End
    Else
    Begin
        set @ResultVar = @Delta * @C_RADIUS_EARTH_KM
    End

    -- Return the result of the function
    RETURN @ResultVar
END
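A possible call of the function above (all arguments supplied; 1 = inputs in decimal degrees, 0 = result in kilometres); the two points are arbitrary:
SELECT dbo.fn_GreatCircleDistance(47.6062, -122.3321,
                                  45.5152, -122.6784,
                                  1, 0) AS DistanceKm;   -- roughly 233 km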

How to convert float to varchar in SQL Server

I have a float column with numbers of different length and I'm trying to convert them to varchar.
Some values exceed bigint max size, so I can't do something like this
cast(cast(float_field as bigint) as varchar(100))
I've tried using decimal, but the numbers aren't of the same size, so this doesn't help either:
CONVERT(varchar(100), Cast(float_field as decimal(38, 0)))
Any help is appreciated.
UPDATE:
Sample value is 2.2000012095022E+26.
Try using the STR() function.
SELECT STR(float_field, 25, 5)
STR() Function
Another note: this pads on the left with spaces. If this is a problem, combine with LTRIM:
SELECT LTRIM(STR(float_field, 25, 5))
The only query bit I found that returns the EXACT same original number is
CONVERT (VARCHAR(50), float_field,128)
See http://www.connectsql.com/2011/04/normal-0-microsoftinternetexplorer4.html
The other solutions above will sometimes round or add digits at the end
UPDATE: As per comments below and what I can see in https://msdn.microsoft.com/en-us/library/ms187928.aspx:
CONVERT (VARCHAR(50), float_field,3)
Should be used in new SQL Server versions (Azure SQL Database, and starting in SQL Server 2016 RC3)
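A short sketch of the difference using the sample value from the question (style 3 requires SQL Server 2016+ or Azure SQL Database):
DECLARE @f float = 2.2000012095022E+26;
SELECT CONVERT(varchar(50), @f)    AS DefaultStyle,  -- scientific notation, at most 6 significant digits
       CONVERT(varchar(50), @f, 3) AS Style3;        -- full 17-digit precision of the binary value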
This is the solution I ended up using in SQL Server 2012 (since all the other suggestions had the drawback of truncating the fractional part or some other drawback):
declare @float float = 1000000000.1234;
select format(@float, N'#.##############################');
output:
1000000000.1234
this has the further advantage (in my case) of making the thousands separator and localization easy:
select format(@float, N'#,##0.##########', 'de-DE');
output:
1.000.000.000,1234
SELECT LTRIM(STR(float_field, 25, 0))
is the best way, so you do not add .0000 or any extra digits at the end of the value.
Convert into an integer first and then into a string:
cast((convert(int,b.tax_id)) as varchar(20))
Useful topic, thanks.
If, like me, you want to remove leading zeros, you can use this:
DECLARE @MyFloat [float];
SET @MyFloat = 1000109360.050;
SELECT REPLACE(RTRIM(REPLACE(REPLACE(RTRIM(LTRIM(REPLACE(STR(@MyFloat, 38, 16), '0', ' '))), ' ', '0'),'.',' ')),' ',',')
float only has a max. precision of 15 digits. Digits after the 15th position are therefore random, and conversion to bigint (max. 19 digits) or decimal does not help you.
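A small sketch to see that limit in action (the exact trailing digits depend on the binary representation and may vary):
DECLARE @f float = 2.2000012095022E+26;              -- only ~15 significant digits are actually stored
SELECT CAST(@f AS decimal(38, 0)) AS ExpandedValue;  -- digits beyond the 15th are meaningless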
This can help without rounding
declare @test float(25)
declare @test1 decimal(10,5)
select @test = 34.0387597207
select @test
set @test1 = convert (decimal(10,5), @test)
select cast((@test1) as varchar(12))
Select LEFT(cast((@test1) as varchar(12)),LEN(cast((@test1) as varchar(12)))-1)
Try this one, should work:
cast((convert(bigint,b.tax_id)) as varchar(20))
select replace(myFloat, '', '')
from REPLACE() documentation:
Returns nvarchar if one of the input arguments is of the nvarchar data type; otherwise, REPLACE returns varchar.
Returns NULL if any one of the arguments is NULL.
tests:
null ==> [NULL]
1.11 ==> 1.11
1.10 ==> 1.1
1.00 ==> 1
0.00 ==> 0
-1.10 ==> -1.1
0.00001 ==> 1e-005
0.000011 ==> 1.1e-005
If you use a CLR function, you can convert the float to a string that looks just like the float, without all the extra 0's at the end.
CLR Function
[Microsoft.SqlServer.Server.SqlFunction(DataAccess = DataAccessKind.Read)]
[return: SqlFacet(MaxSize = 50)]
public static SqlString float_to_str(double Value, int TruncAfter)
{
    string rtn1 = Value.ToString("R");
    string rtn2 = Value.ToString("0." + new string('0', TruncAfter));
    if (rtn1.Length < rtn2.Length) { return rtn1; } else { return rtn2; }
}
Example
create table #temp (value float)
insert into #temp values (0.73), (0), (0.63921), (-0.70945), (0.28), (0.72000002861023), (3.7), (-0.01), (0.86), (0.55489), (0.439999997615814)
select value,
dbo.float_to_str(value, 18) as converted,
case when value = cast(dbo.float_to_str(value, 18) as float) then 1 else 0 end as same
from #temp
drop table #temp
Output
value converted same
---------------------- -------------------------- -----------
0.73 0.73 1
0 0 1
0.63921 0.63921 1
-0.70945 -0.70945 1
0.28 0.28 1
0.72000002861023 0.72000002861023 1
3.7 3.7 1
-0.01 -0.01 1
0.86 0.86 1
0.55489 0.55489 1
0.439999997615814 0.439999997615814 1
Caveat
All converted strings are truncated at 18 decimal places, and there are no trailing zeros. 18 digits of precision is not a problem for us, and 100% of our FP numbers (close to 100,000 values) look identical as strings to how they appear in the database as FP numbers.
Modified Axel's response a bit, as for certain cases it will produce undesirable results:
DECLARE @MyFloat [float];
SET @MyFloat = 1000109360.050;
SELECT REPLACE(RTRIM(REPLACE(REPLACE(RTRIM((REPLACE(CAST(CAST(@MyFloat AS DECIMAL(38,18)) AS VARCHAR(max)), '0', ' '))), ' ', '0'),'.',' ')),' ','.')
Select
cast(replace(convert(decimal(15,2),acs_daily_debit), '.', ',') as varchar(20))
from acs_balance_details
Based on molecular's answer:
DECLARE @F FLOAT = 1000000000.1234;
SELECT @F AS Original, CAST(FORMAT(@F, N'#.##############################') AS VARCHAR) AS Formatted;

SET @F = 823399066925.049
SELECT @F AS Original, CAST(@F AS VARCHAR) AS Formatted
UNION ALL SELECT @F AS Original, CONVERT(VARCHAR(128), @F, 128) AS Formatted
UNION ALL SELECT @F AS Original, CAST(FORMAT(@F, N'G') AS VARCHAR) AS Formatted;

SET @F = 0.502184537571209
SELECT @F AS Original, CAST(@F AS VARCHAR) AS Formatted
UNION ALL SELECT @F AS Original, CONVERT(VARCHAR(128), @F, 128) AS Formatted
UNION ALL SELECT @F AS Original, CAST(FORMAT(@F, N'G') AS VARCHAR) AS Formatted;
I just came across a similar situation and was surprised at the rounding issues for 'very large numbers' presented within SSMS v17.9.1 / SQL Server 2017.
I am not suggesting I have a solution; however, I have observed that FORMAT presents a number which appears correct. I cannot claim this reduces further rounding issues or that it is useful within a complicated mathematical function.
T-SQL code is supplied which should clearly demonstrate my observations while enabling others to test their code and ideas should the need arise.
WITH Units AS
(
SELECT 1.0 AS [RaisedPower] , 'Ten' As UnitDescription
UNION ALL
SELECT 2.0 AS [RaisedPower] , 'Hundred' As UnitDescription
UNION ALL
SELECT 3.0 AS [RaisedPower] , 'Thousand' As UnitDescription
UNION ALL
SELECT 6.0 AS [RaisedPower] , 'Million' As UnitDescription
UNION ALL
SELECT 9.0 AS [RaisedPower] , 'Billion' As UnitDescription
UNION ALL
SELECT 12.0 AS [RaisedPower] , 'Trillion' As UnitDescription
UNION ALL
SELECT 15.0 AS [RaisedPower] , 'Quadrillion' As UnitDescription
UNION ALL
SELECT 18.0 AS [RaisedPower] , 'Quintillion' As UnitDescription
UNION ALL
SELECT 21.0 AS [RaisedPower] , 'Sextillion' As UnitDescription
UNION ALL
SELECT 24.0 AS [RaisedPower] , 'Septillion' As UnitDescription
UNION ALL
SELECT 27.0 AS [RaisedPower] , 'Octillion' As UnitDescription
UNION ALL
SELECT 30.0 AS [RaisedPower] , 'Nonillion' As UnitDescription
UNION ALL
SELECT 33.0 AS [RaisedPower] , 'Decillion' As UnitDescription
)
SELECT UnitDescription
, POWER( CAST(10.0 AS FLOAT(53)) , [RaisedPower] ) AS ReturnsFloat
, CAST( POWER( CAST(10.0 AS FLOAT(53)) , [RaisedPower] ) AS NUMERIC (38,0) ) AS RoundingIssues
, STR( CAST( POWER( CAST(10.0 AS FLOAT(53)) , [RaisedPower] ) AS NUMERIC (38,0) ) , CAST([RaisedPower] AS INT) + 2, 0) AS LessRoundingIssues
, FORMAT( POWER( CAST(10.0 AS FLOAT(53)) , [RaisedPower] ) , '0') AS NicelyFormatted
FROM Units
ORDER BY [RaisedPower]