Windows Phone 8.1 database general questions - windows-phone-8

I'm new to WP8.1 development.
My application currently uses a SQL CE database in isostore - my DBConnectionString is "isostore:/SpaceQuiz.sdf" - and everything works properly.
But here I have some questions:
1) If my .sdf file is in isostore, will it be added to my XAP file after deployment?
2) I want to manually add some data to this database (about 1000 rows; example schema: ID | NAME | ISDONE | TIME).
Some cells in this database will be updated by my application (ISDONE | TIME). In the future I also want to add, say, another 1000 rows. What is the best approach to achieve this? How do I prevent the data in my .sdf file from being reset?

Related

Copy images from SQL image field to Azure Blob Storage as blob blocks

I have a SQL table where I store a lot of images, with something like the following structure:
CREATE TABLE [dbo].[DocumentsContent](
[BlobName] [varchar](33) NULL,
[Content] [varbinary](max) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
I need to copy all images from the database to Azure Blobs. The problem is that there are 3TB of images, so a script that reads them one by one (from SQL) and copies them to Azure is not the desired solution.
I've tried SSIS and Data Factory, but both create only one file with all the information, not one file for each row as I need (or at least not the way I configured them).
Is there any tool that can do this in a decent time? Or is there any way to use SSIS or Data Factory for this?
Thanks!
You can combine a few activities in Azure Data Factory to achieve it:
Use a Lookup activity to get the count of rows.
Use a ForEach activity with the row count as input; since each row is independent, you can keep parallel execution enabled.
Within the ForEach, use a Copy activity with a SQL source filtered to a single row number and a blob destination.
This generates an individual file for each row, and the copies run in parallel.
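As a rough sketch of the per-iteration source query (assuming the DocumentsContent table from the question and a hypothetical @rowNumber value that the ForEach iteration would substitute in - these names are placeholders, not Data Factory specifics), the Copy activity's SQL source could filter to a single row like this:
-- Hypothetical per-row source query for the Copy activity;
-- @rowNumber would be supplied from the current ForEach item.
;WITH numbered AS (
    SELECT [BlobName], [Content],
           ROW_NUMBER() OVER (ORDER BY [BlobName]) AS rn
    FROM [dbo].[DocumentsContent]
)
SELECT [BlobName], [Content]
FROM numbered
WHERE rn = @rowNumber;
In the Copy activity's blob sink, the file name could then be derived from BlobName so that each iteration writes its own file.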
Other Ideas:
Option 1)
SSIS Data Flow Task - "Export Column" to local disk
Use FileZilla Pro (or equivalent) to multithread the transfer to Azure
Option 2)
Mount Azure Blob Storage via NFS 3.0
SSIS Data Flow Task - "Export Column" to the mounted disk
Split the workload for parallel execution, for example:
WHERE CHECKSUM([BlobName]) % 5 = 0
WHERE CHECKSUM([BlobName]) % 5 = 1
WHERE CHECKSUM([BlobName]) % 5 = 2
WHERE CHECKSUM([BlobName]) % 5 = 3
WHERE CHECKSUM([BlobName]) % 5 = 4
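To illustrate the split (assuming the DocumentsContent table from the question), each of the five parallel data flows would use the same source query with a different remainder - a sketch for the first one:
-- Source query for one of the five parallel Export Column data flows;
-- the other four use remainders 1 through 4.
SELECT [BlobName], [Content]
FROM [dbo].[DocumentsContent]
WHERE CHECKSUM([BlobName]) % 5 = 0;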

Maintaining test users in Cucumber steps

In my tests I have to work with different types of users and environments. At the moment I am manually updating the users, since we don't have many features. However, we will be adding many new features, which will make it very difficult to update all the files manually. Most of these are needed in the Given step. Example:
Scenario:
Given I am signed in as "user1@example.com"
I would like to change this to:
Scenario:
Given I am signed in as "user1"
"user1" could stored in a csv file or in a db. Can either of these be done? If so which is the recommended method?
The CSV file would have something like:
user1,user1@example.com
user2,user2@example.com
user3,user3@example.com
A table in a db:
| id | user | email |
| 1 | user1 | user1@example.com |
| 2 | user2 | user2@example.com |
It seems using the DB might be easier to maintain, if it can be done. As always, your help is appreciated.
The usual way to abstract test case details in Cucumber is through the use of "Scenario Outlines":
https://github.com/cucumber/cucumber/wiki/Scenario-Outlines
Using a Scenario Outline is equivalent to storing test case data in a CSV file, but it has the advantage of keeping the test case info right there in the .feature file.
If you follow this convention, all parts of the test workflow can be edited in the same place - this actually makes maintenance of the test cases easier than if the test outline and the individual test cases are segregated into separate text files (or segregated between a .feature file and a database instance).
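As a rough sketch based on the step from the question (the scenario name and user list are just placeholders), a Scenario Outline would look something like this:
Scenario Outline: Signing in as <user>
  Given I am signed in as "<email>"

  Examples:
    | user  | email             |
    | user1 | user1@example.com |
    | user2 | user2@example.com |
Each row in the Examples table runs the scenario once, so adding a user is just adding a row to the .feature file.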

Creating / Appending a Flat File Destination based on date.

The Backstory:
I have a process that loads physician demographic data into our system. This data can come in at any time and at any interval between updates. The data is what we call "Term-by-Exclusion", meaning that the source file takes precedence, and any physician record in the db that is not in the source file is marked as "Termed" or Inactive.
The Problem:
I need to be able to output the data from the source, into a flat file destination, as a daily report to a companion COBOL system. The source data is loaded into an ETL.PhysicianLoad table prior to processing, and the ETL table is wiped before each new processing transaction, so retaining a full day's records is not possible as it stands now, without the output file.
Example: ProcessOutput_10152013.txt
The output file ideally needs to be a comprehensive record of the entire day's processing. Meaning I want to continuously append to that day's file until the end of the day, then email a notification stating the file is ready for pickup. Any data that comes in after the turn of the day should then be placed in a newly created file.
Output should look like this (no headers)
BatchID | LastName | FirstName | MiddleInitial | Date
0001 | Smith | John | A | 10/15/13
0001 | Smith | Sue | R | 10/15/13
0001 | Zeller | Frank | L | 10/15/13
0002 | Peters | Paula | D | 10/15/13
0002 | Rivers | Patrick | E | 10/15/13
0002 | Waters | Oliver | G | 10/15/13
What I am thinking:
I am thinking about using a CurrentDate variable that will hold the current date, and comparing it to an expression-based variable called FileName, which concatenates the current mmddyyyy onto "ProcessOutput_.txt". My thinking is that I should be able to locate a file with that name in the destination folder and, if it exists, write to it; otherwise I will have to create a new file. I can then point my Flat File Destination at the FileName variable via an expression.
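As a rough sketch of that FileName expression (the output folder C:\Output is just a placeholder), an SSIS expression along these lines would build the ProcessOutput_mmddyyyy.txt name from the current date:
"C:\\Output\\ProcessOutput_"
 + RIGHT("0" + (DT_WSTR, 2) MONTH(GETDATE()), 2)
 + RIGHT("0" + (DT_WSTR, 2) DAY(GETDATE()), 2)
 + (DT_WSTR, 4) YEAR(GETDATE())
 + ".txt"
Because the expression is evaluated from GETDATE(), the name rolls over automatically at midnight, which matches the requirement that data arriving after the turn of the day goes into a new file.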
Can anyone see a better way of doing this or any issues that may arise from this solution I am not seeing?
My thought process was in the right place, but flawed.
Here is how I solved the problem.
After trying to build my control/data flows using the logic in the original question, I discovered that I was working myself into a corner.
So that got me thinking again: how can I do this the easiest possible way?
First, do I have the correct variables defined? No.
CurrentDate - has to be there to define the date portion of the file name.
FileName - has to be present for obvious reasons.
So what did I miss?
FileExists (Type: boolean) - Something that will identify the existence of the file.
PlaceholderFile (Type: String) - Generic FileName Variable
Now what to do with it?
Add a VB Script Task to the control flow that sets the FileExists flag.
' Requires Imports System.IO at the top of the script for File.Exists.
' Check to see if ProspectivePhysician_<currentdate>.txt exists.
Dts.Variables("User::FileExists").Value = File.Exists(Dts.Variables("User::FileName").Value.ToString)
Now that we have the existence of the destination file defined, create the data flow from the source table. Check the FileExists variable in a Conditional Split, separating the data flow into two branches. Create two Flat File Destinations called "Existing" and "New", setting them both to the same flat file location for the time being.
If you attempt to run the package at this point, you will receive Validation Errors from one of the two destinations, as the first is holding ownership of the file and will not allow the second to validate the file.
How to fix this? Use expressions to swap the actual FileName value back and forth.
For the Existing Flat File Connection String Value, use the following Expression:
@[User::FileExists] == True ? @[User::FileName] : @[User::PlaceholderFile]
For the New Flat File Connection String value, use the following Expression:
@[User::FileExists] == True ? @[User::PlaceholderFile] : @[User::FileName]
Finally, right-click each of the Flat File Destination objects in the data flow and set the Overwrite property to True on the New destination and False on the Existing destination. This ensures that the append behavior is used on the existing file.

Classic ASP/MySQL - Parameter Issue

I have been trying to insert a huge text-editor string into my database. The application I'm developing allows my client to create and edit their website terms and conditions from the admin part of their website. So, as you can imagine, they are incredibly long. I have reached 18,000+ characters in length and have now received an error when trying to add another chunk of text.
The error I am receiving is this:
ADODB.Command error '800a0d5d'
Application uses a value of the wrong type for the current operation
Which points to this part of my application, specifically the Set newParameter line:
Const adVarChar = 200
Const adParamInput = 1
Set newParameter = cmdConn.CreateParameter("@policyBody", adVarChar, adParamInput, Len(policyBody), policyBody)
cmdConn.Parameters.Append newParameter
Now, this policy I am creating, which is currently 18,000+ characters long, is only half complete, if that. It could jump to 50,000 to 60,000! I tried using the adLongVarChar = 201 ADO type, but this still didn't fix it.
Am I doing the right thing for such a large entry? If I am, how can I fix this issue? Or, if I'm doing the wrong thing, what is the right one?
Try to avoid putting documents in your database if you can. Sometimes it's a reasonable compromise for serialised objects, markup snippets and such.
If you don't need to query the document with SQL, the only benefit is the all-in-one-place thing: back up your DB and you back up your documents as well, and you can use your DB connectivity exclusively.
That said, nothing is free; carting all that content around in your database costs you.
If you can:
Have a documents table with the user-facing name for the file, an internal name in your documents directory (so the file name is unique in the file system), and a path description if there could be more than one directory.
Then just upload and download the selected document as a file on a get or set of the related database entity.
You'll need to deal with deployment issues (the document directory must exist, and the account you run the MySQL daemon as must be able to see it), but most of the time the issues of keeping documents separate from the DB are much easier to deal with than the head-scratchers you are running into now.
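As a minimal sketch of the documents table described above (table and column names are placeholders, not a prescribed schema):
-- Hypothetical documents table: metadata lives in the DB,
-- the actual file lives on disk under the documents directory.
CREATE TABLE documents (
    id            INT AUTO_INCREMENT PRIMARY KEY,
    display_name  VARCHAR(255) NOT NULL,   -- user-facing file name
    internal_name VARCHAR(255) NOT NULL,   -- unique name within the documents directory
    path          VARCHAR(500) NOT NULL    -- which documents directory it lives in
);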

SSIS 2008 Excel Source - Problems loading alphanumeric columns

I am using SSIS 2008 to load alphanumeric columns from Excel.
I have one column which starts off as integers:
1
2
...
999
Then changes to alphanumeric:
A1
A2
A999
When I try to load using an Excel data source, Excel will always say that it is an integer, as it must only sample the top of the file.
(BTW - I know that I can re-order the file so that the alphas are at the top but I would rather not have to do this...)
Unfortunately, you can't seem to change its mind. This means that when it loads the data, it filters out the 'A', and the A999 record will update the 999 record. This is obviously not good...
I have tried changing the external and output columns to string under the advanced editing options, but I get errors and it won't run until I set the columns back to integer.
Does anyone have a solution?
SSIS uses Jet to access the Excel files. By default, Jet scans the first 8 rows of your data to determine the type of each column.
To fix it, you will need to edit the registry and increase the TypeGuessRows DWORD value under one of the following registry keys; it determines how many rows Jet scans in your data.
Which key to use depends on your version of Windows and your version of Excel, as follows:
For 32-bit Windows
Excel 97
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\3.5\Engines\Excel
Excel 2000 and later versions
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel
For 64-bit Windows
Excel 97
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Jet\3.5\Engines\Excel
Excel 2000 and later versions
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Jet\4.0\Engines\Excel
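For example (a sketch only; pick the key that matches your Windows/Excel combination), the value can be changed from an elevated command prompt with something like:
reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel" /v TypeGuessRows /t REG_DWORD /d 0 /f
Setting TypeGuessRows to 0 makes Jet scan up to the first 16,384 rows instead of the default 8.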
Then, specify IMEX=1 in the connection string as follows:
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=D:\abc.xls;
Extended Properties="EXCEL 8.0;HDR=YES;IMEX=1";
This information can be found in a more verbose form at: http://support.microsoft.com/kb/189897/
Wow, that looks like a huge pain. I came across a couple of examples where you could alter the connection string and sometimes get better results, but they don't seem to work for everyone.
Scripting an automatic conversion to a .csv file would be a good workaround; there are a number of suggestions in this thread:
converting an Excel (xls) file to a comma separated (csv) file without the GUI
including some code in C# that you may be able to easily plop in:
http://jarloo.com/code/api-code/excel-to-csv/
Here is a similar question where altering the connection string is discussed, if you want to look into it for yourself: SSIS Excel Import Forcing Incorrect Column Type
Good luck!