Cannot get data out of a WCF Data Service - entity-framework-4.1

I set up a WCF Data Service at http://localhost:65432/YeagerTechWcfService.svc, and when I run it I get the expected output below:
<?xml version="1.0" encoding="UTF-8" standalone="true"?>
<service xmlns="http://www.w3.org/2007/app" xmlns:app="http://www.w3.org/2007/app" xmlns:atom="http://www.w3.org/2005/Atom" xml:base="http://localhost:65432/YeagerTechWcfService.svc/">
<workspace>
<atom:title>Default</atom:title>
<collection href="Categories">
<atom:title>Categories</atom:title>
</collection>
<collection href="Customers">
<atom:title>Customers</atom:title>
</collection>
<collection href="Priorities">
<atom:title>Priorities</atom:title>
</collection>
<collection href="Projects">
<atom:title>Projects</atom:title>
</collection>
<collection href="Status">
<atom:title>Status</atom:title>
</collection>
<collection href="TimeTrackings">
<atom:title>TimeTrackings</atom:title>
</collection>
</workspace>
</service>
However, after executing the method below, I get a JavaScript runtime error in the script httpErrorPagesScripts.js when testing it out via the browser:
var bElement = document.createElement("A");
bElement.innerText = L_GOBACK_TEXT ;
bElement.href = "javascript:history.back();";
goBackContainer.appendChild(bElement);
The method that executes is shown below, after I enter the following query:
http://localhost:65432/YeagerTechWcfService.svc/Customers
public QueryOperationResponse<Customer> GetCustomers()
{
    YeagerTechEntities DbContext = new YeagerTechEntities();
    YeagerTechModel.YeagerTechEntities db = new YeagerTechModel.YeagerTechEntities();

    DataServiceQuery<Customer> query = (DataServiceQuery<Customer>)
        from customer in db.Customers
        where customer.CustomerID > 0
        select customer;

    QueryOperationResponse<Customer> items = (QueryOperationResponse<Customer>)query.Execute();

    db.Dispose();
    return items;
}
Even if I set a breakpoint in the above method, it doesn't stop there. I just know that after I submit the query in the address bar, it goes into this method, then pops out and throws that JS error. I'm sure that I'm missing something... Can someone help?
There is only 1 record coming back from the database, so the number of rows fetched is not an issue...
Note that this same type of query is successfully executed against an EF ORM model with a regular WCF Application Service. It's just that when I try to apply the same query using a WCF Data Service, I'm getting the error.


BIML - 'AstTableNode' does not contain a definition for 'GetDropAndCreateDdl'

I am working on a BIML project to generate SSIS packages. I have a separate static class for utility methods.
I am attempting to call GetDropAndCreateDdl() to get the DDL from the source to dynamically create a table in the destination. This should work in theory as it is referenced in multiple posts: here and here as samples.
When generating the Biml and running the sample code below, I receive this error: 'AstTableNode' does not contain a definition for 'GetDropAndCreateDdl' and no accessible extension method 'GetDropAndCreateDdl' accepting a first argument of type 'AstTableNode' could be found
public static string GetDropAndCreateDDL(string connectionStringSource, string sourceTableName)
{
    var sourceConnection = SchemaManager.CreateConnectionNode("Source", connectionStringSource);
    var sourceImportResults = sourceConnection.ImportTableNodes(Nomenclature.Schema(sourceTableName), Nomenclature.Table(sourceTableName));
    return sourceImportResults.TableNodes.ToList()[0].GetDropAndCreateDdl();
}
(Let's ignore the possibility of getting no table back or multiples for the sake of simplicity)
Looking at the Varigence documentation, I don't see any reference to this method. This makes me think that there is a utility library that I am missing in my includes.
using Varigence.Biml.Extensions;
using Varigence.Biml.CoreLowerer.SchemaManagement;
What say you?
Joe
GetDropAndCreateDdl is an extension method in Varigence.Biml.Extensions.SchemaManagement.TableExtensions
ImportTableNodes returns an instance of Varigence.Biml.CoreLowerer.SchemaManagement.ImportResults, and its TableNodes property is an IEnumerable of AstTableNode.
So, nothing weird there (like the table nodes in the import results being a different type)
I am not running into an issue if I have the code in-line with BimlExpress.
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
<#
string connectionStringSource = @"Provider=SQLNCLI11;Data Source=localhost\dev2017;Integrated Security=SSPI;Initial Catalog=msdb";
var sourceConnection = SchemaManager.CreateConnectionNode("Source", connectionStringSource);
List<string> schemaList = new List<string>(){"dbo"};
var sourceImportResults = sourceConnection.ImportTableNodes("dbo", "");
WriteLine("<!-- {0} -->", sourceImportResults.TableNodes.Count());
//var sourceImportResults = sourceConnection.ImportTableNodes(schemaList,null);
var x = sourceImportResults.TableNodes.ToList()[0];
var ddl = x.GetDropAndCreateDdl();
WriteLine("<!-- {0} -->", sourceImportResults.TableNodes.FirstOrDefault().GetDropAndCreateDdl());
#>
</Biml>
The above code results in the following expanded Biml
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
<!-- 221 -->
<!-- IF EXISTS (SELECT * from sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[autoadmin_backup_configuration_summary]') AND type IN (N'V'))
DROP VIEW [dbo].[autoadmin_backup_configuration_summary]
GO
CREATE VIEW [dbo].[autoadmin_backup_configuration_summary] AS
SELECT
ManagedBackupVersion,
IsAlwaysOn,
IsDropped,
IsEnabled,
RetentionPeriod,
EncryptionAlgorithm,
SchedulingOption,
DayOfWeek,
COUNT(*) AS DatabaseCount
FROM autoadmin_backup_configurations
GROUP BY
ManagedBackupVersion,
IsAlwaysOn,
IsDropped,
IsEnabled,
RetentionPeriod,
EncryptionAlgorithm,
SchedulingOption,
DayOfWeek
GO
-->
</Biml>

LINQ to XML - only return variables and variable values

Good day
I am calling an SMS client (using C# API v2 [REST]) that returns XML results as follows:
<apiresult>
<data>
<credits>100</credits>
</data>
<callresult>
<result>True</result>
<error />
</callresult>
</apiresult>
By using LINQ to XML, I would like to return the variables to an object, i.e. credits: 100, result: true, and return it as JSON.
I have tried something like the following:
//Remove invalid chars
var legalchars = RemoveIllegalChars(Results);
XDocument po = XDocument.Parse(legalchars);
var list1 = po.Root.Descendants("apiresult");
without obtaining the desired result. Any help would be greatly appreciated.
You will need the Newtonsoft.Json package, and I am using XElement:
// Requires the System.Xml.Linq (XElement) and Newtonsoft.Json.Linq (JObject) namespaces.
XElement root = XElement.Parse(@"
<apiresult>
  <data>
    <credits>100</credits>
  </data>
  <callresult>
    <result>True</result>
    <error />
  </callresult>
</apiresult>
");
var credits = root.Element("data").Element("credits").Value;
var result = root.Element("callresult").Element("result").Value;
JObject jsonObj = JObject.FromObject(
    new { credits = credits, result = result }
);
Console.WriteLine(jsonObj.ToString());
Of course, instead of writing it to the console, return jsonObj.ToString().

PrototypeJS breaking SpahQL results

I'm having a hard time trying to use SpahQL to query JSON alongside PrototypeJS.
The weird behavior is that when I create a new SpahQL instance, the newly created object comes with several extra functions attached to it; they are not there by default, but are attached by PrototypeJS.
Whenever I call select or any other SpahQL method, those functions are interpreted as result objects and then added to the result set.
A simple example to explain my point:
<script src="path/to/spahql-min.js" type="text/javascript"></script>
<script src="path/to/data.json" type="text/javascript"></script>
<script type="text/javascript">
  var db = SpahQL.db(data);
  sample = db.select("/*/*");
  console.log(sample);
</script>
Suppose that data.json contains 50 entries; the console.log will then show an array of 50 objects.
But if I include PrototypeJS in the snippet, the console.log will output an array of 1558 objects:
<script src="path/to/prototype.js" type="text/javascript"></script>
<script src="path/to/spahql-min.js" type="text/javascript"></script>
<script src="path/to/data.json" type="text/javascript"></script>
<script type="text/javascript">
  var db = SpahQL.db(data);
  sample = db.select("/*/*");
  console.log(sample);
</script>

How to do Pagination with mybatis?

I am currently working on an e-commerce application where I have to show a list of available products using search functionality.
As with every search, I have to implement pagination here.
I am using MyBatis as my ORM tool and MySQL as the underlying database.
Googling around, I found the following ways to accomplish this task:
Client-side paging: Here I would have to fetch all the results matching the search criteria from the database in one stroke and handle the pagination in my code (possibly front-end code).
Server-side paging: With MySQL I can use LIMIT and the offset of the result set to construct a query like:
SELECT * FROM sampletable WHERE condition1>1 AND condition2>2 LIMIT 0,20
Here, I have to pass the offset and limit count every time the user selects a new page while navigating the search results.
Can anyone tell me which is the better way to implement paging?
Does MyBatis support a better way to implement paging than just relying on the above SQL queries (like the Hibernate Criteria API)?
Any input is highly appreciated.
Thanks.
I myself use your second option, with LIMIT in the SQL query.
But there is also a range of methods that support pagination using the RowBounds class.
This is well described in the MyBatis documentation here.
Pay attention to the correct result set type to use.
If you're using Mappers (much easier than using raw SqlSessions), the easiest way to apply a limit is by adding a RowBounds parameter to the mapping function's argument list, e.g:
// without limit
List<Foo> selectFooByExample(FooExample ex);
// with limit
List<Foo> selectFooByExample(FooExample ex, RowBounds rb);
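For reference, here is a minimal usage sketch built on the interface above; the offset and page-size values and the sqlSession variable are illustrative assumptions, not part of the original answer:
// Assumes an open SqlSession and the FooMapper/FooExample types shown above.
// RowBounds lives in org.apache.ibatis.session.RowBounds.
int offset = 0;      // first row to return
int pageSize = 20;   // maximum number of rows to return
RowBounds rb = new RowBounds(offset, pageSize);
FooMapper mapper = sqlSession.getMapper(FooMapper.class);
List<Foo> page = mapper.selectFooByExample(ex, rb);  // ex is a FooExample filter
Note that MyBatis applies the RowBounds after executing the query, so the driver may still fetch and skip the earlier rows; see the caveat below.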
This is mentioned almost as an afterthought in the link Volodymyr posted, under the Using Mappers heading, and could use some more emphasis:
You can also pass a RowBounds instance to the method to limit query results.
Note that support for RowBounds may vary by database. The MyBatis documentation implies that MyBatis will take care of using the appropriate query. However, for Oracle at least, this gets handled by very inefficient repeated calls to the database.
Pagination has two types, physical and logical.
Logical pagination means retrieving all the data first and then paging it in memory; physical pagination means a database-level subset select.
The default MyBatis pagination is logical, so when you select from a massive table, e.g. 100 GB of blobs, the RowBounds method will still be very slow.
The solution is to use physical pagination. You can do it your own way through a MyBatis interceptor, or use plugins pre-made by someone else; a minimal example of the physical approach is sketched below.
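To make the physical approach concrete, here is a minimal sketch that skips the interceptor/plugin route and simply pushes the offset and limit into the SQL, as in the question's LIMIT query. The mapper name, the annotation-based mapping, and the SampleRow result type are illustrative assumptions, not from the original posts:
import java.util.List;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

// Hypothetical mapper interface: the LIMIT is applied by MySQL itself,
// so only one page of rows ever leaves the database (physical pagination).
// SampleRow is an assumed result POJO mapped to the table's columns.
public interface SampleTableMapper {
    @Select("SELECT * FROM sampletable "
          + "WHERE condition1 > 1 AND condition2 > 2 "
          + "LIMIT #{offset}, #{limit}")
    List<SampleRow> selectPage(@Param("offset") int offset,
                               @Param("limit") int limit);
}
A caller would then pass, say, selectPage(page * pageSize, pageSize) for each page. Interceptor-based plugins (such as PageHelper) essentially rewrite the SQL to add this kind of LIMIT clause for you.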
If you are using Spring MyBatis, you can achieve pagination manually using 2 MyBatis queries and the useful Spring Page and Pageable interfaces.
You create a higher level DAO interface e.g. UploadDao
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
public interface UploadDao {
Page<Upload> search(UploadSearch uploadSearch, Pageable pageable);
}
... where Upload maps to an upload table and UploadSearch is a parameter POJO e.g.
@Data // lombok
public class UploadSearch {
    private Long userId;
    private Long projectId;
    ...
}
An implementation of UploadDao (which injects a MyBatis UploadMapper mapper) is as follows:
public class DefaultUploadDao implements UploadDao {

    @Autowired
    private UploadMapper uploadMapper;

    @Override
    public Page<Upload> search(UploadSearch uploadSearch, Pageable pageable) {
        List<Upload> content = uploadMapper.searchUploads(uploadSearch, pageable);
        Long total = uploadMapper.countUploads(uploadSearch);
        return new PageImpl<>(content, pageable, total); // PageImpl is from org.springframework.data.domain
    }
}
The DAO implementation calls 2 methods of UploadMapper. These are:
UploadMapper.searchUploads - returns a page of results based on search param (UploadSearch) and Pageable param (contains offset / limit etc).
UploadMapper.countUploads - returns total count, again based on search param UploadSearch. NOTE - Pageable param is not required here as we're simply determining the total rows the search parameter filters to and don't care about page number / offset etc.
The injected UploadMapper interface looks like ...
@Mapper
public interface UploadMapper {

    List<Upload> searchUploads(
            @Param("search") UploadSearch search,
            @Param("pageable") Pageable pageable);

    long countUploads(
            @Param("search") UploadSearch search);
}
... and the mapper XML file containing the dynamic SQL e.g. upload_mapper.xml contains ...
<mapper namespace="com.yourproduct.UploadMapper">
<select id="searchUploads" resultType="com.yourproduct.Upload">
select u.*
from upload u
<include refid="queryAndCountWhereStatement"/>
<if test="pageable.sort.sorted">
<trim prefix="order by">
<foreach item="order" index="i" collection="pageable.sort" separator=", ">
<if test="order.property == 'id'">id ${order.direction}</if>
<if test="order.property == 'projectId'">project_id ${order.direction}</if>
</foreach>
</trim>
</if>
<if test="pageable.paged">
limit #{pageable.offset}, #{pageable.pageSize}
</if>
<!-- NOTE: PostgreSQL has a slightly different syntax to MySQL i.e.
limit #{pageable.pageSize} offset #{pageable.offset}
-->
</select>
<select id="countUploads" resultType="long">
select count(1)
from upload u
<include refid="queryAndCountWhereStatement"/>
</select>
<sql id="queryAndCountWhereStatement">
<where>
<if test="search != null">
<if test="search.userId != null"> and u.user_id = #{search.userId}</if>
<if test="search.productId != null"> and u.product_id = #{search.productId}</if>
...
</if>
</where>
</sql>
</mapper>
NOTE - <sql> blocks (along with <include refid=" ... " >) are very useful here to ensure your count and select queries are aligned. Also, when sorting we are using conditions e.g. <if test="order.property == 'projectId'">project_id ${order.direction}</if> to map to a column (and stop SQL injection). The ${order.direction} is safe as the Spring Direction class is an enum.
The UploadDao could then be injected and used from e.g. a Spring controller:
@RestController("/upload")
public class UploadController {

    @Autowired
    private UploadDao uploadDao; // Likely you'll have a service instead (which injects the DAO) - here for brevity

    @GetMapping
    public Page<Upload> search(@RequestBody UploadSearch search, Pageable pageable) {
        return uploadDao.search(search, pageable);
    }
}
If you are using the MyBatis Generator, you may want to try the Row Bounds plugin from the official site: org.mybatis.generator.plugins.RowBoundsPlugin. This plugin will add a new version of the
selectByExample method that accepts a RowBounds parameter.
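As a rough sketch of the result (the exact name of the generated method may vary by generator version; the method and type names below are assumptions, not taken from the plugin documentation):
import java.util.List;
import org.apache.ibatis.session.RowBounds;

// Hypothetical generator output for a table "foo":
public interface FooMapper {
    List<Foo> selectByExample(FooExample example);
    // Added by RowBoundsPlugin: same query, but limited by the RowBounds argument
    List<Foo> selectByExampleWithRowbounds(FooExample example, RowBounds rowBounds);
}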

Unload images from MySQL to disk

I have images stored in MySQL as blobs (I know it's wrong), and there are many of them. Is there any fast way to drop them all on disk, like SELECT ... INTO OUTFILE, but to many files instead of one? Or is the only way to write a script that will iterate over the rows and save the images?
Since you want them to be saved into different files on the disk you'll have to go for a script.
#!/usr/bin/perl
# Note: it is my habit to name a Query Result $qR.
use strict;
use DBI;

my $dbh = DBI->connect(YOUR_INFO_HERE);
my $i = 0;
my $q = $dbh->prepare('select image from images');
$q->execute;
while (my $qR = $q->fetchrow_arrayref) {
    open(FILE, '>', "$i.jpg");
    binmode FILE;         # blobs are binary data
    print FILE $qR->[0];
    close FILE;
    $i++;
}
I had a similar requirement, and I found that in my case Java + Hibernate (possibly a similar approach works in other Hibernate variations, but I have not tried it) got me there quite quickly.
I set up a mapping like this:
<hibernate-mapping>
    <class name="com.package.table" table="table">
        <id column="pk" name="pk" type="int">
        </id>
        <property name="blobfield" type="blob"/>
    </class>
</hibernate-mapping>
A Java bean to carry the data, something like:
package com.package;
import java.sql.Blob;
...
public class table {
    ...
    public Blob getBlobfield() {
    ...
And a loop something like this:
...
tx = session.beginTransaction();
Criteria crit = session.createCriteria(table.class);
crit.setMaxResults(50); // Alter this to suit...
List<table> rows = crit.list();
for (table r : rows) {
    ExtractBlob(r.getId(), r.getBlobfield());
}
And something ("ExtractBlob" is what I'm calling it) to extract the blob (using the PK to generate a filename), something like this:
...
FileOutputStream fout = new FileOutputStream(<...base output file on PK for example...>);
BufferedOutputStream bos = new BufferedOutputStream(fout);
InputStream is = blob.getBinaryStream();
byte[] b = new byte[8192];
int n;
while ((n = is.read(b)) > 0) {
    bos.write(b, 0, n); // only write the bytes actually read
}
is.close();
bos.close();
...
I can post a more complete example if it looks like it might be useful, but I would have to extract the code from a bigger project; otherwise I would have just posted it straight up.