Storing Apache Hadoop Data Output to MySQL Database

I need to store the output of my map-reduce program in a database; is there any way to do that?
If so, is it possible to store the output into multiple columns and tables based on requirements?
Please suggest some solutions.
Thank you.

A great example is shown on this blog; I tried it and it works really well. I quote the most important parts of the code below.
First, you must create a class representing the data you would like to store. The class must implement the DBWritable interface (and Writable, so Hadoop can still serialize it):
public class DBOutputWritable implements Writable, DBWritable {

    private String name;
    private int count;

    public DBOutputWritable(String name, int count) {
        this.name = name;
        this.count = count;
    }

    public void readFields(DataInput in) throws IOException { }

    public void readFields(ResultSet rs) throws SQLException {
        name = rs.getString(1);
        count = rs.getInt(2);
    }

    public void write(DataOutput out) throws IOException { }

    public void write(PreparedStatement ps) throws SQLException {
        ps.setString(1, name);
        ps.setInt(2, count);
    }
}
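The parameter order in write(PreparedStatement) must line up with the column array you later pass to DBOutputFormat.setOutput. A matching MySQL table could look like this (an assumed schema; adjust the types to your data):
CREATE TABLE output (
    name  VARCHAR(255),
    count INT
);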
Create objects of the previously defined class in your Reducer:
public class Reduce extends Reducer<Text, IntWritable, DBOutputWritable, NullWritable> {

    protected void reduce(Text key, Iterable<IntWritable> values, Context ctx) {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        try {
            ctx.write(new DBOutputWritable(key.toString(), sum), NullWritable.get());
        } catch (IOException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Finally, you must configure the connection to your DB (do not forget to add your DB connector to the classpath) and register your mapper's and reducer's input/output data types:
public class Main {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        DBConfiguration.configureDB(conf,
                "com.mysql.jdbc.Driver",              // driver class
                "jdbc:mysql://localhost:3306/testDb", // db url
                "user",                               // username
                "password");                          // password

        Job job = new Job(conf);
        job.setJarByClass(Main.class);
        job.setMapperClass(Map.class);                 // your mapper - a sketch follows below
        job.setReducerClass(Reduce.class);

        job.setMapOutputKeyClass(Text.class);          // your mapper's KEYOUT
        job.setMapOutputValueClass(IntWritable.class); // your mapper's VALUEOUT
        job.setOutputKeyClass(DBOutputWritable.class); // reducer's KEYOUT
        job.setOutputValueClass(NullWritable.class);   // reducer's VALUEOUT

        job.setInputFormatClass(...);
        job.setOutputFormatClass(DBOutputFormat.class);

        DBInputFormat.setInput(...);
        DBOutputFormat.setOutput(
                job,
                "output",                        // output table name
                new String[] { "name", "count" } // table columns
        );

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
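For completeness, a minimal word-count style mapper that matches the Text/IntWritable map output types configured above could look like this (my own sketch, not part of the quoted blog code):
public class Map extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);

    protected void map(LongWritable key, Text value, Context ctx)
            throws IOException, InterruptedException {
        // emit each token with a count of 1; Reduce sums these per key
        for (String token : value.toString().split("\\s+")) {
            ctx.write(new Text(token), ONE);
        }
    }
}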

Related

How to test KeyedBroadcastProcessFunction in Flink?

I am new to Flink and I am trying to write JUnit test cases to test a KeyedBroadcastProcessFunction. Below is my code. I am currently calling the getDataStreamOutput method in a TestUtils class and passing the input data and pattern rules to it; once the input data is evaluated against the list of pattern rules and the input data satisfies a condition, I get the signal, call the sink function, and return the output data as a string from getDataStreamOutput.
@Test
public void testCompareInputAndOutputDataForInputSignal() throws Exception {
    Assertions.assertEquals(sampleInputSignal,
            TestUtils.getDataStreamOutput(
                    inputSignal,
                    patternRules));
}

public static String getDataStreamOutput(JSONObject input, Map<String, String> patternRules) throws Exception {
    // env is a StreamExecutionEnvironment field shared by the TestUtils class
    env.setParallelism(1);

    DataStream<JSONObject> inputSignal = env.fromElements(input);

    DataStream<Map<String, String>> rawPatternStream =
            env.fromElements(patternRules);

    // generate key/value pairs of patterns, where the key is the pattern name
    // and the value is the pattern condition
    DataStream<Tuple2<String, Map<String, String>>> patternRuleStream =
            rawPatternStream.flatMap(new FlatMapFunction<Map<String, String>,
                    Tuple2<String, Map<String, String>>>() {
                @Override
                public void flatMap(Map<String, String> patternRules,
                        Collector<Tuple2<String, Map<String, String>>> out) throws Exception {
                    for (Map.Entry<String, String> stringEntry : patternRules.entrySet()) {
                        JSONObject jsonObject = new JSONObject(stringEntry.getValue());
                        Map<String, String> map = new HashMap<>();
                        for (String key : jsonObject.keySet()) {
                            String value = jsonObject.get(key).toString();
                            map.put(key, value);
                        }
                        out.collect(new Tuple2<>(stringEntry.getKey(), map));
                    }
                }
            });

    // patternRuleDescriptor is the MapStateDescriptor for the broadcast state,
    // defined elsewhere in the test class
    BroadcastStream<Tuple2<String, Map<String, String>>> patternRuleBroadcast =
            patternRuleStream.broadcast(patternRuleDescriptor);

    DataStream<Tuple2<String, JSONObject>> validSignal = inputSignal.map(new MapFunction<JSONObject,
            Tuple2<String, JSONObject>>() {
        @Override
        public Tuple2<String, JSONObject> map(JSONObject inputSignal) throws Exception {
            // assuming the signal JSON carries its source in a "source" field
            String source = inputSignal.getString("source");
            return new Tuple2<>(source, inputSignal);
        }
    }).keyBy(0).connect(patternRuleBroadcast).process(new MyKeyedBroadCastProcessFunction());

    validSignal.map(new MapFunction<Tuple2<String, JSONObject>,
            JSONObject>() {
        @Override
        public JSONObject map(Tuple2<String, JSONObject> inputSignal) throws Exception {
            return inputSignal.f1;
        }
    }).addSink(new getDataStreamOutput());

    env.execute("TestFlink");

    return getDataStreamOutput.dataStreamOutput;
}

@SuppressWarnings("serial")
public static final class getDataStreamOutput implements SinkFunction<JSONObject> {
    public static String dataStreamOutput;

    public void invoke(JSONObject inputSignal) throws Exception {
        dataStreamOutput = inputSignal.toString();
    }
}
I need to test different inputs against the same broadcast rules, but each time I call this method it runs the whole process from the beginning: take the input signal, broadcast the data, and so on. Is there a way I can broadcast once and keep sending inputs to the method? I explored using a CoFlatMapFunction, something like below, to combine the data streams and keep sending the input rules while the method is running, but for this one of the data streams has to keep getting data from a Kafka topic, and that again overburdens the method with loading the Kafka utils and server.
DataStream<JSONObject> inputSignalFromKafka = env.addSource(inputSignalKafka);

DataStream<org.json.JSONObject> inputSignalFromMethod = env.fromElements(inputSignal);

DataStream<JSONObject> inputSignal = inputSignalFromMethod.connect(inputSignalFromKafka)
        .flatMap(new SignalCoFlatMapper());

public static class SignalCoFlatMapper
        implements CoFlatMapFunction<JSONObject, JSONObject, JSONObject> {

    @Override
    public void flatMap1(JSONObject inputValue, Collector<JSONObject> out) throws Exception {
        out.collect(inputValue);
    }

    @Override
    public void flatMap2(JSONObject kafkaValue, Collector<JSONObject> out) throws Exception {
        out.collect(kafkaValue);
    }
}
I found a link on Stack Overflow, How to unit test BroadcastProcessFunction in flink when processElement depends on broadcasted data, but it confused me a lot.
Is there any way I can broadcast only once, in a Before method of my test cases, and keep sending different kinds of data to my broadcast function?
You can use a KeyedTwoInputStreamOperatorTestHarness to achieve this. For example, let's assume you have the following KeyedBroadcastProcessFunction, where you define some business logic for both DataStream channels:
public class SimpleKeyedBroadcastProcessFunction extends KeyedBroadcastProcessFunction<String, String, String, String> {

    @Override
    public void processElement(String inputEntry,
            ReadOnlyContext readOnlyContext, Collector<String> collector) throws Exception {
        // business logic for how you want to process your data stream records
    }

    @Override
    public void processBroadcastElement(String broadcastInput, Context
            context, Collector<String> collector) throws Exception {
        // process input from your broadcast channel
    }
}
Let's now assume your process function is stateful and makes modifications to the Flink internal state; in that case you have to create a test harness inside your test class to keep track of the state during testing.
I would then create some unit tests using the following approach:
public class SimpleKeyedBroadcastProcessFunctionTest {

    private SimpleKeyedBroadcastProcessFunction processFunction;
    private KeyedTwoInputStreamOperatorTestHarness<String, String, String, String> testHarness;

    @Before
    public void setup() throws Exception {
        processFunction = new SimpleKeyedBroadcastProcessFunction();
        testHarness = new KeyedTwoInputStreamOperatorTestHarness<>(
                new CoBroadcastWithKeyedOperator<>(processFunction, ImmutableList.of(BROADCAST_MAP_STATE_DESCRIPTOR)),
                (KeySelector<String, String>) string -> string,
                (KeySelector<String, String>) string -> string,
                TypeInformation.of(String.class));
        testHarness.setup();
        testHarness.open();
    }

    @After
    public void cleanup() throws Exception {
        testHarness.close();
    }

    @Test
    public void testProcessRegularInput() throws Exception {
        // processElement1 sends elements into your regular stream,
        // the second param is the event time of the record
        testHarness.processElement1(new StreamRecord<>("Hello", 0));
        // access records collected during processElement
        List<StreamRecord<? extends String>> records = testHarness.extractOutputStreamRecords();
        assertEquals("Hello", records.get(0).getValue());
    }

    @Test
    public void testProcessBroadcastInput() throws Exception {
        // processElement2 sends elements into your broadcast stream,
        // the second param is the event time of the record
        testHarness.processElement2(new StreamRecord<>("Hello from Broadcast", 0));
        // access records collected during processElement
        List<StreamRecord<? extends String>> records = testHarness.extractOutputStreamRecords();
        assertEquals("Hello from Broadcast", records.get(0).getValue());
    }
}
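BROADCAST_MAP_STATE_DESCRIPTOR is not shown above; it is the MapStateDescriptor for the broadcast state your function works with (from org.apache.flink.api.common.state), for example, with assumed String/String key and value types:
public static final MapStateDescriptor<String, String> BROADCAST_MAP_STATE_DESCRIPTOR =
        new MapStateDescriptor<>("broadcast-state",
                BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);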

How to save Apache Spark schema output in MySQL database

Can anyone please tell me if there is any way in Apache Spark to store a JavaRDD in a MySQL database? I am taking input from two CSV files and, after doing join operations on their contents, I need to save the output (the output JavaRDD) in a MySQL database. I am already able to save the output successfully to HDFS, but I cannot find any information about an Apache Spark-to-MySQL connection. Below I am posting my Spark SQL code; it might serve as a reference for those looking for a Spark SQL example.
package attempt1;

import java.io.Serializable;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.api.java.JavaSQLContext;
import org.apache.spark.sql.api.java.JavaSchemaRDD;
import org.apache.spark.sql.api.java.Row;

public class Spark_Mysql {

    @SuppressWarnings("serial")
    public static class CompleteSample implements Serializable {
        private String ASSETNUM;
        private String ASSETTAG;
        private String CALNUM;

        public String getASSETNUM() {
            return ASSETNUM;
        }
        public void setASSETNUM(String aSSETNUM) {
            ASSETNUM = aSSETNUM;
        }
        public String getASSETTAG() {
            return ASSETTAG;
        }
        public void setASSETTAG(String aSSETTAG) {
            ASSETTAG = aSSETTAG;
        }
        public String getCALNUM() {
            return CALNUM;
        }
        public void setCALNUM(String cALNUM) {
            CALNUM = cALNUM;
        }
    }

    @SuppressWarnings("serial")
    public static class ExtendedSample implements Serializable {
        private String ASSETNUM;
        private String CHANGEBY;
        private String CHANGEDATE;

        public String getASSETNUM() {
            return ASSETNUM;
        }
        public void setASSETNUM(String aSSETNUM) {
            ASSETNUM = aSSETNUM;
        }
        public String getCHANGEBY() {
            return CHANGEBY;
        }
        public void setCHANGEBY(String cHANGEBY) {
            CHANGEBY = cHANGEBY;
        }
        public String getCHANGEDATE() {
            return CHANGEDATE;
        }
        public void setCHANGEDATE(String cHANGEDATE) {
            CHANGEDATE = cHANGEDATE;
        }
    }

    public static void main(String[] args) throws Exception {
        JavaSparkContext ctx = new JavaSparkContext("local[2]", "JavaSparkSQL");
        JavaSQLContext sqlCtx = new JavaSQLContext(ctx);

        JavaRDD<CompleteSample> cs = ctx.textFile("C:/Users/cyg_server/Documents/bigDataExample/AssetsImportCompleteSample.csv").map(
                new Function<String, CompleteSample>() {
                    public CompleteSample call(String line) throws Exception {
                        String[] parts = line.split(",");
                        CompleteSample cs = new CompleteSample();
                        cs.setASSETNUM(parts[0]);
                        cs.setASSETTAG(parts[1]);
                        cs.setCALNUM(parts[2]);
                        return cs;
                    }
                });

        JavaRDD<ExtendedSample> es = ctx.textFile("C:/Users/cyg_server/Documents/bigDataExample/AssetsImportExtendedSample.csv").map(
                new Function<String, ExtendedSample>() {
                    public ExtendedSample call(String line) throws Exception {
                        String[] parts = line.split(",");
                        ExtendedSample es = new ExtendedSample();
                        es.setASSETNUM(parts[0]);
                        es.setCHANGEBY(parts[1]);
                        es.setCHANGEDATE(parts[2]);
                        return es;
                    }
                });

        JavaSchemaRDD complete = sqlCtx.applySchema(cs, CompleteSample.class);
        complete.registerAsTable("cs");

        JavaSchemaRDD extended = sqlCtx.applySchema(es, ExtendedSample.class);
        extended.registerAsTable("es");

        JavaSchemaRDD fs = sqlCtx.sql("SELECT cs.ASSETTAG, cs.CALNUM, es.CHANGEBY, es.CHANGEDATE FROM cs INNER JOIN es ON cs.ASSETNUM=es.ASSETNUM");

        JavaRDD<String> result = fs.map(new Function<Row, String>() {
            public String call(Row row) {
                return row.getString(0);
            }
        });

        // instead of HDFS I need to save it in a MySQL database,
        // but I am not able to find any Spark-MySQL connection
        result.saveAsTextFile("hdfs://path/to/hdfs/dir-name");
    }
}
Here at the end I am saving the result successfully in HDFS, but now I want to save it into a MySQL database. Kindly help me out. Thanks.
There are two approaches you can use for writing your results back to the database. One is to use something like DBOutputFormat and configure that, and the other is to use foreachPartition on the RDD you want to save and pass in a function which creates a connection to MySQL and writes the result back.
Here is an example using DBOutputFormat.
Create a class that represents your table row -
public class TableRow implements DBWritable {

    public String column1;
    public String column2;

    @Override
    public void write(PreparedStatement statement) throws SQLException {
        statement.setString(1, column1);
        statement.setString(2, column2);
    }

    @Override
    public void readFields(ResultSet resultSet) throws SQLException {
        throw new RuntimeException("readFields not implemented");
    }
}
Then configure your job and write a mapToPair function. The value doesn't appear to be used. If anyone knows, please post a comment.
String tableName = "YourTableName";
String[] fields = new String[] { "column1", "column2" };

JobConf job = new JobConf();
DBConfiguration.configureDB(job, "com.mysql.jdbc.Driver", "jdbc:mysql://localhost/DatabaseNameHere", "username", "password");
DBOutputFormat.setOutput(job, tableName, fields);

// map your rdd into a table row
JavaPairRDD<TableRow, Object> rows = rdd.mapToPair(...);

rows.saveAsHadoopDataset(job);
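And here is a rough sketch of the second approach, foreachPartition over plain JDBC. It assumes a JavaRDD<TableRow> named tableRows and reuses the (assumed) table and column names from above; imports from java.sql, java.util and Spark's VoidFunction are omitted to match the style of the snippets here:
tableRows.foreachPartition(new VoidFunction<Iterator<TableRow>>() {
    public void call(Iterator<TableRow> partition) throws Exception {
        // one connection per partition keeps connection churn low
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/DatabaseNameHere", "username", "password");
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO YourTableName (column1, column2) VALUES (?, ?)");
        try {
            while (partition.hasNext()) {
                TableRow row = partition.next();
                ps.setString(1, row.column1);
                ps.setString(2, row.column2);
                ps.addBatch();
            }
            ps.executeBatch(); // flush the whole partition in one batch
        } finally {
            ps.close();
            conn.close();
        }
    }
});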

WcfFacility and "Sequence contains no elements" error?

I have a WCF library with service contracts and implementations.
[ServiceContract]
public interface IServiceProtoType
{
    [OperationContract]
    Response GetMessage(Request request);

    [OperationContract]
    String SayHello();
}

[DataContract]
public class Request
{
    private string name;

    [DataMember]
    public string Name
    {
        get { return name; }
        set { name = value; }
    }
}

[DataContract]
public class Response
{
    private string message;

    [DataMember]
    public string Message
    {
        get { return message; }
        set { message = value; }
    }
}

public class MyDemoService : IServiceProtoType
{
    public Response GetMessage(Request request)
    {
        var response = new Response();
        if (null == request)
        {
            response.Message = "Error!";
        }
        else
        {
            response.Message = "Hello, " + request.Name;
        }
        return response;
    }

    public string SayHello()
    {
        return "Hello, World!";
    }
}
I have a Windows service project that references this library, where MyService is just an empty shell that inherits ServiceBase. The service is installed and running under Local System.
static void Main()
{
    // resolve the ServiceBase registered below
    ServiceBase.Run(CreateContainer().Resolve<ServiceBase>());
}

private static IWindsorContainer CreateContainer()
{
    IWindsorContainer container = new WindsorContainer();
    container.Install(FromAssembly.This());
    return container;
}

public class ServiceInstaller : IWindsorInstaller
{
    #region IWindsorInstaller Members

    public void Install(IWindsorContainer container, Castle.MicroKernel.SubSystems.Configuration.IConfigurationStore store)
    {
        string myDir;
        if (string.IsNullOrEmpty(AppDomain.CurrentDomain.RelativeSearchPath))
        {
            myDir = AppDomain.CurrentDomain.BaseDirectory;
        }
        else
        {
            myDir = AppDomain.CurrentDomain.RelativeSearchPath;
        }

        var wcfLibPath = Path.Combine(myDir, "WcfDemo.dll");
        string baseUrl = "http://localhost:8731/DemoService/{0}";
        AssemblyName myAssembly = AssemblyName.GetAssemblyName(wcfLibPath);

        container
            .Register(
                AllTypes
                    .FromAssemblyNamed(myAssembly.Name)
                    .InSameNamespaceAs<WcfDemo.MyDemoService>()
                    .WithServiceDefaultInterfaces()
                    .Configure(c =>
                        c.Named(c.Implementation.Name)
                            .AsWcfService(
                                new DefaultServiceModel()
                                    .AddEndpoints(WcfEndpoint
                                        .BoundTo(new WSHttpBinding())
                                        .At(string.Format(baseUrl,
                                            c.Implementation.Name)
                                        )))),
                Component.For<ServiceBase>().ImplementedBy<MyService>());
    }

    #endregion
}
In the client console app I have the following code, and I am getting the following error:
{"Sequence contains no elements"}
static void Main(string[] args)
{
    IWindsorContainer container = new WindsorContainer();
    string baseUrl = "http://localhost:8731/DemoService/{0}";
    container.AddFacility<WcfFacility>(f => f.CloseTimeout = TimeSpan.Zero);
    container
        .Register(
            Types
                .FromAssemblyContaining<IServiceProtoType>()
                .InSameNamespaceAs<IServiceProtoType>()
                .Configure(
                    c => c.Named(c.Implementation.Name)
                        .AsWcfClient(new DefaultClientModel
                        {
                            Endpoint = WcfEndpoint
                                .BoundTo(new WSHttpBinding())
                                .At(string.Format(baseUrl,
                                    c.Name.Substring(1)))
                        })));

    var service1 = container.Resolve<IServiceProtoType>();
    Console.WriteLine(service1.SayHello());
    Console.ReadLine();
}
I have an idea what this may be, but you can stop reading now (and I apologize for wasting your time in advance) if the answer to the following is no:
Is one (or more) of Request, Response, or MyDemoService in the same namespace as IServiceProtoType?
I suspect that Windsor is getting confused about those, since you are doing...
Types
    .FromAssemblyContaining<IServiceProtoType>()
    .InSameNamespaceAs<IServiceProtoType>()
... and then configuring everything that this returns as a WCF client proxy. This means it will be trying to create proxies for things that should not be proxies, and hence the "Sequence contains no elements" exception (not the most useful message IMHO, but pressing on).
The simple fix would be just to put your IServiceProtoType into its own namespace (I often have a namespace like XXXX.Services for my service contracts).
If that is not acceptable to you, then you need to work out another way to identify just the service contracts - take a look at the If method, for example, or just good ol' Component.For perhaps.
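For instance, a minimal sketch of the Component.For route, registering just the one contract inside the client's Main (the address argument here is an assumption and must match whatever address the host actually registered):
container
    .Register(
        Component.For<IServiceProtoType>()
            .AsWcfClient(new DefaultClientModel
            {
                Endpoint = WcfEndpoint
                    .BoundTo(new WSHttpBinding())
                    .At(string.Format(baseUrl, "MyDemoService"))
            }));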

Ehcache hangs in test

I am in the process of rewriting a bottleneck in the code of the project I am on, and in doing so I am creating a top-level item that contains a self-populating Ehcache. I am attempting to write a test to make sure that the basic call chain is established, but when the test executes it hangs when retrieving the item from the cache.
Here are the setup and the test; for reference, mocking is done with Mockito:
@Before
public void SetUp() {
    testCache = new Cache(getTestCacheConfiguration());
    recordingFactory = new EntryCreationRecordingCache();
    service = new Service<Request, Response>(testCache, recordingFactory);
}

@Test
public void retrievesResultsFromSuppliedCache() {
    ResultType resultType = mock(ResultType.class);
    Response expectedResponse = mock(Response.class);
    addToExpectedResults(resultType, expectedResponse);

    Request request = mock(Request.class);
    when(request.getResultType()).thenReturn(resultType);

    assertThat(service.getResponse(request), sameInstance(expectedResponse));
    assertTrue(recordingFactory.requestList.contains(request));
}

private void addToExpectedResults(ResultType resultType, Response response) {
    recordingFactory.responseMap.put(resultType, response);
}

private CacheConfiguration getTestCacheConfiguration() {
    CacheConfiguration cacheConfiguration = new CacheConfiguration("TEST_CACHE", 10);
    cacheConfiguration.setLoggingEnabled(false);
    return cacheConfiguration;
}

private class EntryCreationRecordingCache extends ResponseFactory {

    public final Map<ResultType, Response> responseMap = new ConcurrentHashMap<ResultType, Response>();
    public final List<Request> requestList = new ArrayList<Request>();

    @Override
    protected Map<ResultType, Response> generateResponse(Request request) {
        requestList.add(request);
        return responseMap;
    }
}
Here is the Service class:
public class Service<K extends Request, V extends Response> {

    private Ehcache cache;

    public Service(Ehcache cache, ResponseFactory factory) {
        this.cache = new SelfPopulatingCache(cache, factory);
    }

    @SuppressWarnings("unchecked")
    public V getResponse(K request) {
        ResultType resultType = request.getResultType();
        Element cacheEntry = cache.get(request);
        V response = null;
        if (cacheEntry != null) {
            Map<ResultType, Response> resultTypeMap = (Map<ResultType, Response>) cacheEntry.getValue();
            try {
                response = (V) resultTypeMap.get(resultType);
            } catch (NullPointerException e) {
                throw new RuntimeException("Result type not found for Result Type: " + resultType);
            } catch (ClassCastException e) {
                throw new RuntimeException("Incorrect Response Type for Result Type: " + resultType);
            }
        }
        return response;
    }
}
And here is the ResponseFactory:
public abstract class ResponseFactory implements CacheEntryFactory{
#Override
public final Object createEntry(Object request) throws Exception {
return generateResponse((Request)request);
}
protected abstract Map<ResultType,Response> generateResponse(Request request);
}
After wrestling with it for a while, I discovered that the cache wasn't being initialized. Creating a CacheManager and adding the cache to it resolved the problem.
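In code, the fix can live in the test setup; a minimal sketch against the Ehcache 2.x API:
@Before
public void SetUp() {
    testCache = new Cache(getTestCacheConfiguration());
    // registering the cache with a CacheManager is what initialises it;
    // without this step the cache cannot serve get() calls
    CacheManager.create().addCache(testCache);
    recordingFactory = new EntryCreationRecordingCache();
    service = new Service<Request, Response>(testCache, recordingFactory);
}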
I also had a problem with EHCache hanging, although only in a hello-world example. Adding this to the end fixed it (the application ends normally).
CacheManager.getInstance().removeAllCaches();
https://stackoverflow.com/a/20731502/2736496

Trouble Passing Parameter to LinqToSql Stored Procedure

public IEnumerable<T> ExecuteStoredProcedure<T>(params object[] parameters)
{
    Type genericType = typeof(T);
    string commandthing = genericType.Name.Replace("Result", "");
    // _db is my Linq To Sql database
    return _db.ExecuteQuery<T>(commandthing, parameters).AsEnumerable();
}
The stored procedure is named GetOrder and has a single int parameter of orderid. I'm calling the above like so:
SqlParameter parm1 = new SqlParameter("@orderid", SqlDbType.Int);
parm1.Value = 123;
var results = _session.ExecuteStoredProcedure<GetOrderResult>(parm1).Single();
I'm receiving the following error: A query parameter cannot be of type 'System.Data.SqlClient.SqlParameter'
Thoughts? Or am I just missing something obvious?
Update: I'm trying to make this as generic as possible...my current thinking is that I'm going to have to do some string trickery to create the ExecuteQuery text and parameters.
Update: Posting below my ISession interface and my LINQ to SQL implementation of the interface; hopefully that will clarify what I'm attempting to do.
public interface ISession : IDisposable
{
    void CommitChanges();
    void Delete<T>(Expression<Func<T, bool>> expression) where T : class;
    void Delete<T>(T item) where T : class;
    void DeleteAll<T>() where T : class;
    T Single<T>(Expression<Func<T, bool>> expression) where T : class;
    IQueryable<T> All<T>() where T : class;
    void Add<T>(T item) where T : class;
    void Add<T>(IEnumerable<T> items) where T : class;
    void Update<T>(T item) where T : class;
    IEnumerable<T> ExecuteStoredProcedure<T>(params object[] parameters);
}
public class LinqToSqlSession : ISession
{
    public readonly Db _db;

    public LinqToSqlSession()
    {
        _db = new Db(ConfigurationManager.ConnectionStrings[Environment.MachineName].ConnectionString);
    }

    public void CommitChanges()
    {
        _db.SubmitChanges();
    }

    /// <summary>
    /// Gets the table provided by the type T and returns it for querying
    /// </summary>
    private Table<T> GetTable<T>() where T : class
    {
        return _db.GetTable<T>();
    }

    public void Delete<T>(Expression<Func<T, bool>> expression) where T : class
    {
        var query = All<T>().Where(expression);
        GetTable<T>().DeleteAllOnSubmit(query);
    }

    public void Delete<T>(T item) where T : class
    {
        GetTable<T>().DeleteOnSubmit(item);
    }

    public void DeleteAll<T>() where T : class
    {
        var query = All<T>();
        GetTable<T>().DeleteAllOnSubmit(query);
    }

    public void Dispose()
    {
        _db.Dispose();
    }

    public T Single<T>(Expression<Func<T, bool>> expression) where T : class
    {
        return GetTable<T>().SingleOrDefault(expression);
    }

    public IEnumerable<T> ExecuteStoredProcedure<T>(params object[] parameters)
    {
        Type genericType = typeof(T);
        string commandstring = genericType.Name.Replace("Result", "");
        // _db is my Linq To Sql database
        return _db.ExecuteQuery<T>(commandstring, parameters).AsEnumerable();
    }

    public IQueryable<T> All<T>() where T : class
    {
        return GetTable<T>().AsQueryable();
    }

    public void Add<T>(T item) where T : class
    {
        GetTable<T>().InsertOnSubmit(item);
    }

    public void Add<T>(IEnumerable<T> items) where T : class
    {
        GetTable<T>().InsertAllOnSubmit(items);
    }

    public void Update<T>(T item) where T : class
    {
        // nothing needed here
    }
}
That isn't how you're supposed to wire up Stored Procedures with Linq-to-SQL. You should extend the DataContext and use ExecuteMethodCall instead:
Taken from MSDN:
public partial class MyDataContext
{
    [Function()]
    public IEnumerable<Customer> CustomerById(
        [Parameter(Name = "CustomerID", DbType = "NChar(5)")]
        string customerID)
    {
        IExecuteResult result = this.ExecuteMethodCall(this,
            ((MethodInfo)(MethodInfo.GetCurrentMethod())),
            customerID);
        return (IEnumerable<Customer>)(result.ReturnValue);
    }
}
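Calling it then looks like any other method on the context; the connection string and customer values below are placeholders in the spirit of the Northwind example:
using (var context = new MyDataContext("connection string here"))
{
    foreach (Customer customer in context.CustomerById("ALFKI"))
    {
        Console.WriteLine(customer.CustomerID);
    }
}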
If you really must execute a sproc as a query (highly not recommended), then you have to preface the command with EXEC, and don't use SqlParameter either; the call would look like:
var results = context.ExecuteQuery<MyResult>("EXEC usp_MyProc {0}, {1}",
custID, custName);
(And I'll note, pre-emptively, that this is not a SQL injection vector because Linq to SQL turns the curly braces into a parameterized query.)
Read about how to call sprocs in LINQ to SQL here:
http://weblogs.asp.net/scottgu/archive/2007/08/16/linq-to-sql-part-6-retrieving-data-using-stored-procedures.aspx
I had the same problem. The following approach worked for me:
public interface IBusinessEntityRepository
{
    .......
    object CallStoredProcedure(string storedProcedureName, object[] parameters);
}
And the implementation in my LINQ to SQL GenericLinqRepository:
public object CallStoredProcedure(string storedProcedureName, object[] parameters)
{
    DataContext dataContext = GetCurrentDataContext();
    MethodInfo method = dataContext.GetType().GetMethod(storedProcedureName);
    return method.Invoke(dataContext, parameters);
}
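Usage then reduces to passing the generated method's name plus its parameters in order. The repository instance, the GetOrder method, and the ISingleResult cast below are assumptions based on what the LINQ to SQL designer typically generates for a sproc:
var result = (ISingleResult<GetOrderResult>)repository
    .CallStoredProcedure("GetOrder", new object[] { 123 });
GetOrderResult order = result.Single();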
I'm sure there is a better way to do this...but this is presently working:
public IEnumerable<T> ExecuteStoredProcedure<T>(params object[] parameters)
{
    Type genericType = typeof(T);

    StringBuilder sb = new StringBuilder();
    sb.Append("EXEC ");
    sb.Append(genericType.Name.Replace("Result", " "));
    for (int i = 0; i < parameters.Count(); i++)
    {
        sb.Append("{" + i.ToString() + "} ");
    }
    string commandstring = sb.ToString();

    return _db.ExecuteQuery<T>(commandstring, parameters);
}
It's a little bit brittle in that your parameters must be set up in the proper order, and it's probably offensive to some...but it does accomplish the goal.
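With this version the earlier call site no longer needs a SqlParameter at all; you pass the raw values in the order the sproc expects:
var results = _session.ExecuteStoredProcedure<GetOrderResult>(123).Single();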
You can use this instead:
new SqlParameter { ParameterName = "UserID", Value =txtuserid.Text }
This is equivalent in System.Data.SqlClient to:
SqlParameter[] param = new SqlParameter[2];
param[0] = new SqlParameter("@UserID", txtuserid.Text);