I don't want to use Symfony2's Doctrine. Instead, I want to write my own data classes to handle MySQL queries. Is there a way to execute SQL queries directly? Most articles on Google talk about Doctrine or MySQL+Doctrine.
If you don't want to use Doctrine ORM or even Doctrine DBAL, absolutely nothing stops you from using PDO/MySQLi directly.
Define a PDO instance as a DIC service:
<service id="pdo" class="PDO">
<argument>dns</argument>
<argument>user</argument>
<argument>password</argument>
<call method="setAttribute">
<argument>2</argument> <!-- use exception for error handling -->
</call>
</service>
Pass the PDO instance to each service that requires a database connection:
<service id="my.custom.service" class="My\Custom\Service">
<argument type="service" id="pdo" />
</serivce>
---
namespace My\Custom;

use PDO; // required: otherwise the type hint resolves to My\Custom\PDO

class Service {
    public function __construct(PDO $pdo) { /* store $pdo for later queries */ }
}
There's also a cookbook entry about using Doctrine's DBAL layer.
Is it possible to configure a non-static value for a meta-data field in the WildFly json-formatter?
I didn't find anything about it in the WildFly documentation; it only has a simple static-field example (meta-data=[#version=1]).
For example, I would like to have a field "simpleClassName" containing the class of the code that called the log method.
I also tried a syntax similar to the pattern-formatter (example below), but it doesn't work:
<formatter name="JSON">
<json-formatter>
<meta-data>
<property name="simpleClassName" value="%c{1}"/>
</meta-data>
</json-formatter>
</formatter>
No, the meta-data allows only static information. However, what you're looking for seems to be the details of the caller. Note that determining the caller is an expensive operation, so you should use it with caution. What you'd want to do is change print-details to true. In the CLI it would be something like:
/subsystem=logging/json-formatter=JSON:write-attribute(name=print-details, value=true)
I have two programs, one using OpenSplice 6.7.1 and the other using OpenDDS 3.10.
They both use RTPS as the protocol, the same domain ID, and the same destination port (I verified this with Wireshark).
The problem is that they are not communicating.
I don't know if I am doing something wrong with the config. I am using the basic RTPS config for OpenDDS, and for OpenSplice I used the provided ospl.xml after changing the domain ID.
Here are my config files.
For OpenDDS:
[common]
DCPSGlobalTransportConfig=$file
DCPSDefaultDiscovery=DEFAULT_RTPS
[transport/the_rtps_transport]
transport_type=rtps_udp
For OpenSplice:
<OpenSplice>
    <Domain>
        <Name>ospl_sp_ddsi</Name>
        <Id>223</Id>
        <SingleProcess>true</SingleProcess>
        <Description>Stand-alone 'single-process' deployment and standard DDSI networking.</Description>
        <Service name="ddsi2">
            <Command>ddsi2</Command>
        </Service>
        <Service name="durability">
            <Command>durability</Command>
        </Service>
        <Service name="cmsoap">
            <Command>cmsoap</Command>
        </Service>
    </Domain>
    <DDSI2Service name="ddsi2">
        <General>
            <NetworkInterfaceAddress>AUTO</NetworkInterfaceAddress>
            <AllowMulticast>true</AllowMulticast>
            <EnableMulticastLoopback>true</EnableMulticastLoopback>
            <CoexistWithNativeNetworking>false</CoexistWithNativeNetworking>
        </General>
        <Compatibility>
            <!-- see the release notes and/or the OpenSplice configurator on DDSI interoperability -->
            <StandardsConformance>lax</StandardsConformance>
            <!-- the following one is necessary only for TwinOaks CoreDX DDS compatibility -->
            <!-- <ExplicitlyPublishQosSetToDefault>true</ExplicitlyPublishQosSetToDefault> -->
        </Compatibility>
    </DDSI2Service>
    <DurabilityService name="durability">
        <Network>
            <Alignment>
                <TimeAlignment>false</TimeAlignment>
                <RequestCombinePeriod>
                    <Initial>2.5</Initial>
                    <Operational>0.1</Operational>
                </RequestCombinePeriod>
            </Alignment>
            <WaitForAttachment maxWaitCount="100">
                <ServiceName>ddsi2</ServiceName>
            </WaitForAttachment>
        </Network>
        <NameSpaces>
            <NameSpace name="defaultNamespace">
                <Partition>*</Partition>
            </NameSpace>
            <Policy alignee="Initial" aligner="true" durability="Durable" nameSpace="defaultNamespace"/>
        </NameSpaces>
    </DurabilityService>
    <TunerService name="cmsoap">
        <Server>
            <PortNr>Auto</PortNr>
        </Server>
    </TunerService>
</OpenSplice>
What am I doing wrong?
Multi-vendor interoperability has been demonstrated repeatedly at OMG events, but not recently, so maybe a regression has happened in one of the products.
Your OpenSplice configuration is a proper default configuration, apart from the domain ID, which should match the one used in your application (typically users pass DDS::DOMAIN_ID_DEFAULT to indicate they want the ID specified in the configuration pointed to by the OSPL_URI environment variable). I'm sure you are aware that the AUTO setting for the interface/IP address to use is a potential source of confusion on multi-homed machines.
The next step would be to look at the DDSI traces of both sides and/or Wireshark captures and see if you spot DDSI wire-frames from both vendors (vendor ID 1.2 for PrismTech, 1.3 for OCI).
If, for instance, there's no sign of vendor 1.3 in the OpenSplice DDSI traces, that suggests there are still some fundamental communication issues.
Note that at these OMG events we typically used the (for us, bundled) iShapes example on domain 0 with a module-less IDL topic-type specification to verify interoperability, so if it doesn't work for your application, that example is worth trying too (and check with Wireshark in combination with it as well).
I'll also keep watching the community forum for new information on this.
The Java Couchbase client allows connecting to several nodes in a cluster (in case one of them is not available).
Is this possible in Spring Data Couchbase?
I'm using Couchbase 2.1 and XML configuration for Spring.
Yes, you can configure Spring Data this way. When you configure the CouchbaseClient using the CouchbaseFactoryBean, it accepts a comma-delimited list of hosts. Here is an example of configuring the CouchbaseClient bean:
<couchbase:couchbase bucket="myBucket" password="" host="host1,host2,host3"/>
This assumes you are using the 1.4.x couchbase-client.jar dependency; as long as you are using spring-data 1.1.5, you are fine. You didn't specify your Spring Data dependencies, but more than likely you are good here.
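Under the hood this maps onto the 1.4.x client's bootstrap list. For comparison, here is a minimal sketch of connecting the plain CouchbaseClient to several nodes directly (the host names and bucket are placeholders):

import com.couchbase.client.CouchbaseClient;

import java.net.URI;
import java.util.Arrays;
import java.util.List;

public class MultiNodeBootstrap {
    public static void main(String[] args) throws Exception {
        // Every URI in the list is a bootstrap candidate; the client moves on
        // to the next node if the first one is unavailable.
        List<URI> nodes = Arrays.asList(
                URI.create("http://host1:8091/pools"),
                URI.create("http://host2:8091/pools"),
                URI.create("http://host3:8091/pools"));

        CouchbaseClient client = new CouchbaseClient(nodes, "myBucket", "");
        client.shutdown();
    }
}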
In Spring Data Couchbase 2.x and later, the only way to do this is via the cluster configuration. For example, here is a cluster with three servers and three buckets, each with its own user and password:
<couchbase:cluster id="cluster_info" env-ref="couchbaseEnv2">
    <couchbase:node>server1</couchbase:node>
    <couchbase:node>server2</couchbase:node>
    <couchbase:node>server3</couchbase:node>
</couchbase:cluster>
<couchbase:env id="couchbaseEnv2" connectTimeout="20000" computationPoolSize="10" />
<couchbase:clusterInfo cluster-ref="cluster_info" id="cluster1" login="user1" password="zzzzz1"/>
<couchbase:clusterInfo cluster-ref="cluster_info" id="cluster2" login="user2" password="zzzzz2"/>
<couchbase:clusterInfo cluster-ref="cluster_info" id="cluster3" login="user3" password="zzzzz3"/>
<couchbase:bucket id="bucket1" bucketName="user1" cluster-ref="cluster_info" bucketPassword="zzzzz1"/>
<couchbase:bucket id="bucket2" bucketName="user2" cluster-ref="cluster_info" bucketPassword="zzzzz2"/>
<couchbase:bucket id="bucket3" bucketName="user3" cluster-ref="cluster_info" bucketPassword="zzzzz3"/>
<couchbase:template id="couchBaseTemplate1" bucket-ref="bucket1" clusterInfo-ref="cluster1" />
<couchbase:template id="couchBaseTemplate2" bucket-ref="bucket2" clusterInfo-ref="cluster2" />
<couchbase:template id="couchBaseTemplate3" bucket-ref="bucket3" clusterInfo-ref="cluster3" />
I am using Spring Batch to write a CSV file to the database. This works just fine.
I am using a FlatFileItemReader and a custom ItemWriter. I am not using a processor.
The import takes quite some time, and in the UI you don't see any progress. I implemented a progress bar and have some global properties where I can store information (like the number of lines to read or the current import index).
My question is: how can I get the number of lines in the CSV?
Here's my XML:
<batch:job id="importPersonsJob" job-repository="jobRepository">
    <batch:step id="importPersonStep">
        <batch:tasklet transaction-manager="transactionManager">
            <batch:chunk reader="personItemReader"
                         writer="personItemWriter"
                         commit-interval="5"
                         skip-limit="10">
                <batch:skippable-exception-classes>
                    <batch:include class="java.lang.Throwable"/>
                </batch:skippable-exception-classes>
            </batch:chunk>
            <batch:listeners>
                <batch:listener ref="skipListener"/>
                <batch:listener ref="chunkListener"/>
            </batch:listeners>
        </batch:tasklet>
    </batch:step>
    <batch:listeners>
        <batch:listener ref="authenticationJobListener"/>
        <batch:listener ref="afterJobListener"/>
    </batch:listeners>
</batch:job>
I already tried to use the ItemReadListener interface, but that isn't possible either.
If you need to know how many lines were read, that information is available in Spring Batch itself:
take a look at StepExecution.
The method getReadCount() should give you the number you are looking for.
You need to add a step execution listener to your step in your XML configuration. To do that (copied from the Spring documentation):
<step id="step1">
<tasklet>
<chunk reader="reader" writer="writer" commit-interval="10"/>
<listeners>
<listener ref="chunkListener"/>
</listeners>
</tasklet>
</step>
where "chunkListner" is a bean of yours annotated with a method annotated with #AfterStep to tell spring batch to call it after your step.
You should take a look at the Spring Batch reference documentation on step configuration.
Hope that helps,
I'm new to Doctrine2 and would like to know how I can tell Doctrine which namespace my entities use.
My current configuration is as follows.
All my entities are in the namespace "project\entity".
So, every time I want to obtain the entity "Color", I have to write:
$em->getRepository("project\\entity\\Color")
How can I configure Doctrine to always use the namespace "project\entity"?
You can come close to what you want by using addEntityNamespace on your config object to create a namespace alias:
$em->getConfiguration()->addEntityNamespace('NS1', 'Project\Entity');
$colorRepo = $em->getRepository('NS1:Color');
The alias works in DQL queries as well, e.g. $em->createQuery('SELECT c FROM NS1:Color c').
By the way, "project\\entity\\Color" can also be written as 'project\entity\Color': in single-quoted strings, backslashes only need escaping before another backslash or a quote. I would also suggest capitalizing Project and Entity to conform to common naming standards.