Is there a way to export a WAS profile that has multiple nodes? exportWasprofile only works for base servers with a single node. The product version is WAS v6.1.
It depends on what you are trying to accomplish, but the manageprofiles.sh command-line utility and its -backupProfile and -restoreProfile options might do the trick. You can find it in the *app_server_root*/bin directory.
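For example, a backup on the source system and a restore elsewhere might look like this (the profile name and file paths below are placeholders; check the manageprofiles reference for your fix pack):

```
# Back up an existing profile (profile name and paths are examples)
app_server_root/bin/manageprofiles.sh -backupProfile \
    -profileName Dmgr01 -backupFile /tmp/Dmgr01_backup.zip

# Restore the profile from the backup file
app_server_root/bin/manageprofiles.sh -restoreProfile \
    -backupFile /tmp/Dmgr01_backup.zip
```

Note that for a cell with multiple nodes you may need to back up each node's profile (and the deployment manager's) separately.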
I have a project where I need to read XML files with Apache Drill and process them. Can someone tell me how I can configure this?
NB: I use the MapR distribution.
I tried to add the configuration in the configuration UI, but I get an error (see image).
Thanks in advance
You'll need to use a Drill distribution based on Apache Drill >= 1.19 for the XML format plugin.
So this is more of a Drill question than a MapR question.
There are two key steps here:
- Make sure that Drill can access whatever you use to store your data (it sounds like your data is XML files in MapR, which is now called HPE Ezmeral Data Fabric).
- Make sure that Drill can understand the data you have. I am not current on Drill, but reading many kinds of XML should be doable.
For getting access, there are two major paths to accessing files on Ezmeral Data Fabric. One path is to mount the data fabric as a conventional file system on all the nodes running Drillbits. This is often done using NFS mounts, but can also be done with the FUSE driver provided with the data fabric.
The other major approach to getting data access is to use the HDFS API framework to access data via maprfs://... path names. This requires installing the data fabric client on all of the nodes running Drillbits.
It sounds like you are running the version of Drill that is packaged with the old MapR or current HPE Ezmeral system. This is the easiest approach since the packaged version is integrated with the client libraries needed to use the HDFS API with maprfs:// resources (it also provides access to the tables and streams in the data fabric).
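As a rough sketch of the second step (assuming Drill 1.19+ and that the files are reachable through the dfs storage plugin; the attribute values and path below are placeholders), you add an xml entry to the storage plugin's "formats" section in the web UI:

```json
"xml": {
  "type": "xml",
  "extensions": ["xml"],
  "dataLevel": 1
}
```

and then query the files directly, for example:

```sql
SELECT * FROM dfs.`/data/example.xml`;
```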
I have a client who wants a control panel for the app I am developing for them. The control panel is a Mac OS X application that allows the user to submit files (Excel docs and such) to my MySQL database. Those files are then checked by the iOS app I have created for them.
I have no idea how to do this. I have the MySQL database all set up, and I have looked everywhere for a solution. Any help is appreciated.
I wouldn't try to connect to your MySQL database directly from your cell phone. It's a bad design for several reasons. Instead, build an API on the same server as the MySQL database. It doesn't matter if you do it in Java, PHP, C#, or anything else. You might even find some product or open source project that can do this automatically. I've listed some benefits of doing it this way:
- It makes testing easier. You can write a test framework against your API that doesn't rely on or use a phone.
- It makes development faster. You don't need to emulate or use a phone to develop and test your table design and queries.
- It gives you compatibility. When you need to change your database (and you will), you can create new APIs that the new version of the app uses, while an old version still out there can continue to use the old API (which you might have to modify so it still provides the same functionality).
- It gives you flexibility. If your user base grows and you need read replicas or sharded databases, you can build that into the API instead of into the app, which is just a better way to do it.
One option would be to use PHP to handle all the database interaction.
Host the scripts on the server and just have the apps call them and get the scripts to return some sort of parseable response (I'd go for JSON).
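A minimal sketch of such a script (the database name, credentials, and `documents` table below are hypothetical; error handling is kept to a minimum):

```php
<?php
// upload.php - accept a file upload and record it in MySQL, then respond with JSON.
// Database name, credentials, and table/columns are placeholders for illustration.
$pdo = new PDO(
    'mysql:host=localhost;dbname=appdata;charset=utf8mb4',
    'appuser',
    'secret',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);

if (!isset($_FILES['document'])) {
    http_response_code(400);
    echo json_encode(['error' => 'no file uploaded']);
    exit;
}

// Move the uploaded file somewhere permanent and insert a row describing it.
$name = basename($_FILES['document']['name']);
$dest = __DIR__ . '/uploads/' . uniqid('', true) . '_' . $name;
move_uploaded_file($_FILES['document']['tmp_name'], $dest);

$stmt = $pdo->prepare('INSERT INTO documents (filename, path, uploaded_at) VALUES (?, ?, NOW())');
$stmt->execute([$name, $dest]);

header('Content-Type: application/json');
echo json_encode(['id' => $pdo->lastInsertId(), 'filename' => $name]);
```

The Mac app then just POSTs the file to this URL and the iOS app fetches the results the same way, without either of them ever talking to MySQL directly.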
I have never found a suitable Objective-C based connector for MySQL. At this point I would suggest using a C/C++ connector. There are lots of examples of how to configure the connector for both C and C++. The hard part is that all of the data passed between the MySQL code and the Objective-C code will have to be in C types.
Back in the MSSQL 2000 timeline, there was an IIS integration layer that allowed HTTP GET commands to make SELECT statements, and there were other SqlXml niceties that worked (not that fast or well, but they worked) out of the box. It gave you a chance to expose database stuff fairly quickly.
What is the comparable technology for MSSQL 2008/2012? I saw SlashDB (http://www.slashdb.com/) and it seems to do that, but I am trying to understand the other options out there. Just SQL Server CRUD and sproc access.
Thanks.
Yes, SlashDB does exactly that and more. Full disclosure: I am the founder and CEO.
Once SlashDB is installed, you would use its web interface to connect it with your database. Depending on which database login and database schema you use for that connection, you will have the tables and views from that schema turned into URL endpoints.
Those URLs can be followed in the browser but they are also API endpoints in JSON, XML or CSV. It works for reading and writing (you can control that in user configuration).
In addition to that you can define a set of parameterized SQL queries. Each query is given a name and instantly becomes an API endpoint too.
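For example (the host, database, and table names here are made up; the exact URL pattern is covered in the SlashDB documentation), reading a table as JSON is just an HTTP GET:

```
# Hypothetical endpoint: the Customer table of a database connection named "sales"
curl -H "Accept: application/json" https://your-slashdb-host/db/sales/Customer.json
```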
To help you get started easily, SlashDB is available on the AWS and Azure marketplaces, as a Docker container from Docker Hub, as pre-built virtual machines, or as .rpm and .deb packages for installation directly on Linux.
For more technical info please visit: https://docs.slashdb.com
The nearest equivalent may be SOAP/HTTP endpoints; however, Microsoft has deprecated them for various reasons and recommends WCF or ASP.NET instead. That said, the simplest way to get a quick CRUD setup is probably to use a framework or ORM that generates it for you, like LINQ to SQL or whatever else suits your needs.
Is there a way to use a MySQL database without the database management system, i.e., to use the tables offline without installing the DB management system on the machine?
If there is, can you please point me in the right direction?
Thank you!
As far as I know, there is no way to do this.
However, there is a portable DBMS, SQLite. It comes in several forms and can be used on many platforms with different programming languages.
After reading your comment, I'm almost sure this is what you need.
It's not as fast as MySQL, I guess, but it works.
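As a small illustration (assuming PHP with the bundled SQLite3 extension; the file and table names are arbitrary), the whole database lives in a single file and no server process is needed:

```php
<?php
// SQLite keeps the entire database in one file; no server process is required.
$db = new SQLite3(__DIR__ . '/offline.db'); // creates the file if it does not exist
$db->exec('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)');
$db->exec("INSERT INTO notes (body) VALUES ('works completely offline')");

$result = $db->query('SELECT id, body FROM notes');
while ($row = $result->fetchArray(SQLITE3_ASSOC)) {
    echo $row['id'] . ': ' . $row['body'] . PHP_EOL;
}
```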
You can use the Embedded MySQL Server Library (libmysqld) to access MySQL data files without running the MySQL server.
You can set up a database to work on your localhost. This will be offline unless you set up the front-end stuff to let the internet interact with it.
What exactly do you mean by "without the database management system"? You always need a way of interacting with it, even if it is offline. (Otherwise, how can it work for you?)
The server-side piece of the application, mysql-server, is needed at a minimum to run MySQL. This server application comes with all the tools built in to manage the instance. I doubt you can prevent installation of this.
If you've actually opened the table files in a hex or text editor, you'll see that you definitely need the MySQL application installed to make any sense of them or to use them. Sure, the records are all there in plain text (.MYD files for MyISAM, the ibdata1 file for InnoDB tables), but it would be a complete time-waster to devise a custom app to parse or update the file structure, as well as to tie in the table structure contained in the related files for each table.
I'm working with another dev and together we're building out a MySQL database. We've each got our own local instance of MySQL 5.1 on our dev machines. We've not yet been able to identify a way to make a local schema change (e.g., add a field and some values for that field) and then export some kind of script or diff file that the other can import. I've looked into Toad's and Navicat's synchronization features, but they seem oriented toward synchronizing two instances, not an instance and an intermediate file. We thought MySQL Workbench would be great for this, but the synchronization feature just seems plain broken. Any other ideas? How do you collaborate with others on the schema?
First of all, put your final SQL schema into version control, so you'll always have a version of it with all changes. It can be a plain SQL file. Every developer on the team can use it as a starting point to create their copy of the database. All changes must be applied to it. This will help you find conflicts faster.
I also used such a file to create a test database and run unit tests after each commit, so we were always sure that the production code was working.
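One lightweight way to organize this (the file names here are just a convention, not a specific tool) is to keep the full schema plus small numbered change scripts in the repository, so each developer can apply only what they have not run yet:

```sql
-- changes/0001_add_last_login_to_users.sql
-- Committed together with an updated schema.sql; each developer applies
-- the change scripts they have not yet run against their local instance.
ALTER TABLE users ADD COLUMN last_login DATETIME NULL;
```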
Then you can use any migration tool to move changes between developers. Here is a similar question about this:
Mechanisms for tracking DB schema changes
If you're using PHP then look at Doctrine migrations.
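For a sense of what that looks like (assuming Doctrine Migrations 3.x; the class name, table, and column are placeholders), a migration is just a PHP class with up() and down() methods:

```php
<?php
// A hypothetical Doctrine migration: add a status column and allow rolling it back.
declare(strict_types=1);

use Doctrine\DBAL\Schema\Schema;
use Doctrine\Migrations\AbstractMigration;

final class Version20240101120000 extends AbstractMigration
{
    public function up(Schema $schema): void
    {
        $this->addSql("ALTER TABLE users ADD COLUMN status VARCHAR(20) NOT NULL DEFAULT 'active'");
    }

    public function down(Schema $schema): void
    {
        $this->addSql('ALTER TABLE users DROP COLUMN status');
    }
}
```

Each developer then runs the pending migrations against their local instance, and the migration files themselves live in version control alongside the schema.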